Georgios Kordopatis-Zilos, Ph.D.

All publications

Improving Synthetically Generated Image Detection in Cross-Concept Settings

  • Authors: Dogoulis, P., Kordopatis-Zilos, G., Kompatsiaris, I., Papadopoulos, S.
  • Publication: 2nd ACM International Workshop on Multimedia AI against Disinformation (MAD '23). New York: ACM, 2023. p. 28-35. ISBN 979-8-4007-0187-0.
  • Year: 2023
  • DOI: 10.1145/3592572.3592846
  • Link: https://doi.org/10.1145/3592572.3592846
  • Department: Visual Recognition Group
  • Annotation:
    New advances in the detection of synthetic images are critical for fighting disinformation, as the capabilities of generative AI models continuously evolve and can produce hyper-realistic synthetic imagery at unprecedented scale and speed. In this paper, we focus on the challenge of generalizing across concept classes, e.g., training a detector on human faces and testing it on synthetic animal images, a setting that highlights the ineffectiveness of existing approaches that randomly sample generated images to train their models. By contrast, we propose an approach based on the premise that detector robustness can be enhanced by training on realistic synthetic images selected by their quality scores according to a probabilistic quality estimation model. We demonstrate the effectiveness of the proposed approach through experiments with images generated by two seminal architectures, StyleGAN2 and Latent Diffusion, using three different concepts for each so as to measure cross-concept generalization. Our results show that our quality-based sampling method leads to higher detection performance for nearly all concepts, improving the overall effectiveness of synthetic image detectors.
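    A minimal sketch of this quality-based sampling idea is given below. It is illustrative only: quality_model stands in for any scorer returning one realism score per image, and the keep_fraction default is an assumption, not the paper's exact probabilistic estimator or threshold.

      import torch

      def select_training_images(images, quality_model, keep_fraction=0.5):
          """Rank generated images by a quality score and keep the top fraction."""
          with torch.no_grad():
              scores = quality_model(images).squeeze(-1)   # (N,) realism scores
          k = max(1, int(keep_fraction * images.size(0)))  # number of images to keep
          top = torch.topk(scores, k).indices              # indices of highest-quality images
          return images[top]                               # training subset for the detector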

Self-Supervised Video Similarity Learning

  • Authors: Kordopatis-Zilos, G., Tolias, G., Tzelepis, C., Kompatsiaris, I., Patras, I., Papadopoulos, S.
  • Publication: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). USA: IEEE Computer Society, 2023. p. 4756-4766. ISSN 2160-7516. ISBN 979-8-3503-0249-3.
  • Year: 2023
  • DOI: 10.1109/CVPRW59228.2023.00504
  • Link: https://doi.org/10.1109/CVPRW59228.2023.00504
  • Department: Visual Recognition Group
  • Annotation:
    We introduce S2VS, a video similarity learning approach based on self-supervision. Self-Supervised Learning (SSL) is typically used to train deep models on a proxy task so that they transfer well to target tasks after fine-tuning. Here, in contrast to prior work, SSL is used to perform video similarity learning and to address multiple retrieval and detection tasks at once, without any labeled data. This is achieved by learning via instance discrimination with task-tailored augmentations, using the widely adopted InfoNCE loss together with an additional loss that operates jointly on self-similarity and hard-negative similarity. We benchmark our method on tasks where video relevance is defined with varying granularity, ranging from video copies to videos depicting the same incident or event. We learn a single universal model that achieves state-of-the-art performance on all tasks, surpassing previously proposed methods that use labeled data. The code and pretrained models are publicly available at: https://github.com/gkordo/s2vs
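    The instance-discrimination objective mentioned above is the standard InfoNCE loss; a minimal sketch follows, assuming L2-normalized clip embeddings and an illustrative temperature of 0.07 (the paper's additional self-similarity/hard-negative loss is omitted here).

      import torch
      import torch.nn.functional as F

      def info_nce(anchor, positive, temperature=0.07):
          """anchor, positive: (N, D) L2-normalized embeddings of two
          augmented views of the same N videos."""
          logits = anchor @ positive.t() / temperature   # (N, N) pairwise similarities
          labels = torch.arange(anchor.size(0), device=anchor.device)  # diagonal entries are positives
          return F.cross_entropy(logits, labels)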

Test-time Training for Matching-based Video Object Segmentation

  • Department: Visual Recognition Group
  • Annotation:
    The video object segmentation (VOS) task involves segmenting an object over time from a single initial mask. Current state-of-the-art approaches maintain a memory of previously processed frames and rely on matching to estimate the segmentation masks of subsequent frames. Lacking any adaptation mechanism, such methods are prone to test-time distribution shifts. This work focuses on matching-based VOS under distribution shifts such as video corruptions, stylization, and sim-to-real transfer. We explore test-time training strategies that are agnostic to the specific task as well as strategies designed specifically for VOS, including a variant based on mask cycle consistency tailored to matching-based VOS methods. Experimental results on common benchmarks demonstrate that the proposed test-time training yields significant performance improvements. In particular, for the sim-to-real scenario, and despite using only a single test video, our approach recovers a substantial portion of the performance gain achieved by training on real videos. Additionally, we introduce DAVIS-C, an augmented version of the popular DAVIS test set featuring extreme distribution shifts such as image- and video-level corruptions and stylizations. Our results show that test-time training improves performance even in these challenging cases.
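    A rough sketch of a mask cycle-consistency objective of this kind is shown below. Here model.propagate is a hypothetical stand-in for the matching step of a VOS model that returns a soft mask in [0, 1], and the binary cross-entropy loss is an assumption rather than the paper's exact formulation.

      import torch.nn.functional as F

      def cycle_consistency_loss(model, frames, init_mask):
          """frames: (T, C, H, W) video clip; init_mask: (1, H, W) first-frame mask."""
          # Propagate the given mask from frame 0 to the last frame...
          forward_mask = model.propagate(frames[0], init_mask, frames[-1])
          # ...then propagate the prediction back to frame 0.
          cycled_mask = model.propagate(frames[-1], forward_mask, frames[0])
          # Penalize disagreement between the cycled mask and the given initial mask.
          return F.binary_cross_entropy(cycled_mask, init_mask.float())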
