
Georgios Kordopatis-Zilos, Ph.D.

All publications

AMES: Asymmetric and Memory-Efficient Similarity Estimation for Instance-Level Retrieval

  • DOI: 10.1007/978-3-031-73202-7_18
  • Link: https://doi.org/10.1007/978-3-031-73202-7_18
  • Department: Visual Recognition Group
  • Annotation:
    This work investigates the problem of instance-level image retrieval re-ranking under a memory-efficiency constraint, ultimately aiming to limit memory usage to 1KB per image. Departing from the prevalent focus on performance enhancements, this work prioritizes the crucial trade-off between performance and memory requirements. The proposed model uses a transformer-based architecture designed to estimate image-to-image similarity by capturing interactions within and across images based on their local descriptors. A distinctive property of the model is its capability for asymmetric similarity estimation: database images are represented with fewer descriptors than query images, enabling performance improvements without increasing memory consumption. To ensure adaptability across different applications, a universal model is introduced that adjusts to a varying number of local descriptors at test time. Results on standard benchmarks demonstrate the superiority of our approach over both hand-crafted and learned models. In particular, compared with current state-of-the-art methods that overlook their memory footprint, our approach not only attains superior performance but does so at a significantly lower memory cost. The code and pretrained models are publicly available at: https://github.com/pavelsuma/ames
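
  • Illustrative sketch:
    To make the asymmetric design concrete, the following is a minimal PyTorch sketch of a transformer that scores a query-database pair from local descriptors, with the database side holding fewer descriptors than the query side. It is a toy model under assumed dimensions and layer choices, not the AMES implementation; the authors' actual code is in the repository linked above.

        import torch
        import torch.nn as nn

        class ToyPairScorer(nn.Module):
            """Toy transformer scoring a (query, database) image pair from
            local descriptors; dimensions and layers are assumptions."""
            def __init__(self, dim=128, heads=4, layers=2):
                super().__init__()
                layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   batch_first=True)
                self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
                self.sim_token = nn.Parameter(torch.zeros(1, 1, dim))
                self.head = nn.Linear(dim, 1)

            def forward(self, q_desc, db_desc):
                # q_desc: (B, Nq, dim) query descriptors; db_desc: (B, Nd, dim)
                # with Nd < Nq; the asymmetry keeps database memory small.
                b = q_desc.size(0)
                tokens = torch.cat([self.sim_token.expand(b, -1, -1),
                                    q_desc, db_desc], dim=1)
                out = self.encoder(tokens)  # attention within and across images
                return self.head(out[:, 0]).squeeze(-1)  # one score per pair

        scorer = ToyPairScorer()
        q = torch.randn(2, 100, 128)   # query kept at full detail (100 descriptors)
        db = torch.randn(2, 25, 128)   # database stored with only 25 descriptors
        print(scorer(q, db).shape)     # torch.Size([2])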

MAD '24 Workshop: Multimedia AI against Disinformation

  • Authors: Stanciu, C., Ionescu, B., Cuccovillo, L., Papadopoulos, S., Georgios Kordopatis-Zilos, Ph.D.
  • Publication: 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD '24). New York: ACM, 2024. p. 1339-1341. ISBN 979-8-4007-0602-8.
  • Year: 2024
  • DOI: 10.1145/3652583.3660000
  • Link: https://doi.org/10.1145/3652583.3660000
  • Department: Visual Recognition Group
  • Annotation:
    Synthetic media generation and manipulation have seen rapid advancements in recent years, making it increasingly easy to create multimedia content that is indistinguishable from authentic content to the human observer. Moreover, generated content can be used maliciously by individuals and organizations to spread disinformation, posing a significant threat to society and democracy. Hence, there is an urgent need for AI tools geared towards facilitating a timely and effective media verification process. The MAD '24 workshop seeks to bring together people with diverse backgrounds who are dedicated to combating disinformation in multimedia by means of AI, fostering an environment for exploring innovative ideas and sharing experiences. The research areas of interest encompass the identification of manipulated or generated content, along with the investigation of the dissemination of disinformation and its societal repercussions. Recognizing the significance of multimedia, the workshop emphasizes the joint analysis of the various modalities within content, as verification can be improved by aggregating multiple forms of content.

Improving Synthetically Generated Image Detection in Cross-Concept Settings

  • Authors: Dogoulis, P., Georgios Kordopatis-Zilos, Ph.D., Kompatsiaris, I., Papadopoulos, S.
  • Publication: 2nd ACM International Workshop on Multimedia AI against Disinformation (MAD '23). New York: ACM, 2023. p. 28-35. ISBN 979-8-4007-0178-8.
  • Year: 2023
  • DOI: 10.1145/3592572.3592846
  • Link: https://doi.org/10.1145/3592572.3592846
  • Department: Visual Recognition Group
  • Annotation:
    New advancements in the detection of synthetic images are critical for fighting disinformation, as the capabilities of generative AI models continuously evolve and can lead to hyper-realistic synthetic imagery at unprecedented scale and speed. In this paper, we focus on the challenge of generalizing across different concept classes, e.g., training a detector on human faces and testing it on synthetic animal images, a setting that highlights the ineffectiveness of existing approaches that randomly sample generated images to train their models. By contrast, we propose an approach based on the premise that the robustness of the detector can be enhanced by training it on realistic synthetic images selected according to their quality scores from a probabilistic quality estimation model. We demonstrate the effectiveness of the proposed approach by conducting experiments with images generated by two seminal architectures, StyleGAN2 and Latent Diffusion, using three different concepts for each so as to measure cross-concept generalization ability. Our results show that our quality-based sampling method leads to higher detection performance for nearly all concepts, improving the overall effectiveness of the synthetic image detectors.
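
  • Illustrative sketch:
    The core sampling idea can be sketched in a few lines: rank the generated images by a quality score and keep only the most realistic ones for detector training. The score values, keep ratio, and file names below are placeholders, not the paper's probabilistic quality estimator or data.

        import numpy as np

        def select_training_images(images, quality_scores, keep_ratio=0.5):
            """Keep the top `keep_ratio` fraction of generated images by quality."""
            scores = np.asarray(quality_scores)
            k = max(1, int(len(images) * keep_ratio))
            top_idx = np.argsort(scores)[::-1][:k]   # highest-quality first
            return [images[i] for i in top_idx]

        # Usage: suppose `gen_images` are StyleGAN2 / Latent Diffusion outputs
        # and `scores` come from some quality estimation model (dummy values here).
        gen_images = [f"img_{i}.png" for i in range(8)]
        scores = [0.9, 0.2, 0.7, 0.4, 0.95, 0.1, 0.6, 0.3]
        print(select_training_images(gen_images, scores, keep_ratio=0.5))
        # ['img_4.png', 'img_0.png', 'img_2.png', 'img_6.png']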

MAD '23 Workshop: Multimedia AI against Disinformation

  • Authors: Cuccovillo, L., Ionescu, B., Georgios Kordopatis-Zilos, Ph.D., Papadopoulos, S.
  • Publication: 2nd ACM International Workshop on Multimedia AI against Disinformation (MAD '23). New York: ACM, 2023. p. 676-677. ISBN 979-8-4007-0178-8.
  • Year: 2023
  • DOI: 10.1145/3591106.3592303
  • Link: https://doi.org/10.1145/3591106.3592303
  • Department: Visual Recognition Group
  • Annotation:
    With recent advancements in synthetic media manipulation and generation, verifying multimedia content posted online has become increasingly difficult. Additionally, the exploitation of AI technologies by malicious actors to disseminate disinformation on social media, and more generally the Web, at an alarming pace poses significant threats to society and democracy. Therefore, the development of AI-powered tools that facilitate media verification is urgently needed. The MAD '23 workshop aims to bring together individuals working on the wider topic of detecting disinformation in multimedia to exchange their experiences and discuss innovative ideas, attracting people with varying backgrounds and expertise. The research areas of interest include identifying manipulated and synthetic content in multimedia, as well as examining the dissemination of disinformation and its impact on society. The multimedia aspect is particularly important, since content most often contains a mix of modalities whose joint analysis can boost the performance of verification methods.

Self-Supervised Video Similarity Learning

  • Authors: Georgios Kordopatis-Zilos, Ph.D., doc. Georgios Tolias, Ph.D., Tzelepis, C., Kompatsiaris, I., Patras, I., Papadopoulos, S.
  • Publication: Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). USA: IEEE Computer Society, 2023. p. 4756-4766. ISSN 2160-7516. ISBN 979-8-3503-0249-3.
  • Year: 2023
  • DOI: 10.1109/CVPRW59228.2023.00504
  • Link: https://doi.org/10.1109/CVPRW59228.2023.00504
  • Department: Visual Recognition Group
  • Annotation:
    We introduce S2VS, a video similarity learning approach with self-supervision. Self-Supervised Learning (SSL) is typically used to train deep models on a proxy task so as to have strong transferability to target tasks after fine-tuning. Here, in contrast to prior work, SSL is used to perform video similarity learning and to address multiple retrieval and detection tasks at once without any labeled data. This is achieved via instance discrimination with task-tailored augmentations, using the widely adopted InfoNCE loss together with an additional loss operating jointly on self-similarity and hard-negative similarity. We benchmark our method on tasks where video relevance is defined with varying granularity, ranging from video copies to videos depicting the same incident or event. We learn a single universal model that achieves state-of-the-art performance on all tasks, surpassing previously proposed methods that use labeled data. The code and pretrained models are publicly available at: https://github.com/gkordo/s2vs
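
  • Illustrative sketch:
    The InfoNCE objective at the heart of the instance-discrimination training can be written compactly: two augmented views of the same video form a positive pair, while the other videos in the batch serve as negatives. This is a generic PyTorch illustration, not the S2VS training code (see the repository linked above); the temperature value and embedding sizes are assumptions.

        import torch
        import torch.nn.functional as F

        def info_nce(z1, z2, temperature=0.07):
            # z1, z2: (B, D) embeddings of two augmented views of the same B videos
            z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
            logits = z1 @ z2.t() / temperature       # (B, B) pairwise similarities
            targets = torch.arange(z1.size(0), device=z1.device)
            return F.cross_entropy(logits, targets)  # diagonal entries are positives

        z1, z2 = torch.randn(16, 256), torch.randn(16, 256)
        print(info_nce(z1, z2))  # scalar loss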

Test-time Training for Matching-based Video Object Segmentation

  • Department: Visual Recognition Group
  • Annotation:
    The video object segmentation (VOS) task involves segmenting an object over time based on a single initial mask. Current state-of-the-art approaches keep a memory of previously processed frames and rely on matching to estimate the segmentation masks of subsequent frames. Lacking any adaptation mechanism, such methods are prone to test-time distribution shifts. This work focuses on matching-based VOS under distribution shifts such as video corruptions, stylization, and sim-to-real transfer. We explore test-time training strategies that are agnostic to the specific task as well as strategies designed specifically for VOS, including a variant based on mask cycle consistency tailored to matching-based VOS methods. Experimental results on common benchmarks demonstrate that the proposed test-time training yields significant performance improvements. In particular, for the sim-to-real scenario, and despite using only a single test video, our approach recovers a substantial portion of the performance gain achieved through training on real videos. Additionally, we introduce DAVIS-C, an augmented version of the popular DAVIS test set featuring extreme distribution shifts such as image-/video-level corruptions and stylizations. Our results illustrate that test-time training enhances performance even in these challenging cases.
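
  • Illustrative sketch:
    A generic test-time training loop looks as follows: before segmenting a test video, the model is briefly fine-tuned on that video alone with a self-supervised objective. The placeholder loss below merely stands in for the mask cycle-consistency criterion; the model, objective, and hyperparameters are assumptions, not the paper's exact recipe.

        import torch
        import torch.nn as nn

        def test_time_train(model, frames, self_sup_loss, steps=10, lr=1e-5):
            """Briefly fine-tune `model` on a single test video with a
            self-supervised objective before running inference on it."""
            model.train()
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            for _ in range(steps):
                loss = self_sup_loss(model, frames)  # e.g. a cycle-consistency term
                opt.zero_grad()
                loss.backward()
                opt.step()
            model.eval()
            return model

        # Dummy stand-ins to make the loop runnable; a real VOS model and a real
        # mask cycle-consistency loss would replace these.
        model = nn.Conv2d(3, 1, kernel_size=3, padding=1)
        frames = torch.randn(8, 3, 64, 64)            # one short test video
        dummy_loss = lambda m, f: m(f).pow(2).mean()  # placeholder objective
        test_time_train(model, frames, dummy_loss)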
