Ing. Adéla Šubrtová

All publications

ChunkyGAN: Real Image Inversion via Segments

  • DOI: 10.1007/978-3-031-20050-2_12
  • Link: https://doi.org/10.1007/978-3-031-20050-2_12
  • Department: Department of Cybernetics, Department of Computer Graphics and Interaction, Visual Recognition Group
  • Annotation:
    We present ChunkyGAN, a novel paradigm for modeling and editing images using generative adversarial networks. Unlike previous techniques that seek a global latent representation of the input image, our approach subdivides the input image into a set of smaller components (chunks), specified either manually or automatically using a pre-trained segmentation network. For each chunk, the latent code of a generative network is estimated locally and with greater accuracy, since it is subject to fewer constraints. Moreover, during the optimization of the latent codes, the segmentation can be further refined to improve matching quality. This process enables high-quality projection of the original image with a degree of spatial disentanglement that previous methods would find challenging to achieve. To demonstrate the advantage of our approach, we evaluate it both quantitatively and qualitatively in various image editing scenarios that benefit from the higher reconstruction quality and the local nature of the approach. Our method is flexible enough to manipulate even out-of-domain images that would be hard to reconstruct using global techniques.
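
    A minimal sketch of the per-chunk inversion described above is given below. It assumes a pretrained generator G that maps a latent code to an image and exposes a latent dimension G.w_dim, binary chunk masks produced by a segmentation network, and a plain masked L2 reconstruction loss; these names and the simplified loss are illustrative assumptions, not the authors' released code.

        # Per-chunk latent optimization (illustrative sketch, not the authors' code).
        # Assumes: G(w) -> image of shape (1, 3, H, W); G.w_dim is the latent size;
        # masks are binary chunk masks of shape (1, 1, H, W) from a segmentation network.
        import torch

        def invert_by_chunks(G, target, masks, num_steps=500, lr=0.05):
            # One latent code per chunk; zero-initialized here for simplicity
            # (a mean latent would be a more typical starting point).
            latents = [torch.zeros(1, G.w_dim, requires_grad=True) for _ in masks]
            opt = torch.optim.Adam(latents, lr=lr)

            for _ in range(num_steps):
                opt.zero_grad()
                loss = 0.0
                for w, mask in zip(latents, masks):
                    recon = G(w)
                    # Penalize the reconstruction only inside this chunk's mask, so each
                    # latent code is constrained by fewer pixels and can fit its region
                    # more accurately than a single global code could.
                    loss = loss + ((recon - target) ** 2 * mask).sum() / mask.sum()
                loss.backward()
                opt.step()

            # Composite the final reconstruction from the per-chunk generations.
            out = torch.zeros_like(target)
            for w, mask in zip(latents, masks):
                out = out + G(w).detach() * mask
            return latents, out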

Hairstyle Transfer between Face Images

  • DOI: 10.1109/FG52635.2021.9667038
  • Link: https://doi.org/10.1109/FG52635.2021.9667038
  • Department: Visual Recognition Group, Machine Learning
  • Annotation:
    We propose a neural network that takes two inputs, a hair image and a face image, and produces an output image in which the hair of the hair image is seamlessly merged with the inner face of the face image. Our architecture consists of neural networks mapping the input images to a latent code of a pretrained StyleGAN2, which generates the high-definition output image. We propose an algorithm for training the parameters of the architecture solely from synthetic images generated by StyleGAN2 itself, without the need for any annotations or an external dataset of hairstyle images. We empirically demonstrate the effectiveness of our method in applications including hairstyle transfer, hair generation for 3D morphable models, and hairstyle interpolation. The fidelity of the generated images is verified by a user study and by a novel hairstyle metric proposed in the paper.
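
    The two-encoder architecture described above could look roughly like the following sketch. The encoder bodies, the fused W+ code of shape (n_layers, w_dim), and the generator call stylegan2(w_plus) are illustrative assumptions about the interface, not the authors' implementation.

        # Two encoders producing one latent code for a frozen, pretrained StyleGAN2
        # (illustrative sketch; module shapes and the generator interface are assumptions).
        import torch
        import torch.nn as nn

        class HairFaceNet(nn.Module):
            def __init__(self, stylegan2, w_dim=512, n_layers=18):
                super().__init__()
                self.w_dim, self.n_layers = w_dim, n_layers

                def encoder():
                    # Placeholder image encoder; the real encoders would be deeper.
                    return nn.Sequential(
                        nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(64, w_dim))

                self.hair_enc = encoder()   # encodes the hair image
                self.face_enc = encoder()   # encodes the inner face of the face image
                self.fuse = nn.Linear(2 * w_dim, n_layers * w_dim)  # combine into a W+ code
                self.G = stylegan2          # pretrained generator, frozen during training
                for p in self.G.parameters():
                    p.requires_grad = False

            def forward(self, hair_img, face_img):
                h = self.hair_enc(hair_img)
                f = self.face_enc(face_img)
                w_plus = self.fuse(torch.cat([h, f], dim=1))
                w_plus = w_plus.view(-1, self.n_layers, self.w_dim)
                return self.G(w_plus)       # output image combining the face with the hair

    As the annotation states, such a network can be trained purely on synthetic images sampled from StyleGAN2 itself, so no annotated hairstyle dataset is required.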