People

prof. Ing. Daniel Sýkora, Ph.D.

All publications

ChunkyGAN: Real Image Inversion via Segments

  • DOI: 10.1007/978-3-031-20050-2_12
  • Link: https://doi.org/10.1007/978-3-031-20050-2_12
  • Department: Department of Cybernetics, Department of Computer Graphics and Interaction, Visual Recognition Group
  • Abstract:
    We present ChunkyGAN—a novel paradigm for modeling and editing images using generative adversarial networks. Unlike previous techniques seeking a global latent representation of the input image, our approach subdivides the input image into a set of smaller components (chunks) specified either manually or automatically using a pre-trained segmentation network. For each chunk, the latent code of a generative network is estimated locally with greater accuracy thanks to a smaller number of constraints. Moreover, during the optimization of latent codes, segmentation can further be refined to improve matching quality. This process enables high-quality projection of the original image with spatial disentanglement that previous methods would find challenging to achieve. To demonstrate the advantage of our approach, we evaluated it both quantitatively and qualitatively in various image editing scenarios that benefit from the higher reconstruction quality and local nature of the approach. Our method is flexible enough to manipulate even out-of-domain images that would be hard to reconstruct using global techniques.
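
The chunk-wise estimation described above can be illustrated with a toy sketch in which a linear map stands in for the generator; the function names, the gradient-descent settings, and the linear model are our own assumptions, not the paper's implementation:

```python
import numpy as np

def invert_by_chunks(target, masks, W, steps=200, lr=0.1):
    """Toy stand-in for per-segment GAN inversion: for each segment mask,
    fit an independent latent code z_k so that a (linear) generator W @ z_k
    reproduces the target pixels inside that segment, then composite."""
    n_pix, n_lat = W.shape
    recon = np.zeros_like(target)
    codes = []
    for mask in masks:
        z = np.zeros(n_lat)
        m = mask.astype(float)
        for _ in range(steps):
            # gradient of 0.5 * || m * (W z - target) ||^2 w.r.t. z;
            # pixels outside the segment impose no constraint on this code
            residual = m * (W @ z - target)
            z -= lr * (W.T @ residual)
        codes.append(z)
        recon = np.where(mask, W @ z, recon)
    return recon, codes
```

Because each code only has to explain the pixels under its own mask, the per-segment fits are less constrained than a single global inversion, which is the core of the argument above.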

StyleBin: Stylizing Video by Example in Stereo

  • Authors: Kučera, M., Mould, D., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: SIGGRAPH Asia 2022 Conference Papers. New York: ACM SIGGRAPH, 2022. ISBN 9781450394703.
  • Year: 2022
  • DOI: 10.1145/3550469.3555420
  • Link: https://doi.org/10.1145/3550469.3555420
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    In this paper we present StyleBin—an approach to example-based stylization of videos that can produce consistent binocular depiction of stylized content on stereoscopic displays. Given the target sequence and a set of stylized keyframes accompanied by information about depth in the scene, we formulate an optimization problem that converts the target video into a pair of stylized sequences, in which each frame consists of a set of seamlessly stitched patches taken from the original stylized keyframe. The aim of the optimization process is to align the individual patches so that they respect the semantics of the given target scene, while at the same time also following the prescribed local disparity in the corresponding viewpoints and being consistent in time. In contrast to previous depth-aware style transfer techniques, our approach is the first that can deliver semantically meaningful stylization and preserve essential visual characteristics of the given artistic media. We demonstrate the practical utility of the proposed method in various stylization use cases.

FaceBlit: Instant Real-time Example-based Style Transfer to Facial Videos

  • Authors: Texler, A., Texler, O., Kučera, M., Chai, M., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Proceedings of the ACM on Computer Graphics and Interactive Techniques. 2021, 4(1), ISSN 2577-6193.
  • Year: 2021
  • DOI: 10.1145/3451270
  • Link: https://doi.org/10.1145/3451270
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present FaceBlit—a system for real-time example-based face video stylization that retains textural details of the style in a semantically meaningful manner, i.e., strokes used to depict specific features in the style are present at the appropriate locations in the target image. Compared to previous techniques, our system preserves the identity of the target subject and runs in real time without the need for large datasets or a lengthy training phase. To achieve this, we modify the existing face stylization pipeline of Fišer et al. [2017] so that it can quickly generate a set of guiding channels that handle identity preservation of the target subject while still being compatible with a faster variant of the patch-based synthesis algorithm of Sýkora et al. [2019]. Thanks to these improvements, we demonstrate the first face stylization pipeline that can instantly transfer artistic style from a single portrait to the target video at interactive rates, even on mobile devices.

Fluidymation: Stylizing Animations Using Natural Dynamics of Artistic Media

  • DOI: 10.1111/cgf.14398
  • Link: https://doi.org/10.1111/cgf.14398
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present Fluidymation—a new example-based approach to stylizing animation that employs the natural dynamics of artistic media to convey a prescribed motion. In contrast to previous stylization techniques that transfer the hand-painted appearance of a static style exemplar and then try to enforce temporal coherence, we use moving exemplars that capture the artistic medium’s inherent dynamic properties, and transfer both movement and appearance to reproduce natural-looking transitions between individual animation frames. Our approach can synthetically generate stylized sequences that look as if actual paint is diffusing across a canvas in the direction and speed of the target motion.

STALP: Style Transfer With Auxiliary Limited Pairing

  • Authors: Futschik, D., Kučera, M., Lukáč, M., Wang, Z., Shechtman, E., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Computer Graphics Forum. 2021, 40(2), 563-573. ISSN 0167-7055.
  • Year: 2021
  • DOI: 10.1111/cgf.142655
  • Link: https://doi.org/10.1111/cgf.142655
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present an approach to example-based stylization of images that uses a single pair of a source image and its stylized counterpart. We demonstrate how to train an image translation network that can perform real-time semantically meaningful style transfer to a set of target images with content similar to that of the source image. A key added value of our approach is that it also considers the consistency of target images during training. Although those have no stylized counterparts, we constrain the translation to keep the statistics of neural responses compatible with those extracted from the stylized source. In contrast to concurrent techniques that use a similar input, our approach better preserves important visual characteristics of the source style and can deliver temporally stable results without the need to explicitly handle temporal consistency. We demonstrate its practical utility on various applications including video stylization, style transfer to panoramas, faces, and 3D models.
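
The constraint on "statistics of neural responses" is commonly expressed with Gram matrices of feature maps; below is a minimal sketch of such a statistics-matching loss. The paper's actual feature extractor and loss weighting are not specified here, so treat this purely as an illustration:

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height*width) feature map -- the
    second-order statistic used to compare neural responses of two images."""
    c, n = features.shape
    return features @ features.T / n

def style_statistics_loss(feats_stylized_source, feats_translated_target):
    """Mean squared difference of Gram matrices: small when the translated
    target keeps the same feature statistics as the stylized source, even
    though no pixel-wise stylized ground truth exists for the target."""
    g_src = gram(feats_stylized_source)
    g_tgt = gram(feats_translated_target)
    return float(np.mean((g_src - g_tgt) ** 2))
```

A loss of this shape lets targets without stylized counterparts still constrain the network, which is the role the "statistics of neural responses" play in the paragraph above.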

Arbitrary style transfer using neurally-guided patch-based synthesis

  • Authors: Texler, O., Futschik, D., Fišer, J., Lukáč, M., Lu, J., Shechtman, E., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Computers & Graphics. 2020, 87(1), 62-71. ISSN 0097-8493.
  • Year: 2020
  • DOI: 10.1016/j.cag.2020.01.002
  • Link: https://doi.org/10.1016/j.cag.2020.01.002
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a new approach to example-based style transfer combining neural methods with patch-based synthesis to achieve compelling stylization quality even for high-resolution imagery. We take advantage of neural techniques to provide adequate stylization at the global level and use their output as a prior for subsequent patch-based synthesis at the detail level. Thanks to this combination, our method keeps the high frequencies of the original artistic media better, thereby dramatically increasing the fidelity of the resulting stylized imagery. We show how to stylize extremely large images (e.g., 340 Mpix) without the need to run the synthesis at the pixel level, yet retaining the original high-frequency details. We demonstrate the power and generality of this approach on a novel stylization algorithm that delivers comparable visual quality to state-of-the-art neural style transfer while completely eschewing any purpose-trained stylization blocks and only using the response of a feature extractor as guidance for patch-based synthesis.
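
The coarse-to-fine combination can be sketched as guided nearest-neighbor patch synthesis: a low-resolution neural stylization, upsampled into a guidance image, steers which style patches are copied. This toy version uses non-overlapping tiles and brute-force search, whereas the actual method uses an optimized guided synthesis, so everything below is an assumption-laden illustration:

```python
import numpy as np

def guided_patch_synthesis(target_guide, source_guide, source_style, patch=2):
    """Each non-overlapping target tile copies appearance from the source
    tile whose guidance matches best, so high-frequency detail comes from
    the style exemplar rather than from the blurry neural prior."""
    h, w = target_guide.shape
    out = np.zeros_like(source_style[:h, :w])
    # gather all source (guide, style) tiles once
    tiles = []
    for y in range(0, source_guide.shape[0] - patch + 1, patch):
        for x in range(0, source_guide.shape[1] - patch + 1, patch):
            tiles.append((source_guide[y:y+patch, x:x+patch],
                          source_style[y:y+patch, x:x+patch]))
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tgt = target_guide[y:y+patch, x:x+patch]
            errs = [np.sum((g - tgt) ** 2) for g, _ in tiles]
            out[y:y+patch, x:x+patch] = tiles[int(np.argmin(errs))][1]
    return out
```

Because matching happens per tile against a guidance image, the synthesis never needs to run at the full pixel resolution of the neural model, which is the point of the hybrid design described above.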

Interactive style transfer to live video streams

  • Authors: Texler, O., Futschik, D., Kučera, M., Jamriška, O., Sochorová, Š., Chai, M., Tulyakov, S., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: SIGGRAPH '20: ACM SIGGRAPH 2020 Real-Time Live!. New York: ACM, 2020. ISBN 978-1-4503-8060-7.
  • Year: 2020
  • DOI: 10.1145/3407662.3407752
  • Link: https://doi.org/10.1145/3407662.3407752
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    Our tool allows artists to create living paintings or stylize a live video stream using their own artwork with minimal effort. While an artist is painting the image, our framework learns their artistic style on the fly and transfers it to the provided live video stream in real time.

Interactive Video Stylization Using Few-Shot Patch-Based Training

  • Authors: Texler, O., Futschik, D., Kučera, M., Jamriška, O., Sochorová, Š., Chai, M., Tulyakov, S., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: ACM Transactions on Graphics (TOG). 2020, 39(4), ISSN 0730-0301.
  • Year: 2020
  • DOI: 10.1145/3386569.3392453
  • Link: https://doi.org/10.1145/3386569.3392453
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a learning-based method for keyframe-based video stylization that allows an artist to propagate the style from a few selected keyframes to the rest of the sequence. Its key advantage is that the resulting stylization is semantically meaningful, i.e., specific parts of moving objects are stylized according to the artist’s intention. In contrast to previous style transfer techniques, our approach does not require any lengthy pre-training process nor a large training dataset. We demonstrate how to train an appearance translation network from scratch using only a few stylized exemplars while implicitly preserving temporal consistency. This leads to a video stylization framework that supports real-time inference, parallel processing, and random access to an arbitrary output frame. It can also merge the content from multiple keyframes without the need to perform an explicit blending operation. We demonstrate its practical utility in various interactive scenarios, where the user paints over a selected keyframe and sees her style transferred to an existing recorded sequence or a live video stream.
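
The few-shot training scheme, sampling co-located patch pairs from a keyframe and its stylized counterpart, can be sketched as follows; a least-squares linear map stands in for the appearance translation network, which is our simplification, not the paper's architecture:

```python
import numpy as np

def sample_patch_pairs(frame, stylized, n, patch, rng):
    """Randomly crop co-located patch pairs from one keyframe and its
    stylized counterpart -- the few-shot training set of this scheme."""
    h, w = frame.shape
    xs, ys = [], []
    for _ in range(n):
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        xs.append(frame[y:y+patch, x:x+patch].ravel())
        ys.append(stylized[y:y+patch, x:x+patch].ravel())
    return np.array(xs), np.array(ys)

def fit_patch_translator(frame, stylized, n=256, patch=3, seed=0):
    """Least-squares stand-in for the appearance translation network: a
    linear map from input patches to stylized patches. Training on random
    patches from a few keyframes is what keeps training short and makes
    inference independent per frame (enabling parallel / random access)."""
    rng = np.random.default_rng(seed)
    x, y = sample_patch_pairs(frame, stylized, n, patch, rng)
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w
```

Because the model only ever sees local patches, it can be evaluated on any frame independently, mirroring the random-access property claimed above.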

Monster Mash: A Single-View Approach to Casual 3D Modeling and Animation

  • Authors: Dvorožňák, M., prof. Ing. Daniel Sýkora, Ph.D., Curtis, C., Curless, B., Sorkine-Hornung, O., Salesin, D.
  • Publication: ACM Transactions on Graphics (TOG). 2020, 39(6), ISSN 0730-0301.
  • Year: 2020
  • DOI: 10.1145/3414685.3417805
  • Link: https://doi.org/10.1145/3414685.3417805
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a new framework for sketch-based modeling and animation of 3D organic shapes that can work entirely in an intuitive 2D domain, enabling a playful, casual experience. Unlike previous sketch-based tools, our approach does not require a tedious part-based multi-view workflow with the explicit specification of an animation rig. Instead, we combine 3D inflation with a novel rigidity-preserving, layered deformation model, ARAP-L, to produce a smooth 3D mesh that is immediately ready for animation. Moreover, the resulting model can be animated from a single viewpoint - and without the need to handle unwanted inter-penetrations, as required by previous approaches. We demonstrate the benefit of our approach on a variety of examples produced by inexperienced users as well as professional animators. For less experienced users, our single-view approach offers a simpler modeling and animating experience than working in a 3D environment, while for professionals, it offers a quick and casual workspace for ideation.
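
The rigidity-preserving deformation rests on fitting a best rotation per local cell, the standard building block of as-rigid-as-possible (ARAP) energies, computed via SVD (Kabsch/Procrustes). The sketch below shows only this generic primitive, not the paper's layered ARAP-L formulation:

```python
import numpy as np

def best_fit_rotation(p, q):
    """Kabsch/Procrustes: rotation R minimizing sum ||R p_i - q_i||^2
    over centered point sets -- the local-rotation fit at the heart of
    rigidity-preserving (ARAP-style) deformation energies."""
    pc = p - p.mean(axis=0)
    qc = q - q.mean(axis=0)
    h = pc.T @ qc                      # cross-covariance of the two sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    s = np.diag([1.0] * (h.shape[0] - 1) + [d])  # reflection guard, det(R)=+1
    return vt.T @ s @ u.T

def arap_cell_energy(p, q):
    """Residual rigidity energy of one cell after removing its best rotation."""
    r = best_fit_rotation(p, q)
    pc = p - p.mean(axis=0)
    qc = q - q.mean(axis=0)
    return float(np.sum((pc @ r.T - qc) ** 2))
```

An ARAP solver alternates this local rotation fit with a global positional solve; a deformation is "rigid" for a cell exactly when its residual energy is zero.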

StyleProp: Real-time Example-based Stylization of 3D Models

  • Authors: Hauptfleisch, F., Texler, O., Texler, A., Křivánek, J., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Computer Graphics Forum. 2020, 39(7), 575-586. ISSN 0167-7055.
  • Year: 2020
  • DOI: 10.1111/cgf.14169
  • Link: https://doi.org/10.1111/cgf.14169
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a novel approach to the real-time non-photorealistic rendering of 3D models in which a single hand-drawn exemplar specifies its appearance. We employ guided patch-based synthesis to achieve high visual quality as well as temporal coherence. However, unlike previous techniques that maintain consistency in one dimension (temporal domain), in our approach, multiple dimensions are taken into account to cover all degrees of freedom given by the available space of interactions (e.g., camera rotations). To enable interactive experience, we precalculate a sparse latent representation of the entire interaction space, which allows rendering of a stylized image in real-time, even on a mobile device. To the best of our knowledge, the proposed system is the first that enables interactive example-based stylization of 3D models with full temporal coherence in predefined interaction space.

Building anatomically realistic jaw kinematics model from data

  • Authors: Yang, W., Marshak, N., prof. Ing. Daniel Sýkora, Ph.D., Ramalingam, S., Kavan, L.
  • Publication: The Visual Computer. 2019, 35(6-8), 1105-1118. ISSN 0178-2789.
  • Year: 2019
  • DOI: 10.1007/s00371-019-01677-8
  • Link: https://doi.org/10.1007/s00371-019-01677-8
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    Recent work on anatomical face modeling focuses mainly on facial muscles and their activation. This paper considers a different aspect of anatomical face modeling: kinematic modeling of the jaw, i.e., the temporomandibular joint (TMJ). Previous work often relies on simple models of jaw kinematics, even though the actual physiological behavior of the TMJ is quite complex, allowing not only for mouth opening, but also for some amount of sideways (lateral) and front-to-back (protrusion) motions. Fortuitously, the TMJ is the only joint whose kinematics can be accurately measured with optical methods, because the bones of the lower and upper jaw are rigidly connected to the lower and upper teeth. We construct a person-specific jaw kinematic model by asking an actor to exercise the entire range of motion of the jaw while keeping the lips open so that the teeth are at least partially visible. This performance is recorded with three calibrated cameras. We obtain highly accurate 3D models of the teeth with a standard dental scanner and use these models to reconstruct the rigid body trajectories of the teeth from the videos (markerless tracking). The relative rigid transformation samples between the lower and upper teeth are mapped to the Lie algebra of rigid body motions in order to linearize the rotational motion. Our main contribution is to fit these samples with a three-dimensional nonlinear model parameterizing the entire range of motion of the TMJ. We show that standard principal component analysis (PCA) fails to capture the nonlinear trajectories of the moving mandible. However, we found that these nonlinearities can be captured with a special modification of autoencoder neural networks known as nonlinear PCA. By mapping back to the Lie group of rigid transformations, we obtain a parametrization of the jaw kinematics which provides an intuitive interface allowing the animators to explore realistic jaw motions in a user-friendly way.
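
The linearization step described above, mapping rigid-transform samples to the Lie algebra before dimensionality reduction, can be sketched as follows. The nonlinear-PCA autoencoder itself is omitted; linear PCA is shown only as the baseline the paper compares against:

```python
import numpy as np

def so3_log(R):
    """Log map of a rotation matrix to its axis-angle vector in so(3)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def linearize_motion_samples(rotations, translations):
    """Map rigid-transform samples (R_i, t_i) to 6-D vectors [log(R_i), t_i].
    PCA (or a nonlinear-PCA autoencoder, as in the paper) then operates on
    these linearized samples rather than on the curved group of rigid motions."""
    return np.array([np.concatenate([so3_log(R), t])
                     for R, t in zip(rotations, translations)])

def pca_components(samples, k):
    """Leading k principal directions of the linearized motion samples."""
    x = samples - samples.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:k]
```

Linearizing first matters because rotation matrices do not form a vector space: averaging or projecting them directly, as plain PCA would, leaves the group of valid rotations.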

Enhancing Neural Style Transfer using Patch-Based Synthesis

  • Authors: Texler, O., Fišer, J., Lukáč, M., Lu, J., Shechtman, E., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Proceedings of the 8th ACM/EG Expressive Symposium. Aire-la-Ville: Eurographics Association, 2019. p. 43-50. ISBN 978-3-03868-078-9.
  • Year: 2019
  • DOI: 10.2312/exp.20191075
  • Link: https://doi.org/10.2312/exp.20191075
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a new approach to example-based style transfer which combines neural methods with patch-based synthesis to achieve compelling stylization quality even for high-resolution imagery. We take advantage of neural techniques to provide adequate stylization at the global level and use their output as a prior for subsequent patch-based synthesis at the detail level. Thanks to this combination, our method keeps the high frequencies of the original artistic media better, thereby dramatically increasing the fidelity of the resulting stylized imagery. We also show how to stylize extremely large images (e.g., 340 Mpix) without the need to run the synthesis at the pixel level, yet retaining the original high-frequency details.

Real-Time Patch-Based Stylization of Portraits Using Generative Adversarial Network

  • Authors: Futschik, D., Chai, M., Cao, C., Ma, C., Stoliar, A., Korolev, S., Tulyakov, S., Kučera, M., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Proceedings of the 8th ACM/EG Expressive Symposium. Aire-la-Ville: Eurographics Association, 2019. p. 33-42. ISBN 978-3-03868-078-9.
  • Year: 2019
  • DOI: 10.2312/exp.20191074
  • Link: https://doi.org/10.2312/exp.20191074
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a learning-based style transfer algorithm for human portraits which significantly outperforms current state-of-the-art in computational overhead while still maintaining comparable visual quality. We show how to design a conditional generative adversarial network capable of reproducing the output of Fišer et al.'s patch-based method that is slow to compute but can deliver state-of-the-art visual quality. Since the resulting end-to-end network can be evaluated quickly on current consumer GPUs, our solution enables the first real-time high-quality style transfer to facial videos that runs at interactive frame rates. Moreover, in cases where the original algorithmic approach of Fišer et al. fails, our network can provide a more visually pleasing result thanks to generalization. We demonstrate the practical utility of our approach on a variety of different styles and target subjects.

StyleBlit: Fast Example-Based Stylization with Local Guidance

  • Authors: prof. Ing. Daniel Sýkora, Ph.D., Jamriška, O., Texler, O., Fišer, J., Lukáč, M., Lu, J., Shechtman, E.
  • Publication: Computer Graphics Forum. 2019, 38(2), 83-91. ISSN 0167-7055.
  • Year: 2019
  • DOI: 10.1111/cgf.13621
  • Link: https://doi.org/10.1111/cgf.13621
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present StyleBlit—an efficient example-based style transfer algorithm that can deliver high-quality stylized renderings in real-time on a single-core CPU. Our technique is especially suitable for style transfer applications that use local guidance - descriptive guiding channels containing large spatial variations. Local guidance encourages transfer of content from the source exemplar to the target image in a semantically meaningful way. Typical local guidance includes, e.g., normal values, texture coordinates or a displacement field. Contrary to previous style transfer techniques, our approach does not involve any computationally expensive optimization. We demonstrate that when local guidance is used, optimization-based techniques converge to solutions that can be well approximated by simple pixel-level operations. Inspired by this observation, we designed an algorithm that produces results visually similar to, if not better than, the state-of-the-art, and is several orders of magnitude faster. Our approach is suitable for scenarios with low computational budget such as games and mobile applications.
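
The "simple pixel-level operations" can be illustrated by a brute-force nearest-guidance blit: each target pixel copies the style color of the source pixel with the closest guidance value (e.g. a normal). StyleBlit itself adds patch coherence and a far faster lookup, so this is only a conceptual sketch:

```python
import numpy as np

def guided_blit(target_guide, source_guide, source_style):
    """For every target pixel, find the source pixel whose guidance value
    is closest and copy its style color -- no optimization involved, which
    is why local guidance admits such cheap per-pixel transfer."""
    src_g = source_guide.reshape(-1, source_guide.shape[-1])
    src_c = source_style.reshape(-1, source_style.shape[-1])
    out = np.zeros(target_guide.shape[:-1] + (src_c.shape[-1],))
    for idx in np.ndindex(target_guide.shape[:-1]):
        d = np.sum((src_g - target_guide[idx]) ** 2, axis=1)
        out[idx] = src_c[np.argmin(d)]
    return out
```

With descriptive guidance such as normals, nearby target pixels tend to pick nearby source pixels, which is the coherence property the full algorithm then enforces explicitly.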

Stylizing Video by Example

  • Authors: Jamriška, O., Sochorová, Š., Texler, O., Lukáč, M., Fišer, J., Lu, J., Shechtman, E., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: ACM Transactions on Graphics (TOG). 2019, 38(4), ISSN 0730-0301.
  • Year: 2019
  • DOI: 10.1145/3306346.3323006
  • Link: https://doi.org/10.1145/3306346.3323006
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability and applicability to arbitrary video. Our method takes as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-the-art patch-based synthesis that can be applied to any type of video content and does not require any additional information besides the video itself and a user-specified mask of the region to be stylized. We further show a temporal blending approach for interpolating style between keyframes that preserves texture coherence, contrast and high frequency details. We evaluate our method on various scenes from real production settings and provide a thorough comparison with prior art.

Automated Outdoor Depth-Map Generation and Alignment

  • DOI: 10.1016/j.cag.2018.05.001
  • Link: https://doi.org/10.1016/j.cag.2018.05.001
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    Image enhancement tasks can highly benefit from depth information, but the direct estimation of outdoor depth maps is difficult due to vast object distances. This paper presents a fully automatic framework for model-based generation of outdoor depth maps and its applications to image enhancement. We leverage 3D terrain models and camera pose estimation techniques to render approximate depth maps without resorting to manual alignment. Potential local misalignments, resulting from insufficient model details and rough registrations, are eliminated with our novel free-form warping. We first align synthetic depth edges with photo edges using as-rigid-as-possible image registration and further refine the shape of the edges using tight trimap-based alpha matting. The resulting synthetic depth maps are accurate and calibrated in absolute distance. We demonstrate their benefit in image enhancement techniques including reblurring, depth-of-field simulation, haze removal, and guided texture synthesis.

FTP-SC: Fuzzy Topology Preserving Stroke Correspondence

  • Authors: Yang, W., Seah, H.-S., Chen, Q., Liew, H.-Z., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Computer Graphics Forum. 2018, 37(8), 125-135. ISSN 0167-7055.
  • Year: 2018
  • DOI: 10.1111/cgf.13518
  • Link: https://doi.org/10.1111/cgf.13518
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    Stroke correspondence construction is a precondition for vectorized 2D animation inbetweening and remains a challenging problem. This paper introduces FTP-SC, a fuzzy topology preserving stroke correspondence technique, which is accurate and gives the user more effective control over the correspondence result than previous matching approaches. The method employs a two-stage scheme to progressively establish stroke correspondences between the keyframes. In the first stage, stroke correspondences with high confidence are constructed by enforcing the preservation of the so-called "fuzzy topology" which encodes intrinsic connectivity among the neighboring strokes. Starting with the high-confidence correspondences, the second stage performs a greedy matching algorithm to generate a full correspondence between the strokes. Experimental results show that FTP-SC outperforms the existing approaches and can establish the stroke correspondence with a reasonable amount of user interaction even for keyframes with large geometric and spatial variations between strokes.
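
The two-stage scheme can be sketched on a stroke-similarity score matrix: lock in high-confidence mutual best matches first (standing in for the fuzzy-topology test), then match the rest greedily. The threshold and the scoring itself are hypothetical placeholders, not the paper's actual criteria:

```python
import numpy as np

def two_stage_match(score, high=0.9):
    """Toy two-stage stroke matching on an (n x m) similarity matrix.
    Stage 1 keeps only mutual best matches above a confidence threshold;
    stage 2 greedily assigns the remaining strokes, best scores first."""
    n, m = score.shape
    pairs = {}
    used_r, used_c = set(), set()
    # stage 1: high-confidence mutual best matches
    for i in range(n):
        j = int(np.argmax(score[i]))
        if score[i, j] >= high and int(np.argmax(score[:, j])) == i:
            pairs[i] = j
            used_r.add(i)
            used_c.add(j)
    # stage 2: greedy matching of the rest
    order = sorted(((score[i, j], i, j) for i in range(n) for j in range(m)),
                   reverse=True)
    for s, i, j in order:
        if i not in used_r and j not in used_c:
            pairs[i] = j
            used_r.add(i)
            used_c.add(j)
    return pairs
```

Fixing the confident pairs first constrains the greedy stage, which is why the progressive scheme is more robust than one-shot greedy matching.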

Seamless Reconstruction of Part-Based High-Relief Models from Hand-Drawn Images

  • Authors: Dvorožňák, M., Nejad, S., Jamriška, O., Jacobson, A., Kavan, L., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: Proceedings of the Joint Symposium on Computational Aesthetics and Sketch-Based Interfaces and Modeling and Non-Photorealistic Animation and Rendering. New York: ACM, 2018. ISBN 978-1-4503-5892-7.
  • Year: 2018
  • DOI: 10.1145/3229147.3229153
  • Link: https://doi.org/10.1145/3229147.3229153
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a new approach to reconstruction of high-relief surface models from hand-made drawings. Our method is tailored to an interactive modeling scenario where the input drawing can be separated into a set of semantically meaningful parts whose relative depth order is known beforehand. For this kind of input, our technique allows inflating individual components to have a semi-elliptical profile, positioning them to satisfy the prescribed depth order, and providing their seamless interconnection. Compared to previous methods, our approach is the first that formulates this reconstruction process as a single non-linear optimization problem. Because its direct optimization is computationally challenging, we propose an approximate solution which delivers comparable results orders of magnitude faster, enabling an interactive user workflow. We evaluate our approach on various hand-made drawings and demonstrate that it provides state-of-the-art quality in comparison with previous methods that require comparable user intervention.

ToonSynth: Example-Based Synthesis of Hand-Colored Cartoon Animations

  • Authors: Dvorožňák, M., Li, W., Kim, V., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: ACM Transactions on Graphics (TOG). 2018, 37(4), ISSN 0730-0301.
  • Year: 2018
  • DOI: 10.1145/3197517.3201326
  • Link: https://doi.org/10.1145/3197517.3201326
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a new example-based approach for synthesizing hand-colored cartoon animations. Our method produces results that preserve the specific visual appearance and stylized motion of manually authored animations without requiring artists to draw every frame from scratch. In our framework, the artist first stylizes a limited set of known source skeletal animations from which we extract a style-aware puppet that encodes the appearance and motion characteristics of the artwork. Given a new target skeletal motion, our method automatically transfers the style from the source examples to create a hand-colored target animation. Compared to previous work, our technique is the first to preserve both the detailed visual appearance and stylized motion of the original hand-drawn content. Our approach has numerous practical applications including traditional animation production and content creation for games.

Example-Based Expressive Animation of 2D Rigid Bodies

  • Authors: Dvorožňák, M., Bénard, P., Barla, P., Wang, O., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: ACM Transactions on Graphics (TOG). 2017, 36(4), ISSN 0730-0301.
  • Year: 2017
  • DOI: 10.1145/3072959.3073611
  • Link: https://doi.org/10.1145/3072959.3073611
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We present a novel approach to facilitate the creation of stylized 2D rigid body animations. Our approach can handle multiple rigid objects following complex physically-simulated trajectories with collisions, while retaining a unique artistic style directly specified by the user. Starting with an existing target animation (e.g., produced by a physical simulation engine), an artist interactively draws over a sparse set of frames, and the desired appearance and motion stylization is automatically propagated to the rest of the sequence. The stylization process may also be performed in an off-line batch process from a small set of drawn sequences. To achieve these goals, we combine parametric deformation synthesis that generalizes and reuses hand-drawn exemplars, with non-parametric techniques that enhance the hand-drawn appearance of the synthesized sequence. We demonstrate the potential of our method on various complex rigid body animations which are created with an expressive hand-drawn look using notably fewer manual interventions compared to traditional techniques.

Example-Based Synthesis of Stylized Facial Animations

  • Authors: Fišer, J., Jamriška, O., Simons, D., Shechtman, E., Lu, J., Asente, P., Lukáč, M., prof. Ing. Daniel Sýkora, Ph.D.
  • Publication: ACM Transactions on Graphics (TOG). 2017, 36(4), ISSN 0730-0301.
  • Year: 2017
  • DOI: 10.1145/3072959.3073660
  • Link: https://doi.org/10.1145/3072959.3073660
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    We introduce a novel approach to example-based stylization of portrait videos that preserves both the subject's identity and the visual richness of the input style exemplar. Unlike the current state-of-the-art based on neural style transfer [Selim et al. 2016], our method performs non-parametric texture synthesis that retains more of the local textural details of the artistic exemplar and does not suffer from image warping artifacts caused by aligning the style exemplar with the target face. Our method allows the creation of videos with less than full temporal coherence [Ruder et al. 2016]. By introducing a controllable amount of temporal dynamics, it more closely approximates the appearance of real hand-painted animation in which every frame was created independently. We demonstrate the practical utility of the proposed solution on a variety of style exemplars and target videos.

Nautilus: Recovering Regional Symmetry Transformations for Image Editing

  • Authors: Lukáč, M., prof. Ing. Daniel Sýkora, Ph.D., Sunkavalli, K., Shechtman, E., Jamriška, O., Carr, N., Pajdla, T.
  • Publication: ACM Transactions on Graphics (TOG). 2017, 36(4), ISSN 0730-0301.
  • Year: 2017
  • DOI: 10.1145/3072959.3073661
  • Link: https://doi.org/10.1145/3072959.3073661
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    Natural images often exhibit symmetries that should be taken into account when editing them. In this paper we present Nautilus — a method for automatically identifying symmetric regions in an image along with their corresponding symmetry transformations. We compute dense local similarity symmetry transformations using a novel variant of the Generalised PatchMatch algorithm that uses Metropolis-Hastings sampling. We combine and refine these local symmetries using an extended Lucas-Kanade algorithm to compute regional transformations and their spatial extents. Our approach produces dense estimates of complex symmetries that are combinations of translation, rotation, scale, and reflection under perspective distortion. This enables a number of automatic symmetry-aware image editing applications including inpainting, rectification, beautification, and segmentation, and we demonstrate state-of-the-art applications for each of them.
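
The Metropolis-Hastings ingredient can be illustrated on a 1-D toy: sample a translational offset whose patch reproduces the start of a signal, accepting proposals by the usual exponential rule. The paper samples full perspective-distorted similarity transforms inside Generalised PatchMatch; everything here, including the parameters, is a simplified stand-in:

```python
import numpy as np

def mh_sample_offsets(signal, patch, n_iter=500, beta=5.0, seed=0):
    """Metropolis-Hastings search for translational self-similarity:
    propose a perturbed offset and accept it with probability
    exp(-beta * (new_err - old_err)). The most-visited offset estimates
    the dominant symmetry translation of the signal."""
    rng = np.random.default_rng(seed)
    n = len(signal) - patch

    def err(o):
        # dissimilarity between the reference patch and the shifted patch
        return float(np.sum((signal[:patch] - signal[o:o+patch]) ** 2))

    offset = rng.integers(1, n + 1)
    visits = np.zeros(n + 1)
    for _ in range(n_iter):
        proposal = int(np.clip(offset + rng.integers(-2, 3), 1, n))
        if rng.random() < np.exp(-beta * (err(proposal) - err(offset))):
            offset = proposal
        visits[offset] += 1
    return int(np.argmax(visits))
```

Unlike a pure greedy search, the stochastic acceptance rule lets the sampler escape weak local matches, which is the motivation for using Metropolis-Hastings inside the transformation search.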

Example-Based Image Synthesis

  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    The problem of example-based image synthesis is one of the new research directions in computer graphics. Its goal is to generate an image that respects a new, user-defined structure while being indistinguishable in its details from the original exemplar.

Advanced drawing beautification with ShipShape

  • DOI: 10.1016/j.cag.2016.02.003
  • Link: https://doi.org/10.1016/j.cag.2016.02.003
  • Department: Department of Computer Graphics and Interaction
  • Abstract:
    Sketching is one of the simplest ways to visualize ideas. Its key advantage is its easy availability and accessibility, as it requires the user to have neither deep knowledge of a particular drawing program nor any advanced drawing skills. In practice, however, all these skills become necessary to improve the visual fidelity of the resulting drawing. In this paper, we present ShipShape—a general beautification assistant that allows users to maintain the simplicity and speed of freehand sketching while still taking into account implicit geometric relations to automatically rectify the output image. In contrast to previous approaches, ShipShape works with general Bézier curves, enables undo/redo operations, is scale independent, and is fully integrated into Adobe Illustrator. We show various results to demonstrate the capabilities of the proposed method.

Algoritmus StyLit

  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    Fine art is often regarded as one of the most breathtaking expressions of the human spirit. A stylized view of the real world, accentuating the mood of a captured moment in the play of light and shadow, has fascinated human beings since time immemorial. Until recently, this particular form of expression lay exclusively in human hands. With the advent of computing, however, the question arose whether a machine might ever manage, to some extent, to imitate this fascinating human endeavor.

StyLit: Illumination-Guided Example-Based Stylization of 3D Renderings

  • Autoři: Fišer, J., Jamriška, O., Lukáč, M., Shechtman, E., Asente, P., Lu, J., prof. Ing. Daniel Sýkora, Ph.D.,
  • Publikace: ACM Transactions on Graphics (TOG). 2016, 35(4), ISSN 0730-0301.
  • Rok: 2016
  • DOI: 10.1145/2897824.2925948
  • Odkaz: https://doi.org/10.1145/2897824.2925948
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    We present an approach to example-based stylization of 3D renderings that better preserves the rich expressiveness of hand-created artwork. Unlike previous techniques, which are mainly guided by colors and normals, our approach is based on light propagation in the scene. This novel type of guidance can distinguish among context-dependent illumination effects, for which artists typically use different stylization techniques, and delivers a look closer to realistic artwork. In addition, we demonstrate that the current state of the art in guided texture synthesis produces artifacts that can significantly decrease the fidelity of the synthesized imagery, and propose an improved algorithm that alleviates them. Finally, we demonstrate our method's effectiveness on a variety of scenes and styles, in applications like interactive shading study or autocompletion.

Brushables: Example-based Edge-aware Directional Texture Painting

  • Autoři: Lukáč, M., Fišer, J., Asente, P., Lu, J., Shechtman, E., prof. Ing. Daniel Sýkora, Ph.D.,
  • Publikace: Computer Graphics Forum. 2015, 34(7), 257-267. ISSN 0167-7055.
  • Rok: 2015
  • DOI: 10.1111/cgf.12764
  • Odkaz: https://doi.org/10.1111/cgf.12764
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    In this paper we present Brushables---a novel approach to example-based painting that respects user-specified shapes at the global level and preserves textural details of the source image at the local level. We formulate the synthesis as a joint optimization problem that simultaneously synthesizes the interior and the boundaries of the region, transferring relevant content from the source to meaningful locations in the target. We also provide an intuitive interface to control both local and global direction of textural details in the synthesized image. A key advantage of our approach is that it enables a "combing" metaphor in which the user can incrementally modify the target direction field to achieve the desired look. Based on this, we implement an interactive texture painting tool capable of handling more complex textures than ever before, and demonstrate its versatility on difficult inputs including vegetation, textiles, hair and painting media.

Decomposing Time-lapse Paintings into Layers

  • Autoři: Tan, J., Dvorožňák, M., prof. Ing. Daniel Sýkora, Ph.D., Gingold, Y.
  • Publikace: ACM Transactions on Graphics (TOG). 2015, 34(4), ISSN 0730-0301.
  • Rok: 2015
  • DOI: 10.1145/2766960
  • Odkaz: https://doi.org/10.1145/2766960
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    The creation of a painting, in the physical world or digitally, is a process that occurs over time. Later strokes cover earlier strokes, and strokes painted at a similar time are likely to be part of the same object. In the final painting, this temporal history is lost, and a static arrangement of color is all that remains. The rich literature for interacting with image editing history cannot be used. To enable these interactions, we present a set of techniques to decompose a time lapse video of a painting (defined generally to include pencils, markers, etc.) into a sequence of translucent "stroke" images. We present translucency-maximizing solutions for recovering physical (Kubelka and Munk layering) or digital (Porter and Duff "over" blending operation) paint parameters from before/after image pairs. We also present a pipeline for processing real-world videos of paintings capable of handling long-term occlusions, such as the painter's hand and its shadow, color shifts, and noise.
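For the digital (Porter and Duff "over") case, the translucency-maximizing recovery has a closed per-pixel form. The sketch below is a simplified single-pixel illustration with a hypothetical helper name, not the paper's full solver: it returns the minimal alpha whose implied stroke color stays inside [0, 1]:

```python
def recover_over_layer(before, after, eps=1e-6):
    """Recover the most translucent (minimal-alpha) stroke such that
    after = alpha * color + (1 - alpha) * before  (Porter-Duff "over").

    before/after are RGB tuples in [0, 1]. Returns (color, alpha).
    """
    alpha = 0.0
    for b, a in zip(before, after):
        if a > b:    # channel brightened: stroke color bounded above by 1
            alpha = max(alpha, (a - b) / max(1.0 - b, eps))
        elif a < b:  # channel darkened: stroke color bounded below by 0
            alpha = max(alpha, (b - a) / max(b, eps))
    if alpha <= 0.0:  # pixel unchanged: fully transparent stroke
        return (0.0, 0.0, 0.0), 0.0
    alpha = min(alpha, 1.0)
    color = tuple(b + (a - b) / alpha for b, a in zip(before, after))
    return color, alpha

# Composite a half-opaque pure-red stroke over grey, then invert it back.
before = (0.5, 0.5, 0.5)
composited = tuple(0.5 * c + 0.5 * b for c, b in zip((1.0, 0.0, 0.0), before))
color, alpha = recover_over_layer(before, composited)
```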

LazyFluids: Appearance Transfer for Fluid Animations

  • Autoři: Jamriška, O., Fišer, J., Asente, P., Lu, J., Shechtman, E., prof. Ing. Daniel Sýkora, Ph.D.,
  • Publikace: ACM Transactions on Graphics (TOG). 2015, 34(4), ISSN 0730-0301.
  • Rok: 2015
  • DOI: 10.1145/2766983
  • Odkaz: https://doi.org/10.1145/2766983
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    In this paper we present a novel approach to appearance transfer for fluid animations based on flow-guided texture synthesis. In contrast to common practice where pre-captured sets of fluid elements are combined in order to achieve desired motion and look, we bring the possibility of fine-tuning motion properties in advance using CG techniques, and then transferring the desired look from a selected appearance exemplar. We demonstrate that such a practical workflow cannot be simply implemented using current state-of-the-art techniques, analyze what the main obstacles are, and propose a solution to resolve them. In addition, we extend the algorithm to allow for synthesis with rich boundary effects and video exemplars. Finally, we present numerous results that demonstrate the versatility of the proposed approach.

Renesance kresleného filmu

  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    Until recently, two separate techniques existed for producing animated films. The first was hand-drawn animation (created from thousands of individually drawn 2D images); the second was the animation of 3D digital models built in a computer. Combining these methods once seemed impossible, but thanks to advanced computer graphics tools this is no longer the case. The technique of digital 3D animation can now be joined with the freedom of hand drawing.

ShipShape: A Drawing Beautification Assistant

  • Autoři: Fišer, J., Asente, P., prof. Ing. Daniel Sýkora, Ph.D.,
  • Publikace: Proceedings of the International Symposium on Sketch-Based Interfaces and Modeling. Aire-la-Ville: Eurographics Association, 2015. p. 49-58. ISBN 978-3-905674-90-3.
  • Rok: 2015
  • DOI: 10.2312/exp.20151178
  • Odkaz: https://doi.org/10.2312/exp.20151178
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    Sketching is one of the simplest ways to visualize ideas. Its key advantage is requiring the user to have neither deep knowledge of a particular drawing program nor any advanced drawing skills. In practice, however, all these skills become necessary to improve the visual fidelity of the resulting drawing. In this paper, we present ShipShape—a general beautification assistant that allows users to maintain the simplicity and speed of freehand sketching while still taking into account implicit geometric relations to automatically rectify the output image. In contrast to previous approaches, ShipShape works with general Bézier curves, enables undo/redo operations, is scale independent, and is fully integrated into Adobe Illustrator. We show various results to demonstrate the capabilities of the proposed method.

Color Me Noisy: Example-based Rendering of Hand-colored Animations with Temporal Noise Control

  • Autoři: Fišer, J., Lukáč, M., Jamriška, O., Čadík, M., Gingold, Y., Asente, P., prof. Ing. Daniel Sýkora, Ph.D.,
  • Publikace: COMPUTER GRAPHICS FORUM. 2014, 33(4), 1-10. ISSN 0167-7055.
  • Rok: 2014
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    We present an example-based approach to rendering hand-colored animations which delivers visual richness comparable to real artwork while enabling control over the amount of perceived temporal noise. This is important both for artistic purposes and viewing comfort, but is tedious or even intractable to achieve manually. We analyse typical features of real hand-colored animations and propose an algorithm that tries to mimic them using only static examples of drawing media. We apply the algorithm to various animations using different drawing media and compare the quality of synthetic results with real artwork. To verify our method perceptually, we conducted experiments confirming that our method delivers distinguishable noise levels and reduces eye strain. Finally, we demonstrate the capabilities of our method to mask imperfections such as shower-door artifacts.

Ink-and-Ray: Bas-Relief Meshes for Adding Global Illumination Effects to Hand-Drawn Characters

  • Autoři: prof. Ing. Daniel Sýkora, Ph.D., Kavan, L., Čadík, M., Jamriška, O., Jacobson, A., Whited, B., Simmons, M., Sorkine-Hornung, O.
  • Publikace: ACM Transactions on Graphics (TOG). 2014, 33(2), ISSN 0730-0301.
  • Rok: 2014
  • DOI: 10.1145/2591011
  • Odkaz: https://doi.org/10.1145/2591011
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    We present a new approach for generating global illumination renderings of hand-drawn characters using only a small set of simple annotations. Our system exploits the concept of bas-relief sculptures, making it possible to generate 3D proxies suitable for rendering without requiring side-views or extensive user input. We formulate an optimization process that automatically constructs approximate geometry sufficient to evoke the impression of a consistent 3D shape. The resulting renders provide the richer stylization capabilities of 3D global illumination while still retaining the 2D hand-drawn look-and-feel. We demonstrate our approach on a varied set of hand-drawn images and animations, showing that even in comparison to ground-truth renderings of full 3D objects, our bas-relief approximation is able to produce convincing global illumination effects, including self-shadowing, glossy reflections, and diffuse color bleeding.

Computer-Assisted Repurposing of Existing Animations

  • Autoři: prof. Ing. Daniel Sýkora, Ph.D., Dingliana, J.
  • Publikace: Image and Video-based Artistic Stylisation. 2 ed. London: Springer, 2013. p. 285-308. Computational Imaging and Vision. vol. 42. ISBN 978-1-4471-4518-9.
  • Rok: 2013
  • DOI: 10.1007/978-1-4471-4519-6_14
  • Odkaz: https://doi.org/10.1007/978-1-4471-4519-6_14
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    The aim of this chapter is to present a set of tools that enable ease of modification, manipulation, and rendering similar to 3D animation systems, whilst preserving the expressivity and simplicity of the original hand-drawn animation. To achieve this, it is necessary to infer a part of the structural information hidden in the sequence of hand-drawn images, namely the partitioning into meaningful segments, their topology variations, depth ordering, and correspondences. Since this inference can be very ambiguous and cannot be fully automated, we let the artist provide a couple of rough hints that make this problem tractable.

Painting by Feature: Texture Boundaries for Example-based Image Creation

  • Autoři: Lukáč, M., Fišer, J., Bazin, J.-C., Jamriška, O., Sorkine-Hornung, A., prof. Ing. Daniel Sýkora, Ph.D.,
  • Publikace: ACM Transactions on Graphics (TOG). 2013, 32(4), ISSN 0730-0301.
  • Rok: 2013
  • DOI: 10.1145/2461912.2461956
  • Odkaz: https://doi.org/10.1145/2461912.2461956
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    In this paper we propose a reinterpretation of the brush and the fill tools for digital image painting. The core idea is to provide an intuitive approach that allows users to paint in the visual style of arbitrary example images. Rather than a static library of colors, brushes, or fill patterns, we offer users entire images as their palette, from which they can select arbitrary contours or textures as their brush or fill tool in their own creations. Compared to previous example-based techniques related to the painting-by-numbers paradigm we propose a new strategy where users can generate salient texture boundaries by our randomized graph-traversal algorithm and apply a content-aware fill to transfer textures into the delimited regions. This workflow allows users of our system to intuitively create visually appealing images that better preserve the visual richness and fluidity of arbitrary example images. We demonstrate the potential of our approach in various applications including interactive image creation, editing and vector image stylization.

Cache-efficient graph cuts on structured grids

  • Autoři: Jamriška, O., prof. Ing. Daniel Sýkora, Ph.D., Hornung, A.
  • Publikace: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. New York: IEEE Press, 2012. p. 3673-3680. ISSN 1063-6919. ISBN 978-1-4673-1228-8.
  • Rok: 2012
  • DOI: 10.1109/CVPR.2012.6248113
  • Odkaz: https://doi.org/10.1109/CVPR.2012.6248113
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    Finding minimal cuts on graphs with a grid-like structure has become a core task for solving many computer vision and graphics related problems. However, computation speed and memory consumption oftentimes limit the effective use in applications requiring high resolution grids or interactive response. In particular, memory bandwidth represents one of the major bottlenecks even in today's most efficient implementations. We propose a compact data structure with cache-efficient memory layout for the representation of graph instances that are based on regular N-D grids with topologically identical neighborhood systems. For this common class of graphs our data structure allows for 3 to 12 times higher grid resolutions and a 3- to 9-fold speedup compared to existing approaches. Our design is agnostic to the underlying algorithm, and hence orthogonal to other optimizations such as parallel and hierarchical processing. We evaluate the performance gain on a variety of typical problems including 2D/3D segmentation, colorization, and stereo. All experiments show an unconditional improvement in terms of speed and memory consumption, with graceful performance degradation for graphs with increasing topological irregularities.
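The core of the compact layout can be sketched directly. This is an illustrative reconstruction, not the paper's data structure: capacities live in one contiguous array per direction, and neighbors are found by fixed index offsets, so no adjacency lists or per-edge pointers are stored:

```python
class GridGraph2D:
    """Edge capacities of a 4-connected 2-D grid stored as one contiguous
    array per direction; neighbors are implicit fixed index offsets, so no
    adjacency lists or per-edge pointers need to be kept."""

    def __init__(self, width, height):
        self.w, self.h = width, height
        # cap[d][i] = capacity of the edge leaving node i in direction d
        # (d = 0: right, 1: left, 2: down, 3: up).
        self.cap = [[0.0] * (width * height) for _ in range(4)]
        self.offsets = (1, -1, width, -width)

    def node(self, x, y):
        """Flat index of grid cell (x, y)."""
        return y * self.w + x

    def neighbor(self, i, d):
        """Neighbor of flat index i in direction d, or None at the border."""
        x, y = i % self.w, i // self.w
        if (d == 0 and x == self.w - 1) or (d == 1 and x == 0) \
           or (d == 2 and y == self.h - 1) or (d == 3 and y == 0):
            return None
        return i + self.offsets[d]

# A 4x3 grid: the right neighbor of (0, 0) is (1, 0); (3, 0) has none.
g = GridGraph2D(4, 3)
```

Because each direction's capacities are contiguous and traversal order matches memory order, a maxflow implementation scanning the grid touches memory sequentially, which is the cache-friendliness the abstract refers to.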

Smart Scribbles for Sketch Segmentation

  • Autoři: Noris, G., prof. Ing. Daniel Sýkora, Ph.D., Shamir, A., Coros, S., Whited, B., Simmons, M., Hornung, A., Gross, M., Sumner, R.
  • Publikace: Computer Graphics Forum. 2012, 31(8), 2516-2527. ISSN 0167-7055.
  • Rok: 2012
  • DOI: 10.1111/j.1467-8659.2012.03224.x
  • Odkaz: https://doi.org/10.1111/j.1467-8659.2012.03224.x
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    We present ‘Smart Scribbles’—a new scribble-based interface for user-guided segmentation of digital sketchy drawings. In contrast to previous approaches based on simple selection strategies, Smart Scribbles exploits richer geometric and temporal information, resulting in a more intuitive segmentation interface. We introduce a novel energy minimization formulation in which both geometric and temporal information from digital input devices is used to define stroke-to-stroke and scribble-to-stroke relationships. Although the minimization of this energy is, in general, an NP-hard problem, we use a simple heuristic that leads to a good approximation and permits an interactive system able to produce accurate labellings even for cluttered sketchy drawings. We demonstrate the power of our technique in several practical scenarios such as sketch editing, as-rigid-as-possible deformation and registration, and on-the-fly labelling based on pre-classified guidelines.

Temporal Noise Control for Sketchy Animation

  • Autoři: Noris, G., prof. Ing. Daniel Sýkora, Ph.D., Coros, S., Whited, B., Simmons, M., Hornung, A., Gross, M., Sumner, R.
  • Publikace: Proceedings of International Symposium on Non-Photorealistic Animation and Rendering. New York: ACM SIGGRAPH, 2011. p. 93-98. ISBN 978-1-4503-0907-3.
  • Rok: 2011
  • DOI: 10.1145/2024676.2024691
  • Odkaz: https://doi.org/10.1145/2024676.2024691
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    We propose a technique to control the temporal noise present in sketchy animations. Given an input animation drawn digitally, our approach works by combining motion extraction and inbetweening techniques to generate a reduced-noise sketchy animation registered to the input animation. The amount of noise is then controlled by a continuous parameter value. Our method can be applied to effectively reduce the temporal noise present in sequences of sketches to a desired rate, while preserving the geometric richness of the sketchy style in each frame. This provides the manipulation of temporal noise as an additional artistic parameter to emphasize character emotions and scene atmosphere, and enables the display of sketchy content to broader audiences by producing animations with comfortable noise levels. We demonstrate the effectiveness of our approach on a series of rough hand-drawn animations.
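One way to read the continuous noise parameter is as a per-point interpolation between the reduced-noise registered animation and the original sketchy input. This is a simplified illustration with a hypothetical function name (strokes reduced to point lists in correspondence), not the paper's motion-extraction pipeline:

```python
def set_noise_level(smooth_stroke, sketchy_stroke, noise):
    """Blend a registered reduced-noise stroke with the original sketchy one:
    noise = 0 gives the smooth inbetweened stroke, noise = 1 the full input.
    Both strokes are equal-length lists of (x, y) points in correspondence."""
    return [((1 - noise) * xs + noise * xk, (1 - noise) * ys + noise * yk)
            for (xs, ys), (xk, yk) in zip(smooth_stroke, sketchy_stroke)]

# Halfway between a flat smooth stroke and a jittery sketchy one.
smooth = [(0.0, 0.0), (1.0, 0.0)]
sketchy = [(0.0, 0.4), (1.0, -0.4)]
half = set_noise_level(smooth, sketchy, 0.5)
```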

TexToons: Practical Texture Mapping for Hand-drawn Cartoon Animations

  • Autoři: prof. Ing. Daniel Sýkora, Ph.D., Ben-Chen, M., Čadík, M., Whited, B., Simmons, M.
  • Publikace: Proceedings of International Symposium on Non-Photorealistic Animation and Rendering. New York: ACM SIGGRAPH, 2011. p. 75-83. ISBN 978-1-4503-0907-3.
  • Rok: 2011
  • DOI: 10.1145/2024676.2024689
  • Odkaz: https://doi.org/10.1145/2024676.2024689
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    We present a novel and practical texture mapping algorithm for hand-drawn cartoons that allows the production of visually rich animations with minimal user effort. Unlike previous techniques, our approach works entirely in the 2D domain and does not require the knowledge or creation of a 3D proxy model. Inspired by the fact that the human visual system tends to focus on the most salient features of a scene, which we observe for hand-drawn cartoons are the contours rather than the interior of regions, we can create the illusion of temporally coherent animation using only rough 2D image registration. This key observation allows us to design a simple yet effective algorithm that significantly reduces the amount of manual labor required to add visually complex detail to an animation, thus enabling efficient cartoon texturing for computer-assisted animation production pipelines. We demonstrate our technique on a variety of input animations as well as provide examples of post-processing operations.

Adding Depth to Cartoons Using Sparse Depth (In)equalities

  • DOI: 10.1111/j.1467-8659.2009.01631.x
  • Odkaz: https://doi.org/10.1111/j.1467-8659.2009.01631.x
  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    This paper presents a novel interactive approach for adding depth information into hand-drawn cartoon images and animations. In comparison to previous depth assignment techniques, our solution requires minimal user effort and enables the creation of consistent pop-ups in a matter of seconds. Inspired by perceptual studies, we formulate a custom-tailored optimization framework that tries to mimic the way a human reconstructs depth information from a single image. Its key advantage is that it completely avoids inputs requiring knowledge of absolute depth and instead uses a set of sparse depth (in)equalities that are much easier to specify. Since these constraints lead to a solution based on quadratic programming that is time-consuming to evaluate, we propose a simple approximate algorithm yielding similar results with much lower computational overhead. We demonstrate its usefulness in the context of a cartoon animation production pipeline.
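The appeal of sparse (in)equalities can be shown with a deliberately simplified solver. The sketch below is not the paper's quadratic program or its approximation; it merely demonstrates, by naive relaxation, that pairwise "closer than" constraints alone determine a consistent relative depth assignment (the function name and margin value are made up):

```python
def solve_depth_inequalities(n_regions, inequalities, equalities=(), margin=1.0, iters=100):
    """Assign relative depths to regions from sparse constraints:
      inequalities: (a, b) means region a is closer, depth[a] >= depth[b] + margin
      equalities:   (a, b) means depth[a] == depth[b]
    A simple relaxation (not the paper's QP): repeatedly raise depths until all
    constraints hold; converges when the inequality graph has no cycles.
    """
    depth = [0.0] * n_regions
    for _ in range(iters):
        changed = False
        for a, b in inequalities:
            if depth[a] < depth[b] + margin:
                depth[a] = depth[b] + margin
                changed = True
        for a, b in equalities:
            m = max(depth[a], depth[b])
            if depth[a] != m or depth[b] != m:
                depth[a] = depth[b] = m
                changed = True
        if not changed:
            break
    return depth

# Hand in front of torso, torso in front of background (regions 0, 1, 2):
depth = solve_depth_inequalities(3, inequalities=[(0, 1), (1, 2)])
```

Note how the user never specifies an absolute depth: the ordering alone yields a usable layering.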

iCheat: A Representation for Artistic Control of Indirect Cinematic Lighting

  • Autoři: Obert, J., Křivánek, J., Pellacini, F., prof. Ing. Daniel Sýkora, Ph.D., Pattanaik, S.
  • Publikace: Computer Graphics Forum. 2008, 27(4), 1217-1224. ISSN 0167-7055.
  • Rok: 2008
  • Pracoviště: Katedra počítačů
  • Anotace:
    Thanks to an increase in rendering efficiency, indirect illumination has recently begun to be integrated into cinematic lighting design, an application where physical accuracy is less important than careful control of scene appearance. This paper presents a comprehensive, efficient, and intuitive representation for artistic control of indirect illumination. We encode users' adjustments to indirect lighting as scale and offset coefficients of the transfer operator. We take advantage of the nature of indirect illumination and of the edits themselves to efficiently sample and compress them. A major benefit of this sampled representation, compared to encoding adjustments as procedural shaders, is its renderer independence. This allowed us to easily implement several tools to produce our final images: an interactive relighting engine to view adjustments, a painting interface to define them, and a final renderer to render high quality results.
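The scale-and-offset encoding can be written down directly. Assuming a one-bounce transport matrix over a handful of receiver samples (a toy setup with hypothetical names; the paper works with a sampled, compressed operator), an edit stores per-receiver coefficients applied on top of the physically computed indirect term:

```python
def edited_indirect(transfer, direct, scale, offset):
    """Indirect illumination with artist edits. For receiver i,
        L_ind[i] = scale[i] * sum_j transfer[i][j] * direct[j] + offset[i]
    where transfer is the physically computed one-bounce transport matrix and
    scale/offset are the per-receiver adjustment coefficients stored by the edit.
    """
    out = []
    for i, row in enumerate(transfer):
        bounce = sum(t * d for t, d in zip(row, direct))
        out.append(scale[i] * bounce + offset[i])
    return out

# Two receivers: boost indirect light on the first, paint extra on the second.
transfer = [[0.0, 0.5],
            [0.25, 0.0]]
direct = [1.0, 2.0]
indirect = edited_indirect(transfer, direct, scale=[2.0, 1.0], offset=[0.0, 0.5])
```

Because the edit is just coefficients over the operator's output, any renderer that evaluates the same transport can replay it, which is the renderer independence the abstract mentions.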

Real-time Color Ball Tracking for Augmented Reality

  • Pracoviště: Katedra počítačové grafiky a interakce
  • Anotace:
    In this paper, we introduce a lightweight and robust tracking technique based on color balls. The algorithm builds on a variant of the randomized Hough transform and is optimized for use in real-time applications such as low-cost Augmented Reality (AR) systems. With just one conventional color camera, our approach can determine the 3D positions of several color balls at interactive frame rates on a common PC workstation. It is fast enough to be easily combined with another real-time tracking engine. In contrast to popular tracking techniques based on the recognition of planar fiducial markers, it is robust to partial occlusion, which eases handling and manipulation. Furthermore, using balls as markers provides proper haptic feedback and a natural visual metaphor. The exemplary use of our technique in the context of two AR applications indicates the effectiveness of the proposed approach.
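The geometric core of a randomized Hough transform for circles can be sketched compactly. This is a generic illustration (hypothetical function names, 2-D image-space circles only, no color segmentation or 3-D lifting), not the paper's optimized tracker: fit a circle to three random edge points, count supporting points, and keep the best-supported candidate:

```python
import math
import random

def circle_from_3_points(p1, p2, p3, eps=1e-12):
    """Center and radius of the circle through three points (None if collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < eps:
        return None
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return (ux, uy), math.hypot(x1 - ux, y1 - uy)

def randomized_hough_circle(points, trials=200, tol=0.5, seed=1):
    """Fit circles to random point triples and keep the candidate supported
    by the most points: a minimal randomized-Hough voting loop."""
    rng = random.Random(seed)
    best, best_votes = None, 0
    for _ in range(trials):
        fit = circle_from_3_points(*rng.sample(points, 3))
        if fit is None:
            continue
        (ccx, ccy), cr = fit
        votes = sum(1 for (x, y) in points
                    if abs(math.hypot(x - ccx, y - ccy) - cr) < tol)
        if votes > best_votes:
            best, best_votes = fit, votes
    return best

# 16 points on a circle of radius 5 centred at (10, 20), plus two outliers.
pts = [(10 + 5 * math.cos(i * math.pi / 8), 20 + 5 * math.sin(i * math.pi / 8))
       for i in range(16)] + [(0.0, 0.0), (3.0, 1.0)]
(cx, cy), r = randomized_hough_circle(pts)
```

Sampling triples instead of filling a dense 3-D accumulator is what makes the randomized variant cheap enough for interactive frame rates.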

Interactive light transport editing for flexible global illumination

  • Autoři: Obert, J., Křivánek, J., prof. Ing. Daniel Sýkora, Ph.D., Pattanaik, S.
  • Publikace: International Conference on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2007 sketches. New York: ACM, 2007. pp. 57.
  • Rok: 2007
  • DOI: 10.1145/1278780.1278849
  • Odkaz: https://doi.org/10.1145/1278780.1278849
  • Pracoviště: Katedra počítačů
  • Anotace:
    The limited flexibility imposed by the physics of light transport has been a major hurdle in the use of global illumination in production rendering. It has been a common practice in computer cinematography to write custom shaders [Christensen 2003] or manually edit rendered images [Thacker 2006] to achieve lighting effects desired by directors. This process is lengthy, cumbersome and too technical for artists. To remove this hurdle, we propose a novel interface that allows users to modify light transport in a scene. Users can adjust effects of indirect lighting cast by one object onto another or paint indirect illumination on surfaces. As opposed to previous approaches such as [Schoeneman et al. 1993] our work departs from solving optimization problems in favor of more user control.

Colorization of Black-and-White Cartoons

  • DOI: 10.1016/j.imavis.2005.05.010
  • Odkaz: https://doi.org/10.1016/j.imavis.2005.05.010
  • Pracoviště: Katedra počítačů, Katedra počítačové grafiky a interakce
  • Anotace:
    We introduce a novel colorization framework for old black-and-white cartoons that were originally produced by cel- or paper-based technology. In this case, the dynamic part of the scene is represented by a set of outlined homogeneous regions superimposed on a static background. To reduce the large amount of manual intervention, we combine unsupervised image segmentation, background reconstruction, and structural prediction. In addition, our system allows the user to specify the brightness of the applied colors, unlike most previous approaches, which operate only with hue and saturation. We also present simple but effective color modulation, composition, and dust-spot removal techniques able to produce color images of broadcast quality without additional user intervention.

Sketching Cartoons by Example

  • Pracoviště: Katedra počítačů, Katedra počítačové grafiky a interakce
  • Anotace:
    We introduce a novel example-based framework for reusing traditional cartoon drawings and animations. In contrast to previous approaches, our aim is to design new characters and poses by combining fragments of the original artwork. With a standard image-manipulation tool this task is tedious and time consuming. To reduce the amount of manual intervention, we combine unsupervised image segmentation, fragment extraction, and high-quality vectorization. The user can simply select an interesting part of the original image and then adjust it in a new composition using a few control scribbles. Thanks to its ease of manipulation, the proposed sketch-based interface is suitable both for experienced artists and for unskilled users (e.g. children) who wish to create new stories in the style of the masters. Practical results confirm that our framework lets high-quality cartoon drawings be produced in much shorter time frames than standard approaches.

Video Codec for Classical Cartoon Animations with Hardware Accelerated Playback

  • DOI: 10.1007/11595755_6
  • Odkaz: https://doi.org/10.1007/11595755_6
  • Pracoviště: Katedra počítačů, Katedra počítačové grafiky a interakce
  • Anotace:
    We introduce a novel approach to video compression that is suitable for traditional outline-based cartoon animations. In this case, the dynamic foreground consists of several homogeneous regions and the background is a static textural image. For this drawing style we show how to recover a hybrid representation in which the background is stored as a single bitmap and the foreground as a sequence of vector images. This allows us to preserve compelling visual quality as well as spatial scalability even at low encoding bit-rates. We also introduce an efficient approach to playing back compressed animations in real time on commodity graphics hardware. Practical results confirm that, for the same storage requirements, our framework provides better visual quality than standard video compression techniques.

Color for Black-and-White Cartoons

  • Pracoviště: Katedra počítačů
  • Anotace:
    In this paper we discuss the challenging problem of adding new color information to old black-and-white cartoons. Because the original analogue material can be converted into a sequence of digital images, we are able to solve this task using digital image-processing methods.

Jak Rumcajs k barvám přišel

  • Pracoviště: Katedra počítačů
  • Anotace:
    Almost everyone knows Radek Pilař's original TV bedtime series "O loupežníku Rumcajsovi". Few people realize, however, that the series is so old that its first season was produced and broadcast in black and white. Even so, this artistic gem was broadcast in color in March of this year as an evening children's program (večerníček) on ČT. Despite the announcer's introductory commentary, this remained unexplained to the ordinary viewer. Young viewers did not even notice that they had just watched the result of work employing state-of-the-art digital technologies that are, in principle, on par with those behind the latest productions of studios such as Disney, Pixar, or DreamWorks.

Unsupervised Colorization of Black-and-White Cartoons

  • DOI: 10.1145/987657.987677
  • Odkaz: https://doi.org/10.1145/987657.987677
  • Pracoviště: Katedra počítačů
  • Anotace:
    We present a novel color-by-example technique which combines image segmentation, patch-based sampling, and probabilistic reasoning. The method is able to automate colorization when new color information is applied to an already designed black-and-white cartoon. Our technique is especially suitable for cartoons digitized from classical celluloid films, which were originally produced by a paper- or cel-based method. We assume that objects in the foreground layer consist of several well-visible outlines that emphasize the shape of homogeneous regions.

Segmentation of Black and White Cartoons

Responsible for this page: Ing. Mgr. Radovan Suk