
Ing. Zdeněk Rozsypálek

All publications

Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation

  • DOI: 10.3390/s22082975
  • Link: https://doi.org/10.3390/s22082975
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
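
The core operation described above, registering the taught and the current image through a learned dense representation, can be illustrated with a short sketch. This is a minimal toy version under assumptions of mine, not the authors' published model: the encoder layout, the vertical pooling and the correlation step are all illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseEncoder(nn.Module):
        """Tiny fully convolutional encoder yielding a dense feature map."""
        def __init__(self, channels: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, channels, kernel_size=3, stride=2, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    def horizontal_displacement(encoder: nn.Module,
                                taught: torch.Tensor,
                                current: torch.Tensor) -> int:
        """Estimate the horizontal shift between two (1, 3, H, W) images,
        expressed in feature-map pixels."""
        f_t = encoder(taught).mean(dim=2)   # (1, C, w): collapse vertical axis
        f_c = encoder(current).mean(dim=2)
        w = f_t.shape[-1]
        # Full 1-D cross-correlation of the two feature strips along the
        # width; the peak of the response marks the most likely shift.
        corr = F.conv1d(F.pad(f_t, (w - 1, w - 1)), f_c)  # (1, 1, 2w - 1)
        return int(corr.argmax()) - (w - 1)

Since this toy encoder downsamples by a factor of four, the returned shift would be scaled by four to obtain a displacement in image pixels; the contrastive training described in the paper is what makes such representations stable across seasons and illumination.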

Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation

  • DOI: 10.3390/s22082836
  • Link: https://doi.org/10.3390/s22082836
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, code and trained models online.
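
One half of the described fusion, the point-feature matcher that produces displacement pseudo-labels for training, could look roughly like the sketch below. The choice of ORB features, the confidence test and the median aggregation are my assumptions for illustration, not necessarily the pipeline's actual components.

    import cv2
    import numpy as np

    def feature_displacement(img_a, img_b, min_matches=30):
        """Median horizontal offset of matched ORB keypoints, or None if
        the matching is not reliable enough to use as a pseudo-label.

        img_a, img_b: grayscale uint8 images of the same size.
        """
        orb = cv2.ORB_create(nfeatures=1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return None
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des_a, des_b)
        if len(matches) < min_matches:
            return None  # not confident: emit no pseudo-label for this frame
        # Horizontal offsets of the matched keypoint pairs; the median
        # rejects outliers that survive the cross-check.
        dx = [kp_b[m.trainIdx].pt[0] - kp_a[m.queryIdx].pt[0] for m in matches]
        return float(np.median(dx))

Every frame pair for which this returns a value can then serve as a self-supervised training example (image pair plus displacement) for the Siamese network, which in turn supplies the coarse shift that narrows the matcher's search on later traversals.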

Semi-supervised Learning for Image Alignment in Teach and Repeat Navigation

  • DOI: 10.1145/3477314.3507045
  • Link: https://doi.org/10.1145/3477314.3507045
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    Visual teach and repeat navigation (VT&R) is a framework that enables mobile robots to traverse previously learned paths. In principle, it relies on computer vision techniques that can compare the camera's current view to a model based on the images captured during the teaching phase. However, these techniques are usually not robust enough when significant changes occur in the environment between the teach and repeat phases. In this paper, we show that contrastive learning methods can learn how the environment changes and improve the robustness of a VT&R framework. We apply a fully convolutional Siamese network to register the images of the teaching and repeat phases. The horizontal displacement between the images is then used in a visual servoing manner to keep the robot on the intended trajectory. The experiments performed on several datasets containing seasonal variations indicate that our method outperforms state-of-the-art algorithms tailored to the purpose of registering images captured in different seasons.
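
The "visual servoing manner" mentioned above typically reduces to a simple proportional control law on the estimated displacement. The sketch below shows one plausible form; the gain, the forward speed and the sign convention are placeholder assumptions, not values from the paper.

    def steering_command(displacement_px: float, image_width: int,
                         gain: float = 0.5, forward_speed: float = 0.3):
        """Map a horizontal image displacement to a (linear, angular) velocity."""
        # Normalize to [-1, 1] so the gain is independent of image resolution.
        error = displacement_px / (image_width / 2.0)
        # Proportional visual servoing: turn so that the current view
        # re-aligns with the taught one; forward speed stays constant.
        return forward_speed, -gain * error

In practice, such a controller keeps driving forward while the registration network continuously corrects the heading, which is why the accuracy of the displacement estimate directly bounds how far the robot can drift from the taught trajectory.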

Mobile Manipulator for Autonomous Localization, Grasping and Precise Placement of Construction Material in a Semi-structured Environment

  • DOI: 10.1109/LRA.2021.3061377
  • Link: https://doi.org/10.1109/LRA.2021.3061377
  • Department: Katedra počítačů, Centrum umělé inteligence, Multirobotické systémy
  • Abstract:
    Mobile manipulators have the potential to revolutionize modern agriculture, logistics and manufacturing. In this work, we present the design of a ground-based mobile manipulator for automated structure assembly. The proposed system is capable of autonomous localization, grasping, transportation and deployment of construction material in a semi-structured environment. Special effort was put into making the system invariant to lighting changes and independent of external positioning systems. The presented system is therefore self-contained and capable of operating in outdoor and indoor conditions alike. Finally, we present a way to extend the vehicle's perception radius by using it in cooperation with an autonomous drone, which provides aerial reconnaissance. The performance of the proposed system was evaluated in a series of experiments conducted under real-world conditions.
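
At a very high level, the autonomy cycle the abstract lists (localize, grasp, transport, place) can be pictured as a small state machine. Everything below, including the phase names and the retry policy, is a hypothetical illustration rather than the system's actual control stack.

    from enum import Enum, auto

    class Phase(Enum):
        LOCALIZE = auto()    # find the next brick with the onboard camera
        GRASP = auto()       # align the arm and close the gripper
        TRANSPORT = auto()   # drive to the structure being assembled
        PLACE = auto()       # precise placement onto the structure
        DONE = auto()

    def next_phase(phase: Phase, success: bool) -> Phase:
        """Advance the pick-transport-place cycle; retry the phase on failure."""
        if not success:
            return phase  # e.g. re-detect the brick or re-attempt the grasp
        order = [Phase.LOCALIZE, Phase.GRASP, Phase.TRANSPORT,
                 Phase.PLACE, Phase.DONE]
        return order[min(order.index(phase) + 1, len(order) - 1)]

Keeping each phase self-contained and retryable is one common way to meet the self-sufficiency goal the abstract emphasizes: no step depends on external positioning infrastructure, only on onboard sensing.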
