All publications

Real Time Fiducial Marker Localisation System with Full 6 DOF Pose Estimation

  • DOI: 10.1145/3594264.3594266
  • Link: https://doi.org/10.1145/3594264.3594266
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    The ability to reliably determine its own position, as well as the positions of surrounding objects, is crucial for any autonomous robot. While this can be achieved with a certain degree of reliability, it is often practical to augment the environment with artificial markers that make these tasks easier. This applies especially to the evaluation of robotic experiments, which often requires exact ground truth data containing the positions of the robots. This paper proposes a new method for estimating the position and orientation of circular fiducial markers in 3D space. Simulated and real experiments show that our method achieved three times lower localisation error than the method it was derived from. The experiments also indicate that our method outperforms state-of-the-art systems in terms of orientation estimation precision while maintaining similar or better accuracy in position estimation. Moreover, our method is computationally efficient, allowing it to detect and localise several markers in a fraction of the time required by state-of-the-art fiducial marker systems. Furthermore, the presented method requires only an off-the-shelf camera and printed tags, can be set up quickly, and works outdoors in natural light. These properties make it a viable alternative to expensive high-end localisation systems.
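  • Illustrative sketch:
    The intuition behind single-image localisation of a circular marker can be captured in a few lines: under a pinhole camera model, the marker's apparent size gives its depth and the ellipse eccentricity gives its tilt. The code below is a minimal first-order approximation of that principle, not the paper's full 6 DOF method; the intrinsics (fx, fy, cx, cy), the marker diameter D and the ellipse parameters are assumed to come from an upstream detector.

        import numpy as np

        def marker_pose_approx(u, v, a, b, fx, fy, cx, cy, D):
            """Approximate 3D position and tilt of a circular marker of
            known diameter D [m] from its image ellipse: centre (u, v)
            [px], semi-axes a >= b [px]; pinhole intrinsics in pixels."""
            # Depth from apparent size: a frontal circle of radius D/2 at
            # depth z projects to a semi-major axis of ~fx * (D/2) / z px.
            z = fx * (D / 2.0) / a
            # Back-project the ellipse centre through the pinhole model.
            x = (u - cx) / fx * z
            y = (v - cy) / fy * z
            # Out-of-plane tilt: a tilted circle shortens one axis,
            # so b = a * cos(tilt).
            tilt = np.arccos(np.clip(b / a, 0.0, 1.0))
            return np.array([x, y, z]), tilt

    A full 6 DOF solution must additionally recover the in-plane rotation and resolve the two-fold tilt ambiguity that a single ellipse observation leaves open.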

Bootstrapped Learning for Car Detection in Planar Lidars

  • DOI: 10.1145/3477314.3507312
  • Link: https://doi.org/10.1145/3477314.3507312
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    We present a proof-of-concept method for using bootstrapped learning for car detection in lidar scans with neural networks. We transfer knowledge from a traditional hand-engineered clustering and geometry-based detection technique to deep-learning-based methods. The geometry-based method automatically annotates laser scans from a vehicle travelling around a static car park over a long period of time. We use these annotations to train the neural network automatically, and we evaluate the resulting detector against the original geometric method in various weather conditions. Furthermore, temporal filters let us find situations where the original method struggled or produced intermittent detections, automatically annotate these frames as well, and use them as part of the training process. Our evaluation indicates that, as sensing conditions deteriorate, the trained network is more accurate and robust than the method that was used to train it.
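  • Illustrative sketch:
    The bootstrapping loop can be summarised as follows: a hand-engineered geometric detector labels lidar scans, a crude temporal filter confirms detections across neighbouring frames, and the confirmed labels become training data for the network. All thresholds and the clustering rule below are illustrative assumptions, not the paper's actual parameters.

        import numpy as np

        ANGLE_INC = np.radians(0.5)   # assumed lidar angular resolution

        def geometric_detector(ranges):
            """Stand-in hand-engineered detector: split the scan into
            clusters at range discontinuities, keep car-sized clusters."""
            breaks = np.where(np.abs(np.diff(ranges)) > 0.5)[0] + 1
            clusters = np.split(np.arange(len(ranges)), breaks)
            cars = []
            for c in clusters:
                width = ranges[c].mean() * len(c) * ANGLE_INC  # approx. chord
                if 1.5 < width < 5.0:                          # car-sized
                    cars.append((int(c[0]), int(c[-1])))
            return cars

        def bootstrap_labels(scans):
            """Offline pass: keep a detection only if the detector also
            fires at a similar bearing in an adjacent frame."""
            raw = [geometric_detector(s) for s in scans]
            labels = []
            for t, dets in enumerate(raw):
                neigh = []
                if t > 0:
                    neigh += raw[t - 1]
                if t + 1 < len(raw):
                    neigh += raw[t + 1]
                labels.append([d for d in dets
                               if any(abs(d[0] - n[0]) < 10 for n in neigh)])
            return labels   # (scan, labels) pairs then train the network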

Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation

  • DOI: 10.3390/s22082975
  • Link: https://doi.org/10.3390/s22082975
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
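  • Illustrative sketch:
    The registration step itself can be pictured as a 1D sliding-window comparison of dense per-column descriptors of the taught and the live image. The sketch below assumes the learned representations are already given (e.g. a CNN feature map pooled over image rows into shape (C, W)); the network and its contrastive training are the paper's contribution and are not reproduced here.

        import numpy as np

        def horizontal_shift(feats_map, feats_live):
            """Displacement (in feature-map columns) that best aligns two
            (C, W) descriptor maps, by cosine similarity of the overlap."""
            _, W = feats_map.shape
            half = W // 2
            scores = []
            for s in range(-half, half + 1):
                a = feats_map[:, max(0, s):W + min(0, s)]
                b = feats_live[:, max(0, -s):W - max(0, s)]
                num = float((a * b).sum())
                den = np.linalg.norm(a) * np.linalg.norm(b) + 1e-9
                scores.append(num / den)
            return int(np.argmax(scores)) - half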

Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions

  • DOI: 10.3390/s22228855
  • Link: https://doi.org/10.3390/s22228855
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data containing all the various situations the robots may potentially encounter during their routine operation. Thus, the workforce required for data collection and annotation is a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an independent car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it collects data samples, which are subsequently processed into training samples for the neural networks. As the tracking method is applied offline, it can exploit detections made both before and after the currently processed scan, so the quality of the annotations exceeds that of the raw detections. Along with acquiring the labels, the weather simulator alters the raw sensory data, which are then fed into the neural network together with the labels. We show how this offline pipeline can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner, how such a framework produces an effective detector, and how the simulator-in-the-loop benefits the detector's robustness. Thus, our automatic data annotation pipeline significantly reduces not only the data annotation but also the data collection effort. This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, codes and models online.
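  • Illustrative sketch:
    The simulator-in-the-loop can be illustrated in a few lines: every auto-labelled scan is duplicated with synthetically degraded sensor data while its labels are kept, so the detector trains on adverse conditions it never met during collection. The rain model below is a toy stand-in; real LiDAR weather simulators model the underlying physics.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_rain(points, drop_rate=0.1, noise=0.03):
            """Toy weather model: randomly drop returns and jitter the
            surviving points (drop_rate and noise are assumptions)."""
            keep = rng.random(len(points)) > drop_rate
            kept = points[keep]
            return kept + rng.normal(0.0, noise, kept.shape)

        def build_training_set(scans, labels):
            """Pair each auto-labelled scan with an augmented copy."""
            samples = []
            for scan, label in zip(scans, labels):
                samples.append((scan, label))                 # clear weather
                samples.append((simulate_rain(scan), label))  # simulated rain
            return samples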

Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation

  • DOI: 10.3390/s22082836
  • Link: https://doi.org/10.3390/s22082836
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Abstract:
    The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network for visual teach-and-repeat (VT&R) tasks, in which a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, codes and trained models online.
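  • Illustrative sketch:
    The core of the fusion can be reduced to a simple rule (an illustrative assumption, not the paper's exact scheme): when point-feature matching finds enough inliers, its displacement estimate both steers the robot and serves as a self-supervised training target for the Siamese network; when it does not, the network's coarse estimate is used alone.

        def fuse_and_train(nn_shift, feat_shift, feat_inliers, train_step,
                           min_inliers=20):
            """Hypothetical fusion rule: the feature matcher teaches the
            network when it is reliable; the network covers its failures."""
            if feat_inliers >= min_inliers:
                train_step(target=feat_shift)  # self-supervised signal
                return feat_shift              # steer by the precise estimate
            return nn_shift                    # matcher failed; trust the net

        # Example: a match with 35 inliers overrides the network's estimate.
        shift = fuse_and_train(nn_shift=5, feat_shift=7, feat_inliers=35,
                               train_step=lambda target: None)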

Toward Benchmarking of Long-Term Spatio-Temporal Maps of Pedestrian Flows for Human-Aware Navigation

  • DOI: 10.3389/frobt.2022.890013
  • Link: https://doi.org/10.3389/frobt.2022.890013
  • Department: Centrum umělé inteligence
  • Abstract:
    Despite the advances in mobile robotics, the introduction of autonomous robots into human-populated environments is rather slow. One of the fundamental reasons is the acceptance of robots by the people directly affected by their presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people's natural motion or causing irritation. Researchers have explored various techniques for building spatio-temporal representations of people's presence and flows and have compared their applicability to planning optimal paths. However, many comparisons of dynamic map-building techniques only show how one method fares against another on a particular dataset; without consistent datasets and high-quality comparison metrics, it is difficult to assess how these methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods that model spatial and temporal phenomena independently outperform other modeling approaches. Our results provide a baseline for future work comparing the wide range of methods employed for long-term navigation and give researchers an understanding of how these methods compare in different scenarios.
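  • Illustrative sketch:
    One interpretable criterion in the spirit of the proposed methodology (a sketch under assumed interfaces, not the article's exact formulation): score a planned trajectory by the expected number of human encounters it induces, given a spatio-temporal model of human presence.

        def expected_encounters(path_cells, start_time, dt, presence):
            """Expected number of human encounters along a planned path.
            presence(cell, t) is the model's probability that a person
            occupies `cell` at time t; the path is assumed to visit one
            cell per time step dt [s]."""
            return sum(presence(c, start_time + i * dt)
                       for i, c in enumerate(path_cells))

    A time-aware model scores the same path differently at rush hour than at night, which is exactly the difference such a criterion should expose.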

Boosting the Performance of Object Detection CNNs with Context-Based Anomaly Detection

  • Authors: Ing. Jan Blaha, Broughton, G., doc. Ing. Tomáš Krajník, Ph.D.
  • Published in: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, vol. 349. Springer Nature, 2021, p. 159-176. ISSN 1867-8211. ISBN 978-3-030-67536-3.
  • Year: 2021
  • DOI: 10.1007/978-3-030-67537-0_11
  • Link: https://doi.org/10.1007/978-3-030-67537-0_11
  • Department: Centrum umělé inteligence
  • Abstract:
    In this paper, we employ anomaly detection methods to enhance the ability of object detectors by using the context of their detections. This has numerous potential applications, from boosting the performance of standard object detectors, to the preliminary validation of annotation quality, and even to robotic exploration and object search. We build our method on autoencoder networks for detecting anomalies, but instead of filtering incoming data by an anomaly score, as is usual, we focus on the individual features of the data representing an actual scene. We show that one can teach autoencoders about the contextual relationships of objects in images, i.e. the likelihood of co-detecting classes in the same scene. This can then be used to identify detections that do and do not fit with the rest of the current observations in the scene. We show that using this information yields better results than traditional thresholding when deciding whether weaker detections should be accepted as observed. The experiments performed show not only that our method significantly improves the performance of CNN object detectors, but also that it can be used as an efficient tool to discover incorrectly annotated images.
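  • Illustrative sketch:
    The contextual idea can be prototyped with a small autoencoder over per-image class-presence vectors: the network learns which classes tend to be co-detected in one scene, and a weak detection is accepted when the confidently detected context makes its class likely. Layer sizes, the acceptance threshold and the 80-class label set are illustrative assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn

        N_CLASSES = 80   # e.g. a COCO-style label set (an assumption)

        class ContextAE(nn.Module):
            """Autoencoder over class-presence vectors of a scene."""
            def __init__(self, n=N_CLASSES, k=16):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(n, k), nn.ReLU())
                self.dec = nn.Sequential(nn.Linear(k, n), nn.Sigmoid())

            def forward(self, x):
                return self.dec(self.enc(x))

        def accept_weak_detection(model, context_vec, cls, thr=0.5):
            """Keep a below-threshold detection of class `cls` if the
            reconstruction from confident detections rates it likely."""
            with torch.no_grad():
                recon = model(context_vec.unsqueeze(0)).squeeze(0)
            return bool(recon[cls] > thr)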

CHRONOROBOTICS: Representing the Structure of Time for Service Robots

  • DOI: 10.1145/3440084.3441195
  • Odkaz: https://doi.org/10.1145/3440084.3441195
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    Chronorobotics is the investigation of scientific methods that allow robots to adapt to and learn from the perpetual changes occurring in natural and human-populated environments. We present methods that introduce the notion of dynamics into spatial environment models, resulting in representations that provide service robots with the ability to predict future states of changing environments. Several long-term experiments indicate that the aforementioned methods gradually improve the efficiency of robots' autonomous operation over time. More importantly, the experiments indicate that chronorobotic concepts improve robots' ability to seamlessly merge into human-populated environments, which is important for their integration and acceptance in human societies.
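  • Illustrative sketch:
    A representative way to give environment models a notion of time, in the spirit of the frequency-based methods from this line of work (a minimal sketch, not a faithful reimplementation of any published system): model the probability of a binary environment state as its long-term mean plus a few periodic components, fitted by projecting the observations onto candidate frequencies such as daily and weekly cycles.

        import numpy as np

        def temporal_fit(times, states, periods):
            """Project binary observations onto candidate periods [s]."""
            t = np.asarray(times, float)
            s = np.asarray(states, float)
            mean = s.mean()
            comps = [(2 * np.pi / T,
                      np.mean((s - mean) * np.exp(-2j * np.pi * t / T)))
                     for T in periods]
            return mean, comps

        def temporal_predict(t, mean, comps):
            """Probability of the state holding at time t [s]."""
            p = mean + 2 * sum((g * np.exp(1j * w * t)).real
                               for w, g in comps)
            return float(np.clip(p, 0.0, 1.0))

        # Synthetic example: a door that tends to be open during the day.
        t = np.arange(0, 14 * 86400, 600.0)              # two weeks of data
        s = (np.sin(2 * np.pi * t / 86400) > 0).astype(float)
        mean, comps = temporal_fit(t, s, periods=[86400.0, 7 * 86400.0])
        print(temporal_predict(t[-1] + 86400.0, mean, comps))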

Natural Criteria for Comparison of Pedestrian Flow Forecasting Models

  • Authors: Vintr, T., Yan, Z., Eyisoy, K., Kubiš, F., Ing. Jan Blaha, Ing. Jiří Ulrich, Swaminathan, C., Molina, S., Kucner, T.P., Magnusson, M., Cielniak, G., prof. Ing. Jan Faigl, Ph.D., Duckett, T., Lilienthal, A.J., doc. Ing. Tomáš Krajník, Ph.D.
  • Published in: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE Robotics and Automation Society, 2020, p. 11197-11204. ISSN 2153-0866. ISBN 978-1-7281-6212-6.
  • Year: 2020
  • DOI: 10.1109/IROS45743.2020.9341672
  • Link: https://doi.org/10.1109/IROS45743.2020.9341672
  • Department: Centrum umělé inteligence
  • Abstract:
    Models of human behaviour, such as pedestrian flows, are beneficial for the safe and efficient operation of mobile robots. We present a new methodology for benchmarking pedestrian flow models based on the safety of robot navigation they afford in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the robot's operating time, as might reasonably be expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with operating time, because the planner avoids congested areas and times.
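  • Illustrative sketch:
    The evaluation loop from the abstract reduces to a simple count (array shapes are assumptions): plan the robot's trajectory with the model under test, replay the recorded pedestrians, and count the time steps at which any person comes closer than the social distance threshold.

        import numpy as np

        def count_encounters(robot_xy, people_xy, social_distance=1.0):
            """robot_xy: (T, 2) planned robot positions; people_xy:
            (T, N, 2) recorded pedestrian positions; returns the number
            of time steps with a person within social_distance [m]."""
            d = np.linalg.norm(people_xy - robot_xy[:, None, :], axis=-1)
            return int((d.min(axis=1) < social_distance).sum())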
