People
Ing. Tomáš Rouček, Ph.D.
All publications
Autonomous Tracking of Honey Bee Behaviors over Long-term Periods with Cooperating Robots
- Authors: Ing. Jiří Ulrich, Stefanec, M., Rekabi-Bana, F., Fedotoff, L.A., Ing. Tomáš Rouček, Ph.D., Gündeğer, B.Y., Saadat, M., Ing. Jan Blaha, Ing. Jiří Janota, Hofstadler, N., Žampachů, K., Keyvan, E.E., Erdem, B., Sahin, E., Alemdar, H., Turgut, A.E., Arvin, F., Schmickl, T., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Science Robotics. 2024, 9(95), ISSN 2470-9476.
- Year: 2024
- DOI: 10.1126/scirobotics.adn6848
- Link: https://doi.org/10.1126/scirobotics.adn6848
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Digital and mechatronic methods, paired with artificial intelligence and machine learning, are game-changing technologies in behavioral science. The central element of a colony of the most important pollinator species, the honeybee, is its queen. The behavioral strategies of these ecologically important organisms are under-researched due to the complexity of honeybees' self-regulation and the difficulties of studying queens in their natural colony context. We created an autonomous robotic observation and behavioral analysis system aimed at 24/7 observation of the queen and her interactions with worker bees and comb cells, generating unique behavioral datasets of unprecedented length and quality. Significant key performance indicators of the queen and her social embedding in the colony were gathered by this tailored yet versatile robotic system. Data collected over 24-hour and 30-day periods demonstrate our system's capability to extract key performance indicators on different system levels: microscopic, mesoscopic, and macroscopic data are collected in parallel. Additionally, interactions between various agents are observed and quantified. Long-term continuous observations performed by an autonomous robot yield large amounts of high-quality data, going significantly beyond what is feasibly obtainable with human observation methods or stationary camera systems. This allows a deep understanding of the innermost mechanisms of honeybees' swarm-intelligent self-regulation, as well as studying other social insect colonies, biocoenoses and ecosystems in novel ways. Social insects are keystone species in all ecosystems, so understanding them better will be valuable for monitoring, interpreting, protecting and even restoring our fragile ecosystems globally.
Effective Searching for the Honeybee Queen in a Living Colony
- Authors: Ing. Jan Blaha, Mikula, J., Vintr, T., Ing. Jiří Janota, Ing. Jiří Ulrich, Ing. Tomáš Rouček, Ph.D., Rekabi-Bana, F., Fedotoff, L.A., Stefanec, M., Schmickl, T., Arvin, F., Kulich, M., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE). IEEE Xplore, 2024. p. 3675-3682. ISSN 2161-8089. ISBN 979-8-3503-5851-3.
- Year: 2024
- DOI: 10.1109/CASE59546.2024.10711366
- Link: https://doi.org/10.1109/CASE59546.2024.10711366
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Despite the importance of honeybees as pollinators for the entire ecosystem and their recent decline threatening agricultural production, the dynamics of a living colony are not well understood. In our EU H2020 RoboRoyale project, we aim to support the pollination activity of honeybees through robots interacting with the core element of the honeybee colony: the honeybee queen. To achieve that, we need to understand how the honeybee queen behaves and interacts with the surrounding worker bees. To gather the necessary data, we observe the queen with a moving camera, and occasionally we instruct the system to perform selective observations elsewhere. In this paper, we deal with the problem of searching for the honeybee queen inside a living colony. We demonstrate that combining spatio-temporal models of queen presence with efficient search methods significantly decreases the time required to find her. This minimizes the chance of missing interesting data on infrequent behaviors or queen-worker interactions, leading to a better understanding of the queen's behavior over time. Moreover, a faster search for the queen allows the robot to leave her more frequently and gather more data in other areas of the honeybee colony.
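To make the search idea concrete, here is a minimal sketch of one greedy prioritisation over a comb discretised into a grid of queen-presence probabilities. The utility formula, grid size and values are illustrative assumptions, not the paper's exact planner:

```python
import numpy as np

def greedy_search_order(presence_prob, travel_cost):
    """Rank comb cells so that cells with high queen-presence probability
    and low camera travel cost are visited first."""
    utility = presence_prob / (travel_cost + 1e-9)
    return np.argsort(-utility.ravel())

# Illustrative 4x4 comb grid; in the paper, the probabilities would come
# from a learned spatio-temporal model of queen presence.
rng = np.random.default_rng(0)
presence = rng.random((4, 4))
presence /= presence.sum()
travel = rng.uniform(1.0, 5.0, size=(4, 4))    # cost of moving the camera
print("visit order (flattened cell ids):", greedy_search_order(presence, travel)[:5])
```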
Predictive Data Acquisition for Lifelong Visual Teach, Repeat and Learn
- Authors: Ing. Tomáš Rouček, Ph.D., Ing. Zdeněk Rozsypálek, Ing. Jan Blaha, Ing. Jiří Ulrich, doc. Ing. Tomáš Krajník, Ph.D.
- Publication: IEEE Robotics and Automation Letters. 2024, 9(11), 10042-10049. ISSN 2377-3766.
- Year: 2024
- DOI: 10.1109/LRA.2024.3421193
- Link: https://doi.org/10.1109/LRA.2024.3421193
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Nowadays, robots can operate in environments that are not tailored for them: recent advances in machine learning methods have enabled their deployment in changing and human-populated environments. The efficiency of these methods is largely determined by the quality of their training data, and an up-to-date, well-balanced training dataset is paramount for robust robot operation. To operate long-term, the robot has to deal with perpetual environmental changes, forcing it to keep its models up-to-date. We present an exploration method allowing a mobile robot to gather high-quality data to update its models both while performing its duties and when idle, maximizing effectiveness. The robot evaluates the quality of the data gathered in the past and, based on that, creates preferences that influence how often individual locations are visited. This exploration method was integrated with a self-supervised visual teach-and-repeat pipeline. We show that the precision and robustness of vision-based navigation improve when using machine-learned models trained by our exploration method. Our research resulted in a robotic navigation system that can not only annotate its training data but also ensure that its training dataset is balanced and up-to-date. The codes, datasets, trained models and examples for our experiments can be found online for better reproducibility at .
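The preference mechanism described above can be sketched as a weighting that sends the robot more often to locations where its past data were poor. The exponential weighting and the quality scores are illustrative assumptions, not the authors' exact formula:

```python
import numpy as np

def visit_preferences(past_quality, temperature=1.0):
    """Turn per-location model quality (higher = better) into visit
    probabilities: poorly modelled locations are visited more often."""
    need = np.exp(-np.asarray(past_quality) / temperature)
    return need / need.sum()

# Illustrative quality scores for five taught locations, e.g. average
# image-registration confidence over past traversals.
quality = [0.9, 0.4, 0.7, 0.2, 0.8]
prefs = visit_preferences(quality)
next_location = np.random.default_rng(1).choice(len(prefs), p=prefs)
print("visit probabilities:", np.round(prefs, 2), "-> next:", int(next_location))
```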
Toward Perpetual Occlusion-Aware Observation of Comb States in Living Honeybee Colonies
- Authors: Ing. Jan Blaha, Vintr, T., Mikula, J., Ing. Jiří Janota, Ing. Tomáš Rouček, Ph.D., Ing. Jiří Ulrich, Rekabi-Bana, F., Fedotoff, L.A., Stefanec, M., Schmickl, T., Arvin, F., Kulich, M., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024). Piscataway: IEEE, 2024. p. 5948-5955. ISSN 2153-0866. ISBN 979-8-3503-7770-5.
- Year: 2024
- DOI: 10.1109/IROS58592.2024.10801380
- Link: https://doi.org/10.1109/IROS58592.2024.10801380
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Honeybees are one of the most important pollinators in the ecosystem. Unfortunately, the dynamics of living honeybee colonies are not well understood due to their complexity and the difficulty of observation. In our project 'RoboRoyale', we build and operate a robot that is part of a bio-hybrid system, which currently observes the honeybee queen in the colony and physically tracks her with a camera. Apart from tracking and observing the queen, the system needs to monitor the state of the honeybee comb, which is occluded by worker bees most of the time. This introduces a necessary tradeoff between tracking the queen and visiting the rest of the hive to create a daily map. We aim to collect the necessary data more effectively. We evaluate several mapping methods that consider the previous observations and forecasted densities of bees occluding the view. To predict the presence of bees, we use previously established maps of dynamics developed for autonomy in human-populated environments. Using data from the last observational season, we show a significant improvement of the informed comb-mapping methods over our current system. This will allow us to use our resources more effectively in the upcoming season.
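A minimal sketch of what an "informed" comb-mapping decision can look like, assuming a per-cell occlusion forecast from a map of dynamics; the utility function and numbers are illustrative, not one of the evaluated methods:

```python
import numpy as np

def mapping_utility(staleness_h, predicted_occlusion):
    """Utility of photographing a comb cell: prefer cells whose last image
    is old and which are forecast to be least occluded by bees."""
    return staleness_h * (1.0 - predicted_occlusion)

staleness = np.array([2.0, 10.0, 5.0, 24.0])   # hours since the last image
occlusion = np.array([0.9, 0.3, 0.5, 0.8])     # forecast bee coverage, 0..1
print("photograph cell", int(np.argmax(mapping_utility(staleness, occlusion))))
```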
Federated Reinforcement Learning for Collective Navigation of Robotic Swarms
- Authors: Na, S., Ing. Tomáš Rouček, Ph.D., Ing. Jiří Ulrich, Pikman, J., doc. Ing. Tomáš Krajník, Ph.D., Lennox, B., Arvin, F.
- Publication: IEEE Transactions on Cognitive and Developmental Systems. 2023, 15(4), 2122-2131. ISSN 2379-8920.
- Year: 2023
- DOI: 10.1109/TCDS.2023.3239815
- Link: https://doi.org/10.1109/TCDS.2023.3239815
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
The recent advancement of deep reinforcement learning (DRL) contributed to robotics by allowing automatic controller design. Automatic controller design is a crucial approach for swarm robotic systems, which require more complex controllers than a single-robot system to achieve a desired collective behavior. Although DRL-based controller design has shown its effectiveness for swarm robotic systems, reliance on a central training server is a critical problem in real-world environments where robot-server communication is unstable or limited. We propose a novel federated learning (FL)-based DRL training strategy, Federated Learning DDPG (FLDDPG), for use in swarm robotic applications. A comparison with baseline strategies under a limited communication bandwidth scenario shows that FLDDPG results in higher robustness and better generalization to a different environment and to real robots, while the baseline strategies suffer from the limited communication bandwidth. This result suggests that the proposed method can benefit swarm robotic systems operating in environments with limited communication bandwidth, e.g., in high-radiation, underwater, or subterranean environments.
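The aggregation step at the heart of an FL-based strategy like FLDDPG can be sketched as FedAvg-style parameter averaging; the toy two-layer "actor" below stands in for the real DDPG networks, and local training is omitted:

```python
import numpy as np

def federated_average(weight_sets):
    """FedAvg-style aggregation: element-wise mean of the robots' local
    network parameters, broadcast back as the new shared model."""
    return [np.mean(layer, axis=0) for layer in zip(*weight_sets)]

# Three robots, each holding a toy two-layer actor after a local update.
rng = np.random.default_rng(0)
robots = [[rng.normal(size=(4, 8)), rng.normal(size=(8,))] for _ in range(3)]
global_model = federated_average(robots)
print([w.shape for w in global_model])   # [(4, 8), (8,)]
```

Exchanging averaged weights at a low rate, instead of streaming experience to a server, is what makes such schemes attractive when robot-server bandwidth is scarce.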
Mechatronic Design for Multi Robots-Insect Swarms Interactions
- Authors: Rekabi-Bana, F., Stefanec, M., Ing. Jiří Ulrich, Keyvan, E.E., Ing. Tomáš Rouček, Ph.D., Broughton, G., Gundeger, B.Y., Sahin, O., Turgut, A.E., Sahin, E., doc. Ing. Tomáš Krajník, Ph.D., Schmickl, T., Arvin, F.
- Publication: Proceedings of 2023 IEEE International Conference on Mechatronics. IEEE Xplore, 2023. ISBN 978-1-6654-6661-5.
- Year: 2023
- DOI: 10.1109/ICM54990.2023.10102026
- Link: https://doi.org/10.1109/ICM54990.2023.10102026
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
This paper presents the concept of a robotic system collaborating with a swarm of social insects inside their hive. The robot consists of a micro-manipulator, a macro-manipulator and a tracking system. The micro-manipulator uses bio-mimetic agents to interact with an individual specimen. The macro-manipulator positions and keeps the micro-manipulator's base around the given individual as it moves in the hive. The individual is tracked by a fiducial marker-based visual detection and localisation system, which also provides the positions of the bio-mimetic agents. The base of the system was experimentally verified in a honeybee observation hive, where it flawlessly tracked the honeybee queen for several hours, gathering sufficient data to extract the behaviours of honeybee workers in the queen's vicinity. These data were then used in simulation to verify whether the micro-manipulator's bio-mimetic agents could mimic some of the honeybee workers' behaviours.
Multidimensional Particle Filter for Long-Term Visual Teach and Repeat in Changing Environments
- Authors: Ing. Zdeněk Rozsypálek, Ing. Tomáš Rouček, Ph.D., Vintr, T., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: IEEE Robotics and Automation Letters. 2023, 8(4), 1951-1958. ISSN 2377-3766.
- Year: 2023
- DOI: 10.1109/LRA.2023.3244418
- Link: https://doi.org/10.1109/LRA.2023.3244418
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
When a mobile robot is asked to navigate intelligently in an environment, it needs to estimate its own state and the environment's state. One of the popular methods for robot state and position estimation is particle filtering (PF). Visual Teach and Repeat (VT&R) is a type of navigation that uses a camera to guide the robot along a previously traversed path. In VT&R, particle filters are usually used to fuse odometry and camera data to estimate the distance traveled along the path. However, there are other valuable states the robot can benefit from, especially when moving through changing environments. We propose a multidimensional particle filter to estimate the robot state in VT&R navigation. Apart from the traveled distance, our particle filter estimates the lateral and heading deviation from the taught path as well as the current appearance of the environment. This appearance is estimated using maps created across various environmental conditions recorded during previous traversals. The joint state estimation is based on a contrastive neural network architecture, allowing self-supervised learning. This architecture can process multiple images in parallel, alleviating the potential overhead caused by computing the particle filter over the maps simultaneously. Our experiments show that the joint robot/environment state estimation improves navigation accuracy and robustness in a continual mapping setup. Unlike other frameworks, which treat the robot position and environment appearance separately, our PF represents them as one multidimensional state, resulting in a more general uncertainty model for VT&R.
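A toy sketch of the multidimensional state idea: each particle carries the traveled distance, lateral offset, heading error and a discrete appearance-map index, as in the abstract; the noise levels, map count and the likelihood stand-in are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# Per-particle state: [distance, lateral offset, heading error, map index].
particles = np.zeros((N, 4))
particles[:, 3] = rng.integers(0, 3, size=N)   # 3 appearance maps
weights = np.ones(N) / N

def predict(particles, odom_step, noise=0.05):
    particles[:, 0] += odom_step + rng.normal(0, noise, len(particles))
    return particles

def update(particles, weights, likelihood):
    weights = weights * likelihood(particles)
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Stand-in likelihood: registration agrees best with appearance map 1
# near distance 1.0 (in the paper this comes from the contrastive network).
lik = lambda p: np.exp(-(p[:, 0] - 1.0) ** 2) * np.where(p[:, 3] == 1, 2.0, 1.0)
particles = predict(particles, odom_step=1.0)
particles, weights = update(particles, weights, lik)
print("distance estimate:", round(particles[:, 0].mean(), 3))
```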
Performance Comparison of Visual Teach and Repeat Systems for Mobile Robots
- Authors: Simon, M., Broughton, G., Ing. Tomáš Rouček, Ph.D., Ing. Zdeněk Rozsypálek, doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Modelling and Simulation for Autonomous Systems (MESAS 2022). Springer, Cham, 2023. p. 3-24. LNCS, vol. 13866. ISSN 0302-9743. ISBN 978-3-031-31267-0.
- Year: 2023
- DOI: 10.1007/978-3-031-31268-7_1
- Link: https://doi.org/10.1007/978-3-031-31268-7_1
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
In practical work scenarios, it is often necessary to repeat specific tasks, which include navigating along a desired path. Visual teach and repeat systems are a type of autonomous navigation in which a robot repeats a previously taught path using a camera and dead reckoning. Many teach and repeat methods have been proposed in the literature, but only a few are open-source. In this paper, we compare four recently published open-source methods and a proprietary Boston Dynamics solution embedded in a Spot robot. The intended use of each method differs, which has an impact on their strengths and weaknesses. When deciding which method to use, factors such as the environment and the desired precision and speed should be taken into consideration. For example, in controlled artificial environments, which do not change significantly, navigation precision and speed are more important than robustness to environment variations. The appearance of unstructured natural environments, however, varies over time, making robustness to changes a crucial property for outdoor navigation systems. This paper compares the speed, precision, reliability, robustness, and practicality of the available teach and repeat methods. We outline their flaws and strengths, helping to choose the most suitable method for a particular application.
Real Time Fiducial Marker Localisation System with Full 6 DOF Pose Estimation
- Authors: Ing. Jiří Ulrich, Ing. Jan Blaha, Alsayed, A., Ing. Tomáš Rouček, Ph.D., Arvin, F., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: ACM SIGAPP Applied Computing Review. 2023, 23(1), 20-35. ISSN 1559-6915.
- Year: 2023
- DOI: 10.1145/3594264.3594266
- Link: https://doi.org/10.1145/3594264.3594266
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
The ability to reliably determine its own position, as well as the positions of surrounding objects, is crucial for any autonomous robot. While this can be achieved with a certain degree of reliability, it is often practical to augment the environment with artificial markers that make these tasks easier. This applies especially to the evaluation of robotic experiments, which often requires exact ground truth data containing the positions of the robots. This paper proposes a new method for estimating the position and orientation of circular fiducial markers in 3D space. Simulated and real experiments show that our method achieves three times lower localisation error than the method it was derived from. The experiments also indicate that our method outperforms state-of-the-art systems in terms of orientation estimation precision while maintaining similar or better accuracy in position estimation. Moreover, our method is computationally efficient, allowing it to detect and localise several markers in a fraction of the time required by state-of-the-art fiducial marker systems. Furthermore, the presented method requires only an off-the-shelf camera and printed tags, can be quickly set up, and works outdoors in natural light. These properties make it a viable alternative to expensive high-end localisation systems.
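For intuition only: the position (not the full 6-DOF pose) of a circular marker can already be recovered from a fitted ellipse with plain pinhole geometry. The sketch below is this simplification, not the paper's conic-based estimator, and all camera parameters are illustrative:

```python
import numpy as np

def circle_marker_position(u, v, major_axis_px, diameter_m, fx, fy, cx, cy):
    """Back-project a detected circular marker (ellipse centre and major
    axis in pixels) into 3D. The major axis of the projected ellipse is
    hardly affected by tilt, so it yields the distance; orientation would
    need the full conic analysis and is not recovered here."""
    z = fx * diameter_m / major_axis_px
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# A 5 cm marker seen as a 40 px ellipse at pixel (400, 300).
print(circle_marker_position(400, 300, 40.0, 0.05, fx=600, fy=600, cx=320, cy=240))
```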
A Vision-based System for Social Insect Tracking
- Authors: Žampachů, K., Ing. Jiří Ulrich, Ing. Tomáš Rouček, Ph.D., Stefanec, M., Dvořáček, D., Fedotoff, L., Hofstadler, D.N., Rekabi-Bana, F., Broughton, G., Arvin, F., Schmickl, T., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: 2022 2nd International Conference on Robotics, Automation and Artificial Intelligence. IEEE Xplore, 2022. p. 277-283. ISBN 978-1-6654-5944-0.
- Year: 2022
- DOI: 10.1109/RAAI56146.2022.10092977
- Link: https://doi.org/10.1109/RAAI56146.2022.10092977
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Social insects, especially honeybees, play an essential role in nature, and their recent decline threatens the stability of many ecosystems. The behaviour of social insect colonies is typically governed by a central individual, e.g., the honeybee queen. The RoboRoyale project aims to use robots to interact with the queen to affect her behaviour and the entire colony's activity. This paper presents a necessary component of such a robotic system: a method capable of real-time detection, localisation, and tracking of the honeybee queen inside a large colony. To overcome problems with occlusions and computational complexity, we propose to combine two vision-based methods for fiducial marker localisation and tracking. Experiments performed on data captured inside beehives demonstrate that the resulting algorithm outperforms its predecessors in terms of detection precision, recall, and localisation accuracy. The achieved performance allowed us to integrate the method into a larger system capable of physically tracking a honeybee queen inside her colony. The ability to observe the queen in fine detail for prolonged periods of time has already resulted in unique observations of queen-worker interactions. This knowledge will be crucial for designing a system capable of interacting with the honeybee queen and affecting her activity.
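One common way two such methods complement each other is a confidence-gated fallback: a cheap local search around the last known marker position, with a full-frame detection to recover lost tracks. The function names and the stand-in detectors below are hypothetical; the paper combines two specific fiducial-marker methods:

```python
def track_queen(frame, last_pos, detect_full, detect_local, conf_threshold=0.5):
    """Try the cheap local tracker first; fall back to the slower
    full-frame detector after occlusions or when confidence drops."""
    if last_pos is not None:
        pos, conf = detect_local(frame, last_pos)
        if conf >= conf_threshold:
            return pos
    return detect_full(frame)

# Illustrative stand-ins for the two vision-based methods.
detect_local = lambda frame, around: ((around[0] + 1, around[1]), 0.9)
detect_full = lambda frame: (0, 0)
print(track_queen(None, (10, 20), detect_full, detect_local))   # (11, 20)
```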
Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
- Authors: Ing. Zdeněk Rozsypálek, Broughton, G., Linder, P., Ing. Tomáš Rouček, Ph.D., Ing. Jan Blaha, Mentzl, L., Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Sensors. 2022, 22(8), ISSN 1424-8220.
- Year: 2022
- DOI: 10.3390/s22082975
- Link: https://doi.org/10.3390/s22082975
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
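One way to read "dense representations used to find a horizontal displacement" is a sliding correlation over per-column embeddings. The sketch below assumes that interpretation; random vectors stand in for the network's output:

```python
import numpy as np

def horizontal_shift(desc_a, desc_b):
    """Slide two dense per-column descriptor maps (width x dim) against
    each other; return the offset with maximal correlation."""
    w = desc_a.shape[0]
    scores = [np.sum(desc_a[max(0, -s):w - max(0, s)] *
                     desc_b[max(0, s):w - max(0, -s)])
              for s in range(-w // 2, w // 2)]
    return int(np.argmax(scores)) - w // 2

rng = np.random.default_rng(0)
a = rng.normal(size=(64, 16))   # stand-in for the CNN's column embeddings
b = np.roll(a, 7, axis=0)       # second image displaced by 7 columns
print("estimated shift:", horizontal_shift(a, b))   # 7
```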
Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions
- Authors: Broughton, G., Ing. Jiří Janota, Ing. Jan Blaha, Ing. Tomáš Rouček, Ph.D., Simon, M., Vintr, T., Yang, T., Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Sensors. 2022, 22(22), 1-22. ISSN 1424-8220.
- Year: 2022
- DOI: 10.3390/s22228855
- Link: https://doi.org/10.3390/s22228855
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data covering all the various situations the robots may encounter during routine operation. The workforce required for data collection and annotation is thus a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an independent car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it collects data samples, which are subsequently processed into training samples for the neural networks. As the tracking method is applied offline, it can exploit detections made both before the currently processed scan and in subsequent observations of the same scene, meaning the quality of the annotations exceeds that of the raw detections. Along with the acquisition of the labels, the weather simulator alters the raw sensory data, which are then fed into the neural network together with the labels. We show how this pipeline, run offline, can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner, and how the simulator-in-the-loop is beneficial for the robustness of the resulting detector. Our automatic data annotation pipeline thus significantly reduces not only the data annotation but also the data collection effort. This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, codes and models online.
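The offline character of the pipeline can be sketched as three pluggable stages; all three components below are toy stand-ins for the real hand-coded LiDAR detector, the tracker that also exploits future frames, and the physics-based weather simulator:

```python
import numpy as np

def auto_label_pipeline(scans, hand_detector, track_smooth, add_fog):
    """Run a clear-weather detector over the whole sequence, smooth the
    detections offline (the tracker may look ahead), then pair weather-
    augmented scans with the improved labels as training data."""
    raw = [hand_detector(s) for s in scans]
    labels = track_smooth(raw)                    # better than raw detections
    return [(add_fog(s), l) for s, l in zip(scans, labels)]

rng = np.random.default_rng(0)
scans = [rng.normal(size=8) for _ in range(5)]
hand_detector = lambda s: float(s.mean() > 0)              # toy car flag
track_smooth = lambda d: list(np.convolve(d, [1 / 3] * 3, mode="same"))
add_fog = lambda s: s + rng.normal(0, 0.5, size=s.shape)   # toy attenuation
print(len(auto_label_pipeline(scans, hand_detector, track_smooth, add_fog)),
      "training pairs")
```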
Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation
- Authors: Ing. Tomáš Rouček, Ph.D., Amjadi, A., Ing. Zdeněk Rozsypálek, Broughton, G., Ing. Jan Blaha, Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Sensors. 2022, 22(8), ISSN 1424-8220.
- Year: 2022
- DOI: 10.3390/s22082836
- Link: https://doi.org/10.3390/s22082836
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network for visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, codes and trained models online.
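The feature-matching side of such a fusion can supply displacement labels by histogram voting over matched keypoints, which tolerates outlier matches; the voting scheme below is a common choice assumed here, not quoted from the paper:

```python
import numpy as np

def displacement_from_matches(pts_a, pts_b, width, bins=128):
    """Vote over horizontal differences of matched keypoints; the winning
    bin gives a robust displacement usable as a self-supervised label
    for the neural image-registration branch."""
    diffs = pts_b[:, 0] - pts_a[:, 0]
    hist, edges = np.histogram(diffs, bins=bins, range=(-width, width))
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])   # centre of winning bin

rng = np.random.default_rng(0)
a = rng.uniform(0, 640, size=(100, 2))          # keypoints in taught image
b = a + [25.0, 0.0]                             # matches shifted by 25 px
b[::5] += rng.uniform(-200.0, 200.0, (20, 2))   # 20 % corrupted matches
print("label for the network:", displacement_from_matches(a, b, 640))  # 25.0
```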
Semi-supervised Learning for Image Alignment in Teach and Repeat Navigation
- Authors: Ing. Zdeněk Rozsypálek, Broughton, G., Linder, P., Ing. Tomáš Rouček, Ph.D., Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing. New York: ACM, 2022. p. 731-738. ISBN 978-1-4503-8713-2.
- Year: 2022
- DOI: 10.1145/3477314.3507045
- Link: https://doi.org/10.1145/3477314.3507045
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Visual teach and repeat navigation (VT&R) is a framework that enables mobile robots to traverse previously learned paths. In principle, it relies on computer vision techniques that compare the camera's current view to a model built from the images captured during the teaching phase. However, these techniques are usually not robust enough when significant changes occur in the environment between the teach and repeat phases. In this paper, we show that contrastive learning methods can learn how the environment changes and improve the robustness of a VT&R framework. We apply a fully convolutional Siamese network to register the images of the teaching and repeat phases. The horizontal displacement between the images is then used in a visual servoing manner to keep the robot on the intended trajectory. Experiments performed on several datasets containing seasonal variations indicate that our method outperforms state-of-the-art algorithms tailored to the purpose of registering images captured in different seasons.
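The "visual servoing manner" reduces to a proportional controller on the estimated displacement; a minimal sketch, with the gain and saturation values as illustrative assumptions:

```python
def steering_correction(shift_px, image_width, gain=0.4, max_turn=0.3):
    """Convert the horizontal displacement between the taught and current
    image into a bounded angular-velocity command that steers the robot
    back towards the taught trajectory."""
    normalized = shift_px / (image_width / 2)          # roughly -1 .. 1
    return max(-max_turn, min(max_turn, gain * normalized))

# Taught image appears 48 px to the left -> turn left proportionally.
print(round(steering_correction(-48, image_width=640), 3))   # -0.06
```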
Toward Benchmarking of Long-Term Spatio-Temporal Maps of Pedestrian Flows for Human-Aware Navigation
- Authors: Vintr, T., Ing. Jan Blaha, Ing. Martin Rektoris, Ing. Jiří Ulrich, Ing. Tomáš Rouček, Ph.D., Broughton, G., Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Frontiers in Robotics and AI. 2022, 9, ISSN 2296-9144.
- Year: 2022
- DOI: 10.3389/frobt.2022.890013
- Link: https://doi.org/10.3389/frobt.2022.890013
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Despite the advances in mobile robotics, the introduction of autonomous robots into human-populated environments is rather slow. One of the fundamental reasons is the acceptance of robots by the people directly affected by their presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people's natural motion and causing irritation. Research has exploited various techniques to build spatio-temporal representations of people's presence and flows and compared their applicability to planning optimal paths in the future. Existing comparisons of dynamic map-building techniques typically show how one method performs against another on a particular dataset; without consistent datasets and high-quality comparison metrics, however, it is difficult to assess how these methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods independently modeling spatial and temporal phenomena outperformed other modeling approaches. Our results provide a baseline for future work comparing the wide range of methods employed for long-term navigation and give researchers an understanding of how these methods perform in various scenarios.
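One concrete shape such an interpretable criterion can take is the mean log-likelihood of observed pedestrian presence under each model's predictions; the sketch below illustrates the genre and is an assumed example, not the article's exact metric:

```python
import numpy as np

def presence_log_likelihood(predicted_prob, observed):
    """Mean log-likelihood of binary presence observations under a model's
    predicted probabilities; higher (closer to zero) is better."""
    p = np.clip(predicted_prob, 1e-6, 1 - 1e-6)
    return np.mean(observed * np.log(p) + (1 - observed) * np.log(1 - p))

observed = np.array([1, 0, 0, 1, 1])             # pedestrian seen in slot?
model_a = np.array([0.8, 0.2, 0.1, 0.7, 0.9])    # spatio-temporal model
model_b = np.full(5, 0.5)                        # uninformed baseline
print(presence_log_likelihood(model_a, observed) >
      presence_log_likelihood(model_b, observed))   # True
```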
Learning to see through the haze: Multi-sensor learning-fusion System for Vulnerable Traffic Participant Detection in Fog
- Authors: Broughton, G., Majer, F., Ing. Tomáš Rouček, Ph.D., Ruichek, Y., Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Robotics and Autonomous Systems. 2021, 136, ISSN 0921-8890.
- Year: 2021
- DOI: 10.1016/j.robot.2020.103687
- Link: https://doi.org/10.1016/j.robot.2020.103687
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
We present an experimental investigation of a multi-sensor fusion-learning system for detecting pedestrians in foggy weather conditions. The method combines two people-detection pipelines running on two different sensors commonly found on moving vehicles: lidar and radar. The two pipelines are not only combined by sensor fusion; information from one pipeline is also used to train the other. We build upon our previous work, where we showed that a lidar pipeline can be used to train a Support Vector Machine (SVM)-based pipeline to interpret radar data, which is useful when conditions become unfavourable to the original lidar pipeline. In this paper, we test the method in a wider range of conditions, such as from a moving vehicle and with multiple people present. Additionally, we compare how the traditional SVM performs against a modern deep neural network in interpreting the radar data in these experiments. Our experiments indicate that either approach results in a progressive improvement in performance during normal operation. Further, they indicate that in the event of the loss of information from a sensor, pedestrian detection and position estimation remain effective.
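The cross-sensor self-supervision idea (lidar detections labelling radar data) can be sketched with a standard SVM; the radar descriptors and the labelling rule below are synthetic stand-ins:

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of cross-sensor self-supervision: the lidar pipeline's detections
# serve as labels for radar feature vectors gathered at the same instants.
rng = np.random.default_rng(0)
radar_features = rng.normal(size=(200, 6))          # toy radar descriptors
lidar_labels = (radar_features[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(int)

svm = SVC(kernel="rbf").fit(radar_features, lidar_labels)
# Once trained, the radar classifier keeps working when fog blinds the lidar.
print("training accuracy:", svm.score(radar_features, lidar_labels))
```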
Robust Image Alignment for Outdoor Teach-and-Repeat Navigation
- Authors: Broughton, G., Linder, P., Ing. Tomáš Rouček, Ph.D., Vintr, T., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: Proceedings of the 10th European Conference on Mobile Robots. Brussels: IEEE, 2021. ISBN 978-1-6654-1213-1.
- Year: 2021
- DOI: 10.1109/ECMR50962.2021.9568832
- Link: https://doi.org/10.1109/ECMR50962.2021.9568832
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Visual Teach-and-Repeat robot navigation suffers from environmental changes over time, and it struggles in real-world long-term deployments. We propose a robust robot bearing correction method based on traditional principles, aided by the abstraction provided by the higher layers of widely available pre-trained Convolutional Neural Networks (CNNs). Our method applies a two-dimensional Fast Fourier Transform-based approach over several convolution filters from the higher levels of a CNN to robustly estimate the alignment between two corresponding images. The method also estimates its own uncertainty, which is essential for the navigation system to decide how much it can trust the bearing correction. We show that our "learning-free" method is comparable with state-of-the-art methods when the environmental conditions change only slightly, but it outperforms them at night.
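The FFT-based alignment can be sketched with plain phase correlation on a single feature map; a random array stands in for one CNN channel, and the paper additionally aggregates over several filters and estimates uncertainty:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the 2D translation turning feature map a into b via the
    normalised cross-power spectrum (phase correlation)."""
    f = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy   # wrap to signed
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 64))   # stand-in for one convolution response
print(phase_correlation_shift(feat, np.roll(feat, (3, -9), axis=(0, 1))))  # (3, -9)
```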
CHRONOROBOTICS: Representing the Structure of Time for Service Robots
- Authors: doc. Ing. Tomáš Krajník, Ph.D., Vintr, T., Broughton, G., Majer, F., Ing. Tomáš Rouček, Ph.D., Ing. Jiří Ulrich, Ing. Jan Blaha, Pěčonková, V., Ing. Martin Rektoris
- Publication: ISCSIC 2020: Proceedings of the 2020 4th International Symposium on Computer Science and Intelligent Control. New York: Association for Computing Machinery, 2020. ISBN 978-1-4503-8889-4.
- Year: 2020
- DOI: 10.1145/3440084.3441195
- Link: https://doi.org/10.1145/3440084.3441195
- Departments: Katedra počítačů, Centrum umělé inteligence
Abstract:
Chronorobotics is the investigation of scientific methods allowing robots to adapt to and learn from the perpetual changes occurring in natural and human-populated environments. We present methods that introduce the notion of dynamics into spatial environment models, resulting in representations which provide service robots with the ability to predict future states of changing environments. Several long-term experiments indicate that these methods gradually improve the efficiency of robots' autonomous operation over time. More importantly, the experiments indicate that chronorobotic concepts improve robots' ability to seamlessly merge into human-populated environments, which is important for their integration into and acceptance by human societies.
DARPA Subterranean Challenge: Multi-robotic exploration of underground environments
- Authors: Ing. Tomáš Rouček, Ph.D., Mgr. Martin Pecka, Ph.D., Čížek, P., Petříček, T., Ing. Jan Bayer, Šalanský, V., Ing. Daniel Heřt, Ing. Matěj Petrlík, Ph.D., Ing. Tomáš Báča, Ph.D., Spurný, V., Pomerleau, F., Kubelka, V., prof. Ing. Jan Faigl, Ph.D., doc. Ing. Karel Zimmermann, Ph.D., doc. Ing. Martin Saska, Dr. rer. nat., prof. Ing. Tomáš Svoboda, Ph.D., doc. Ing. Tomáš Krajník, Ph.D.
- Publication: 6th International Workshop on Modelling and Simulation for Autonomous Systems. Wien: Springer, 2020. p. 274-290. ISSN 1611-3349. ISBN 9783030438890.
- Year: 2020
- DOI: 10.1007/978-3-030-43890-6_22
- Link: https://doi.org/10.1007/978-3-030-43890-6_22
- Departments: Centrum umělé inteligence, Vidění pro roboty a autonomní systémy, Multirobotické systémy
Abstract:
The Subterranean Challenge (SubT) is a contest organised by the Defense Advanced Research Projects Agency (DARPA). The contest reflects the requirement of increasing the safety and efficiency of underground search-and-rescue missions. In the SubT challenge, teams of mobile robots have to detect, localise and report the positions of specific objects in an underground environment. This paper describes the heterogeneous multi-robot exploration system of our CTU-CRAS team, which scored third place in the Tunnel Circuit round, surpassing the performance of all other non-DARPA-funded competitors. In addition to describing the platforms, algorithms and strategies used, we also discuss the lessons learned by participating in the contest.