All publications

Federated Reinforcement Learning for Collective Navigation of Robotic Swarms

  • DOI: 10.1109/TCDS.2023.3239815
  • Link: https://doi.org/10.1109/TCDS.2023.3239815
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Annotation:
    Recent advances in deep reinforcement learning (DRL) have contributed to robotics by enabling automatic controller design. Automatic controller design is crucial for swarm robotic systems, which require more complex controllers than a single-robot system to produce a desired collective behavior. Although DRL-based controller design has proven effective for swarm robotic systems, its reliance on a central training server is a critical problem in real-world environments where robot-server communication is unstable or limited. We propose a novel federated learning (FL)-based DRL training strategy, federated learning DDPG (FLDDPG), for use in swarm robotic applications. A comparison with baseline strategies under a limited communication bandwidth shows that FLDDPG achieves higher robustness and better generalization to a different environment and to real robots, while the baseline strategies suffer from the bandwidth limitation. This result suggests that the proposed method can benefit swarm robotic systems operating in environments with limited communication bandwidth, e.g., high-radiation, underwater, or subterranean environments.
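  • Illustrative code sketch (not from the paper):
    The strategy described above implies a federated-averaging step in which each robot trains a local DDPG actor/critic and only the network weights are periodically exchanged over the limited channel. The sketch below is a minimal, hypothetical illustration of that aggregation step using plain NumPy arrays; the layer names and swarm size are assumptions, not the authors' implementation.

        import numpy as np

        def federated_average(local_weights):
            """Average corresponding parameter tensors from all robots (FedAvg-style).

            local_weights: list of dicts mapping layer name -> np.ndarray.
            Returns a dict with the element-wise mean of each layer.
            """
            keys = local_weights[0].keys()
            return {k: np.mean([w[k] for w in local_weights], axis=0) for k in keys}

        # Hypothetical swarm of three robots, each with a tiny actor network.
        rng = np.random.default_rng(0)
        swarm = [{"actor/W": rng.normal(size=(4, 2)), "actor/b": rng.normal(size=2)}
                 for _ in range(3)]

        global_model = federated_average(swarm)

        # Each robot replaces its local weights with the aggregated model and
        # continues local DDPG updates until the next (bandwidth-limited) sync.
        for robot in swarm:
            robot.update({k: v.copy() for k, v in global_model.items()})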

Mechatronic Design for Multi Robots-Insect Swarms Interactions

  • Authors: Rekabi-Bana, F., Stefanec, M., Ing. Jiří Ulrich, Keyvan, E.E., Rouček, T., Broughton, G., Gundeger, B.Y., Sahin, O., Turgut, A.E., Sahin, E., doc. Ing. Tomáš Krajník, Ph.D., Schmickl, T., Arvin, F.
  • Publication: Proceedings of 2023 IEEE International Conference on Mechatronics. IEEE Xplore, 2023. ISBN 978-1-6654-6661-5.
  • Year: 2023
  • DOI: 10.1109/ICM54990.2023.10102026
  • Link: https://doi.org/10.1109/ICM54990.2023.10102026
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Annotation:
    This paper presents the concept of a robotic system collaborating with a swarm of social insects inside their hive. The robot consists of a micro-manipulator, a macro-manipulator and a tracking system. The micro-manipulator uses bio-mimetic agents to interact with an individual specimen. The macro-manipulator positions and keeps the micro-manipulator's base near that individual as it moves through the hive. The individual is tracked by a fiducial marker-based visual detection and localisation system, which also provides the positions of the bio-mimetic agents. The base of the system was experimentally verified in a honeybee observation hive, where it flawlessly tracked the honeybee queen for several hours, gathering sufficient data to extract the behaviours of honeybee workers in the queen's vicinity. These data were then used in simulation to verify whether the micro-manipulator's bio-mimetic agents could mimic some of the honeybee workers' behaviours.
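  • Illustrative code sketch (not from the paper):
    As a hedged illustration of the "keep the base near the tracked individual" behaviour described above, the snippet below implements a simple proportional follower with a clamped step size; the gain, step limit and coordinates are hypothetical and not taken from the paper's mechatronic design.

        import numpy as np

        def follow_target(base_xy, target_xy, gain=0.8, max_step=0.01):
            """Move the manipulator base toward the tracked individual.

            base_xy, target_xy: 2D positions in the hive plane (metres).
            The commanded step is clamped to max_step per control cycle.
            """
            error = np.asarray(target_xy, dtype=float) - np.asarray(base_xy, dtype=float)
            step = gain * error
            norm = np.linalg.norm(step)
            if norm > max_step:
                step *= max_step / norm
            return np.asarray(base_xy, dtype=float) + step

        # Example: queen position reported by the marker-based tracker.
        base = np.array([0.10, 0.20])
        queen = np.array([0.14, 0.26])
        for _ in range(5):
            base = follow_target(base, queen)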

Real Time Fiducial Marker Localisation System with Full 6 DOF Pose Estimation

  • DOI: 10.1145/3594264.3594266
  • Link: https://doi.org/10.1145/3594264.3594266
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Annotation:
    The ability to reliably determine its own position, as well as the position of surrounding objects, is crucial for any autonomous robot. While this can be achieved with a certain degree of reliability, it is often practical to augment the environment with artificial markers that make these tasks easier. This applies especially to the evaluation of robotic experiments, which often require exact ground truth data containing the positions of the robots. This paper proposes a new method for estimating the position and orientation of circular fiducial markers in 3D space. Simulated and real experiments show that our method achieved three times lower localisation error than the method it was derived from. The experiments also indicate that our method outperforms state-of-the-art systems in terms of orientation estimation precision while maintaining similar or better accuracy in position estimation. Moreover, our method is computationally efficient, allowing it to detect and localise several markers in a fraction of the time required by state-of-the-art fiducial marker systems. Furthermore, the presented method requires only an off-the-shelf camera and printed tags, can be set up quickly and works in natural light conditions outdoors. These properties make it a viable alternative to expensive high-end localisation systems.
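  • Illustrative code sketch (not from the paper):
    The paper's method is not reproduced here; as a hedged illustration of the underlying principle (a circular marker of known diameter seen by a calibrated pinhole camera constrains its 3D position), the sketch below estimates a marker's position from the fitted ellipse centre and its apparent major-axis length. The calibration values and detection numbers are assumptions.

        import numpy as np

        def circular_marker_position(center_px, major_axis_px, marker_diameter_m,
                                     fx, fy, cx, cy):
            """Approximate 3D position of a circular marker (pinhole camera model).

            center_px: (u, v) ellipse centre in pixels.
            major_axis_px: apparent major-axis length in pixels; for a circle,
                this corresponds to the projected diameter to first order.
            Returns (x, y, z) in the camera frame, in metres.
            """
            u, v = center_px
            z = fx * marker_diameter_m / major_axis_px   # depth from apparent size
            x = (u - cx) * z / fx                        # back-project the centre
            y = (v - cy) * z / fy
            return np.array([x, y, z])

        # Hypothetical calibration and detection values.
        pos = circular_marker_position(center_px=(700.0, 420.0), major_axis_px=58.0,
                                       marker_diameter_m=0.05,
                                       fx=900.0, fy=900.0, cx=640.0, cy=400.0)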

A Vision-based System for Social Insect Tracking

  • Authors: Žampachů, K., Ing. Jiří Ulrich, Rouček, T., Stefanec, M., Dvořáček, D., Fedotoff, L., Hofstadler, D.N., Rekabi-Bana, F., Broughton, G., Arvin, F., Schmickl, T., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: 2022 2nd International Conference on Robotics, Automation and Artificial Intelligence. IEEE Xplore, 2022. p. 277-283. ISBN 978-1-6654-5944-0.
  • Year: 2022
  • DOI: 10.1109/RAAI56146.2022.10092977
  • Link: https://doi.org/10.1109/RAAI56146.2022.10092977
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Annotation:
    Social insects, especially honeybees, play an essential role in nature, and their recent decline threatens the stability of many ecosystems. The behaviour of social insect colonies is typically governed by a central individual, e.g., the honeybee queen. The RoboRoyale project aims to use robots to interact with the queen to affect her behaviour and the entire colony's activity. This paper presents a necessary component of such a robotic system: a method capable of real-time detection, localisation, and tracking of the honeybee queen inside a large colony. To overcome problems with occlusions and computational complexity, we propose to combine two vision-based methods for fiducial marker localisation and tracking. The experiments performed on data captured inside the beehives demonstrate that the resulting algorithm outperforms its predecessors in terms of detection precision, recall, and localisation accuracy. The achieved performance allowed us to integrate the method into a larger system capable of physically tracking a honeybee queen inside its colony. The ability to observe the queen in fine detail for prolonged periods of time has already resulted in unique observations of queen-worker interactions. This knowledge will be crucial in designing a system capable of interacting with the honeybee queen and affecting her activity.
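  • Illustrative code sketch (not from the paper):
    The annotation describes combining a full-frame detector with a faster tracker to cope with occlusions and computational cost. The sketch below shows one generic way to combine such components, re-running the detector only when the tracker's confidence drops; the function names and thresholds are hypothetical.

        import numpy as np

        def track_queen(frames, detect, track, confidence_threshold=0.5):
            """Combine a slow full-frame detector with a fast local tracker.

            detect(frame) -> (position, confidence) over the whole image.
            track(frame, last_position) -> (position, confidence) near last_position.
            The detector is re-run only when the tracker loses confidence,
            keeping per-frame cost low and recovering from occlusions.
            """
            position, confidence = detect(frames[0])
            trajectory = [position]
            for frame in frames[1:]:
                position, confidence = track(frame, position)
                if confidence < confidence_threshold:
                    position, confidence = detect(frame)  # fall back to detection
                trajectory.append(position)
            return trajectory

        # Dummy stand-ins so the sketch runs end to end.
        frames = [np.zeros((8, 8)) for _ in range(10)]
        detect = lambda f: (np.array([4.0, 4.0]), 0.9)
        track = lambda f, p: (p + np.array([0.1, 0.0]), 0.8)
        trajectory = track_queen(frames, detect, track)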

Toward Benchmarking of Long-Term Spatio-Temporal Maps of Pedestrian Flows for Human-Aware Navigation

  • DOI: 10.3389/frobt.2022.890013
  • Link: https://doi.org/10.3389/frobt.2022.890013
  • Department: Artificial Intelligence Center
  • Annotation:
    Despite the advances in mobile robotics, the introduction of autonomous robots in human-populated environments is rather slow. One of the fundamental reasons is the acceptance of robots by people directly affected by a robot's presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people's natural motion and causing irritation. Research has exploited various techniques to build spatio-temporal representations of people's presence and flows and has compared their applicability to planning optimal paths in the future. Many comparisons of dynamic map-building techniques show how one method performs against another on a particular dataset, but without consistent datasets and high-quality comparison metrics, it is difficult to assess how these methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods that model spatial and temporal phenomena independently outperformed other modeling approaches. Our results provide a baseline for future work comparing the wide range of methods employed for long-term navigation and give researchers an understanding of how these methods compare in various scenarios.
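  • Illustrative code sketch (not from the paper):
    One interpretable criterion in the spirit described above is to replay a planned robot trajectory against recorded pedestrian data and count how often a person comes within a social-distance threshold. The sketch below is a minimal, assumed formulation of such a criterion; it is not the article's exact metric.

        import numpy as np

        def count_encounters(robot_traj, pedestrian_traj, social_distance=1.5):
            """Count time steps at which any pedestrian is closer than the threshold.

            robot_traj: (T, 2) planned robot positions, one per time step.
            pedestrian_traj: (T, N, 2) recorded pedestrian positions for the same steps.
            """
            diffs = pedestrian_traj - robot_traj[:, None, :]   # (T, N, 2)
            dists = np.linalg.norm(diffs, axis=-1)             # (T, N)
            return int(np.sum(dists.min(axis=1) < social_distance))

        # Hypothetical 100-step run with five pedestrians.
        rng = np.random.default_rng(1)
        robot = np.cumsum(rng.normal(scale=0.1, size=(100, 2)), axis=0)
        people = np.cumsum(rng.normal(scale=0.1, size=(100, 5, 2)), axis=0)
        encounters = count_encounters(robot, people)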

Towards Fast Fiducial Marker with full 6 DOF Pose Estimation

  • DOI: 10.1145/3477314.3507043
  • Link: https://doi.org/10.1145/3477314.3507043
  • Department: Department of Computer Science, Artificial Intelligence Center
  • Annotation:
    This paper proposes a new method for full 6-degrees-of-freedom pose estimation of a circular fiducial marker. This circular black-and-white planar marker provides unique and versatile identification of individual markers while maintaining real-time detection. Such a marker, and the vision localisation system based on it, is suitable for both external localisation and self-localisation. Together with an off-the-shelf camera, the marker aims to provide sufficient pose estimation accuracy to substitute for current high-end localisation systems. To assess the performance of the proposed marker system, we evaluate its capabilities against current state-of-the-art methods in terms of their ability to estimate 2D and 3D positions. For this purpose, a real-world dataset inspired by typical applications in mobile and swarm robotics was collected, as performance under real conditions provides better insight into the method's potential than an artificially simulated environment. The experiments performed show that the method presented here achieved three times the accuracy of the marker it was derived from.
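  • Illustrative code sketch (not from the paper):
    The evaluation described above compares estimated 2D and 3D positions against a reference system. A minimal sketch of such an accuracy comparison is shown below, assuming the estimates and ground-truth poses are already aligned in a common frame; the data here are synthetic.

        import numpy as np

        def position_error_stats(estimates, ground_truth):
            """Per-axis and Euclidean error statistics for 3D position estimates.

            estimates, ground_truth: (N, 3) arrays of marker positions in metres.
            """
            err = estimates - ground_truth
            euclid = np.linalg.norm(err, axis=1)
            return {"mean_abs_xyz": np.mean(np.abs(err), axis=0),
                    "rmse_3d": float(np.sqrt(np.mean(euclid ** 2)))}

        # Synthetic example: true positions plus small estimation noise.
        rng = np.random.default_rng(2)
        truth = rng.uniform(-1.0, 1.0, size=(50, 3))
        estimates = truth + rng.normal(scale=0.005, size=truth.shape)
        stats = position_error_stats(estimates, truth)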

Bio-inspired Artificial Pheromone System for Swarm Robotics Applications

  • DOI: 10.1177/1059712320918936
  • Link: https://doi.org/10.1177/1059712320918936
  • Department: Artificial Intelligence Center
  • Annotation:
    Pheromones are chemical substances released into the environment by an individual animal that elicit stereotyped behaviours found widely across the animal kingdom. Inspired by the effective use of pheromones in social insects, pheromonal communication has been adopted in the swarm robotics domain using diverse approaches such as alcohol, RFID tags and light. COS phi is one such light-based artificial pheromone system, able to emulate realistic pheromone and environment properties. This article improves on the state of the art by proposing a novel artificial pheromone system that simulates pheromones and their environmental effects, adopting a model of the spatio-temporal development of pheromone derived from fluid flow in nature. Using the proposed system, we investigated the collective behaviour of a robot swarm in a bio-inspired aggregation scenario, where robots aggregated on a circular pheromone cue under different environmental factors, namely diffusion and pheromone shift. The results demonstrated the feasibility of the proposed pheromone system for use in swarm robotic applications.
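  • Illustrative code sketch (not from the paper):
    The system described above models the spatio-temporal development of pheromone, including diffusion and a fluid-like shift. The grid update below is a generic, assumed sketch of such dynamics (diffusion to the four neighbours, evaporation, and a whole-field shift); the rates and grid size are illustrative, not the article's model.

        import numpy as np

        def pheromone_step(grid, diffusion=0.1, evaporation=0.02, shift=(0, 0)):
            """One update of a grid-based artificial pheromone field.

            diffusion: fraction of each cell's pheromone spread to its 4 neighbours.
            evaporation: fraction lost per step.
            shift: whole-field translation in cells, imitating a fluid-like drift.
            """
            neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                          np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4.0
            grid = (1 - diffusion) * grid + diffusion * neighbours
            grid *= (1 - evaporation)
            return np.roll(grid, shift, axis=(0, 1))

        # A robot deposits pheromone at the centre; the field then spreads and fades.
        field = np.zeros((64, 64))
        field[32, 32] = 1.0
        for _ in range(20):
            field = pheromone_step(field, shift=(0, 1))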

CHRONOROBOTICS: Representing the Structure of Time for Service Robots

  • DOI: 10.1145/3440084.3441195
  • Link: https://doi.org/10.1145/3440084.3441195
  • Department: Artificial Intelligence Center
  • Annotation:
    Chronorobotics is the investigation of scientific methods allowing robots to adapt to and learn from the perpetual changes occurring in natural and human-populated environments. We present methods that introduce the notion of dynamics into spatial environment models, resulting in representations which provide service robots with the ability to predict future states of changing environments. Several long-term experiments indicate that these methods gradually improve the efficiency of robots' autonomous operation over time. More importantly, the experiments indicate that chronorobotic concepts improve robots' ability to seamlessly merge into human-populated environments, which is important for their integration and acceptance in human societies.
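  • Illustrative code sketch (not from the paper):
    One simple way to introduce the notion of dynamics into an environment model, in the spirit described above, is to fit periodic components to a cell's observation history and use them to predict its future state. The single-harmonic least-squares sketch below is an assumed simplification, not the specific methods presented in the paper.

        import numpy as np

        def fit_periodic_model(timestamps, states, period=24 * 3600):
            """Fit p(t) ~ mean + a*cos(wt) + b*sin(wt) to binary observations.

            timestamps: observation times in seconds; states: 0/1 observations
            (e.g. "was this area occupied by people at time t?").
            """
            w = 2 * np.pi / period
            X = np.column_stack([np.ones_like(timestamps),
                                 np.cos(w * timestamps), np.sin(w * timestamps)])
            coeffs, *_ = np.linalg.lstsq(X, states, rcond=None)
            return coeffs, w

        def predict(coeffs, w, t):
            """Predicted probability of the state at time t, clipped to [0, 1]."""
            p = coeffs[0] + coeffs[1] * np.cos(w * t) + coeffs[2] * np.sin(w * t)
            return float(np.clip(p, 0.0, 1.0))

        # Synthetic week of hourly observations: the area is busy in the afternoon.
        t = np.arange(0, 7 * 24 * 3600, 3600, dtype=float)
        obs = (((t / 3600) % 24 > 10) & ((t / 3600) % 24 < 18)).astype(float)
        coeffs, w = fit_periodic_model(t, obs)
        tomorrow_afternoon = predict(coeffs, w, t[-1] + 14 * 3600)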

Natural Criteria for Comparison of Pedestrian Flow Forecasting Models

  • Authors: Vintr, T., Yan, Z., Eyisoy, K., Kubiš, F., Ing. Jan Blaha, Ing. Jiří Ulrich, Swaminathan, C., Molina, S., Kucner, T.P., Magnusson, M., Cielniak, G., prof. Ing. Jan Faigl, Ph.D., Duckett, T., Lilienthal, A.J., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE Robotics and Automation Society, 2020. p. 11197-11204. ISSN 2153-0866. ISBN 978-1-7281-6212-6.
  • Year: 2020
  • DOI: 10.1109/IROS45743.2020.9341672
  • Link: https://doi.org/10.1109/IROS45743.2020.9341672
  • Department: Artificial Intelligence Center
  • Annotation:
    Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking pedestrian flow models based on the safety they afford to robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then count the number of times the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the robot's operating time, as might reasonably be expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the operating time, because the planner avoids congested areas and times.
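  • Illustrative code sketch (not from the paper):
    The sublinear growth reported for time-sensitive models comes from planning around congested places and times. As a hedged illustration of that scheduling idea, the sketch below picks the departure time with the lowest pedestrian density predicted along a route; the toy density model and route are assumptions, not the benchmarked models.

        import numpy as np

        def least_busy_departure(candidate_times, route_cells, predicted_density):
            """Choose a departure time minimising expected pedestrian encounters.

            predicted_density(cell, t) -> expected number of people in that cell
            at time t, as given by a time-sensitive pedestrian flow model.
            """
            costs = [sum(predicted_density(c, t) for c in route_cells)
                     for t in candidate_times]
            best = int(np.argmin(costs))
            return candidate_times[best], costs[best]

        # Toy model: a corridor that is congested around noon.
        density = lambda cell, t: 2.0 * np.exp(-((t - 12.0) ** 2) / 2.0)
        times = np.arange(8, 19)   # candidate departure hours
        departure, cost = least_busy_departure(times, range(5), density)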

Adaptive Image Processing Methods for Outdoor Autonomous Vehicles

  • Authors: Halodová, L., Dvořáková, E., Majer, F., Ing. Jiří Ulrich, Vintr, T., Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Modelling and Simulation for Autonomous Systems. Basel: Springer, 2019. p. 456-476. LNCS. vol. 11472. ISSN 0302-9743. ISBN 978-3-030-14983-3.
  • Year: 2019
  • DOI: 10.1007/978-3-030-14984-0_34
  • Link: https://doi.org/10.1007/978-3-030-14984-0_34
  • Department: Artificial Intelligence Center
  • Annotation:
    This paper concerns adaptive image processing for visual teach-and-repeat navigation systems of autonomous vehicles operating outdoors. The robustness and accuracy of these systems rely on their ability to extract relevant information from the on-board camera images, which is then used for autonomous navigation and map building. In this paper, we present methods that allow an image-based navigation system to adapt to the varying appearance of outdoor environments caused by dynamic illumination conditions and naturally occurring environment changes. In the performed experiments, we demonstrate that adaptive and learning methods for camera parameter control, image feature extraction and environment map refinement allow autonomous vehicles to operate in a real, changing world for extended periods of time.
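  • Illustrative code sketch (not from the paper):
    One of the adaptive mechanisms mentioned above is camera parameter control. The snippet below is a minimal, assumed example of exposure adaptation driven by mean image brightness; the target value, gain and exposure limits are hypothetical and not taken from the paper.

        import numpy as np

        def adapt_exposure(image, exposure, target_brightness=0.5, gain=0.5,
                           limits=(1e-4, 1e-1)):
            """Adjust exposure so the mean image brightness approaches a target.

            image: grayscale image with values in [0, 1].
            exposure: current exposure time in seconds.
            Returns the new exposure, clamped to the camera's supported range.
            """
            brightness = float(np.mean(image))
            # Multiplicative update: dark frames raise exposure, bright ones lower it.
            new_exposure = exposure * (1.0 + gain * (target_brightness - brightness))
            return float(np.clip(new_exposure, *limits))

        # Example: an under-exposed frame drives the exposure up.
        dark_frame = np.full((480, 640), 0.15)
        new_exposure = adapt_exposure(dark_frame, exposure=0.005)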

Time-varying Pedestrian Flow Models for Service Robots

  • Authors: Vintr, T., Molina, S., Senanayake, R., Broughton, G., Yan, Z., Ing. Jiří Ulrich, Kucner, T.P., Swaminathan, C.S., Majer, F., Stachová, M., Lilienthal, A.J., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Proceedings of European Conference on Mobile Robots. Prague: Czech Technical University, 2019. ISBN 978-1-7281-3605-9.
  • Year: 2019
  • DOI: 10.1109/ECMR.2019.8870909
  • Link: https://doi.org/10.1109/ECMR.2019.8870909
  • Department: Artificial Intelligence Center
  • Annotation:
    We present a human-centric spatio-temporal model for service robots operating in densely populated environments over long periods of time. The method integrates observations of pedestrians made by a mobile robot at different locations and times into a memory-efficient model that represents the spatial layout of natural pedestrian flows and how they change over time. To represent temporal variations of the observed flows, our method does not model time in a linear fashion, but as several dimensions wrapped into themselves. This representation of time can capture long-term (i.e. days to weeks) periodic patterns of people's routines and habits. Knowledge of these patterns allows long-term predictions of future human presence and walking directions, which can support mobile robot navigation in human-populated environments. Using datasets gathered over several weeks, we compare the model to state-of-the-art methods for pedestrian flow modelling.
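  • Illustrative code sketch (not from the paper):
    The model represents time by several dimensions wrapped into themselves. A common way to realise such wrapping, sketched below as an assumed simplification of the paper's representation, is to project time onto circles (daily and weekly periods) so that, for example, 23:55 and 00:05 become close in the feature space used by the spatial model.

        import numpy as np

        def wrapped_time_features(timestamp_s):
            """Project a timestamp onto circular (wrapped) temporal dimensions.

            Returns (cos, sin) pairs for the daily and weekly periods, so times
            that are close within a cycle are close in feature space.
            """
            day, week = 24 * 3600, 7 * 24 * 3600
            phases = [2 * np.pi * (timestamp_s % p) / p for p in (day, week)]
            return np.concatenate([[np.cos(ph), np.sin(ph)] for ph in phases])

        # Sunday 23:30 and Monday 00:30 map to nearby feature vectors.
        a = wrapped_time_features(6 * 24 * 3600 + 23.5 * 3600)
        b = wrapped_time_features(7 * 24 * 3600 + 0.5 * 3600)
        distance = np.linalg.norm(a - b)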
