Ing. Jan Blaha
All publications
Autonomous Tracking of Honey Bee Behaviors over Long-term Periods with Cooperating Robots
- Authors: Ing. Jiří Ulrich, Stefanec, M., Rekabi-Bana, F., Fedotoff, L.A., Ing. Tomáš Rouček, Ph.D., Gündeğer, B.Y., Saadat, M., Ing. Jan Blaha, Ing. Jiří Janota, Hofstadler, N., Žampachů, K., Keyvan, E.E., Erdem, B., Sahin, E., Alemdar, H., Turgut, A.E., Arvin, F., Schmickl, T., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Science Robotics. 2024, 9(95). ISSN 2470-9476.
- Year: 2024
- DOI: 10.1126/scirobotics.adn6848
- Link: https://doi.org/10.1126/scirobotics.adn6848
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Digital and mechatronic methods, paired with artificial intelligence and machine learning, are game-changing technologies in behavioral science. The central element of the most important pollinator species (honeybees) is the colony’s queen. The behavioral strategies of these ecologically important organisms are under-researched, due to the complexity of honeybees’ self-regulation and the difficulties of studying queens in their natural colony context. We created an autonomous robotic observation and behavioral analysis system aimed at 24/7 observation of the queen and her interactions with worker bees and comb cells, generating unique behavioral datasets of unprecedented length and quality. Significant key performance indicators of the queen and her social embedding in the colony were gathered by this tailored but also versatile robotic system. Data collected over 24-hour and 30-day periods demonstrate our system’s capability to extract key performance indicators on different system levels: Microscopic, mesoscopic, and macroscopic data are collected in parallel. Additionally, interactions between various agents are also observed and quantified. Long-term continuous observations yield high amounts of high-quality data when performed by an autonomous robot, going significantly beyond feasibly obtainable results of human observation methods or stationary camera systems. This allows a deep understanding of the innermost mechanisms of honeybees’ swarm-intelligent self-regulation as well as studying other social insect colonies, biocoenoses and ecosystems in novel ways. Social insects are keystone species in all ecosystems, thus understanding them better will be valuable to monitor, interpret, protect and even to restore our fragile ecosystems globally.
Effective Searching for the Honeybee Queen in a Living Colony
- Authors: Ing. Jan Blaha, Mikula, J., Vintr, T., Ing. Jiří Janota, Ing. Jiří Ulrich, Ing. Tomáš Rouček, Ph.D., Rekabi-Bana, F., Fedotoff, L.A., Stefanec, M., Schmickl, T., Arvin, F., Kulich, M., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: 2024 IEEE 20th International Conference on Automation Science and Engineering (CASE). IEEE Xplore, 2024. p. 3675-3682. ISSN 2161-8089. ISBN 979-8-3503-5851-3.
- Year: 2024
- DOI: 10.1109/CASE59546.2024.10711366
- Link: https://doi.org/10.1109/CASE59546.2024.10711366
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Despite the importance of honeybees as pollinators for the entire ecosystem and their recent decline threatening agricultural production, the dynamics of the living colony are not well understood. In our EU H2020 RoboRoyale project, we aim to support the pollination activity of the honeybees through robots interacting with the core element of the honeybee colony, the honeybee queen. In order to achieve that, we need to understand how the honeybee queen behaves and interacts with the surrounding worker bees. To gather the necessary data, we observe the queen with a moving camera, and occasionally, we instruct the system to perform selective observations elsewhere. In this paper, we deal with the problem of searching for the honeybee queen inside a living colony. We demonstrate that combining spatio-temporal models of queen presence with efficient search methods significantly decreases the time required to find her. This will minimize the chance of missing interesting data on the infrequent behaviors or queen-worker interactions, leading to a better understanding of the queen's behavior over time. Moreover, a faster search for the queen allows the robot to leave her more frequently and gather more data in other areas of the honeybee colony.
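The core idea above — ranking comb regions by a model of queen presence and checking the most promising regions first — can be sketched in a few lines of Python. The region names, presence probabilities and travel costs below are invented for illustration; the paper's spatio-temporal models and search planners are considerably richer.

```python
# Illustrative sketch only: greedy queen search guided by a presence model.
# Region names, probabilities and costs are hypothetical, not from the paper.

def expected_search_order(presence_prob, travel_cost):
    """Visit regions in decreasing order of presence probability per unit cost."""
    return sorted(presence_prob,
                  key=lambda r: presence_prob[r] / travel_cost[r],
                  reverse=True)

def expected_time_to_find(order, presence_prob, travel_cost):
    """Expected time until the queen is found, assuming independent region checks."""
    t, expected, p_not_found = 0.0, 0.0, 1.0
    for r in order:
        t += travel_cost[r]                      # time to reach and scan the region
        expected += p_not_found * presence_prob[r] * t
        p_not_found *= 1.0 - presence_prob[r]
    return expected
```

Ordering the regions by the model's predictions yields a lower expected search time than a fixed sweep, which is the kind of effect the paper measures.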
Predictive Data Acquisition for Lifelong Visual Teach, Repeat and Learn
- Authors: Ing. Tomáš Rouček, Ph.D., Ing. Zdeněk Rozsypálek, Ing. Jan Blaha, Ing. Jiří Ulrich, doc. Ing. Tomáš Krajník, Ph.D.
- Published in: IEEE Robotics and Automation Letters. 2024, 9(11), 10042-10049. ISSN 2377-3766.
- Year: 2024
- DOI: 10.1109/LRA.2024.3421193
- Link: https://doi.org/10.1109/LRA.2024.3421193
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Recent advances in machine learning allow robots to operate in environments that are not tailored for them, including changing and human-populated ones. The efficiency of these methods is largely determined by the quality of their training data. An up-to-date and well-balanced training dataset is paramount for achieving robust robot operation. To achieve long-term operation, the robot has to deal with perpetual environmental changes, forcing it to keep its models up-to-date. We present an exploration method allowing a mobile robot to gather high-quality data to update its models both while performing its duties and when idle, maximizing its effectiveness. The robot evaluates the quality of the data gathered in the past and, based on that, creates preferences which influence how often these locations are visited. This exploration method was integrated with a self-supervised visual teach-and-repeat pipeline. We show that the precision and robustness of vision-based navigation improve when using machine-learned models trained by our exploration method. Our research resulted in a robotic navigation system that can not only annotate its training data but also ensure that its training dataset is balanced and up-to-date. The codes, datasets, trained models and examples for our experiments are available online for better reproducibility.
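A minimal sketch of the preference mechanism described above, with hypothetical location names and scoring weights (the actual pipeline derives data quality from navigation performance, which is not modelled here):

```python
def visit_preferences(quality, hours_since_visit, w_quality=1.0, w_age=0.1):
    """Normalised visit preferences: prefer locations whose past training data
    is poor (low quality score in [0, 1]) or stale (not visited for long).
    All weights and inputs here are illustrative assumptions."""
    raw = {loc: w_quality * (1.0 - quality[loc]) + w_age * hours_since_visit[loc]
           for loc in quality}
    total = sum(raw.values())
    return {loc: r / total for loc, r in raw.items()}
```

Locations with weaker or older training data receive a larger share of the robot's idle exploration time.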
Toward Perpetual Occlusion-Aware Observation of Comb States in Living Honeybee Colonies
- Authors: Ing. Jan Blaha, Vintr, T., Mikula, J., Ing. Jiří Janota, Ing. Tomáš Rouček, Ph.D., Ing. Jiří Ulrich, Rekabi-Bana, F., Fedotoff, L.A., Stefanec, M., Schmickl, T., Arvin, F., Kulich, M., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024). Piscataway: IEEE, 2024. p. 5948-5955. ISSN 2153-0866. ISBN 979-8-3503-7770-5.
- Year: 2024
- DOI: 10.1109/IROS58592.2024.10801380
- Link: https://doi.org/10.1109/IROS58592.2024.10801380
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Honeybees are one of the most important pollinators in the ecosystem. Unfortunately, the dynamics of living honeybee colonies are not well understood due to their complexity and difficulty of observation. In our project 'RoboRoyale', we build and operate a robot to be a part of a bio-hybrid system, which currently observes the honeybee queen in the colony and physically tracks her with a camera. Apart from tracking and observing the queen, the system needs to monitor the state of the honeybee comb, which is occluded by worker bees most of the time. This introduces a necessary tradeoff between tracking the queen and visiting the rest of the hive to create a daily map. We aim to collect the necessary data more effectively. We evaluate several mapping methods that consider the previous observations and forecasted densities of bees occluding the view. To predict the presence of bees, we use previously established maps of dynamics developed for autonomy in human-populated environments. Using data from the last observational season, we show significant improvement of the informed comb mapping methods over our current system. This will allow us to use our resources more effectively in the upcoming season.
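The "informed" planning idea can be illustrated as follows; patch names, map uncertainties and forecast occupancies are made up, and the paper's maps of dynamics are far more expressive than a single number per patch:

```python
def plan_comb_observations(uncertainty, bee_occupancy_forecast, budget):
    """Rank comb patches by expected visible information:
    (how outdated the map is there) * (chance the patch is not hidden by bees).
    Returns the `budget` most promising patches to visit next."""
    gain = {p: uncertainty[p] * (1.0 - bee_occupancy_forecast[p])
            for p in uncertainty}
    return sorted(gain, key=gain.get, reverse=True)[:budget]
```

A patch that is badly outdated but forecast to be covered by bees is deprioritised in favour of one that is likely to actually be visible.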
Towards Robotic Mapping of a Honeybee Comb
- Authors: Ing. Jiří Janota, Ing. Jan Blaha, Rekabi-Bana, F., Ing. Jiří Ulrich, Stefanec, M., Fedotoff, L., Arvin, F., Schmickl, T., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: 2024 International Conference on Manipulation, Automation and Robotics at Small Scales. Budapest: Institute of Electrical and Electronics Engineers Inc., 2024. p. 79+. ISBN 979-8-3503-7680-7.
- Year: 2024
- DOI: 10.1109/MARSS61851.2024.10612712
- Link: https://doi.org/10.1109/MARSS61851.2024.10612712
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Honeybees are irreplaceable pollinators with a direct impact on the global food supply. Researchers focus on understanding the dynamics of colonies to support their health and growth. In our project “RoboRoyale”, we aim to strengthen the colony through miniature robots interacting with the honeybee queen. To assess the colony's health and the effect of the interactions, it is crucial to monitor the whole honeybee comb and its development. In this work, we introduce key components of a system capable of autonomously evaluating the state of the comb without any disturbance to the living colony. We evaluate several methods for visual mapping of the comb by a moving camera and several algorithms for detecting visible cells between occluding bees. By combining image stitching techniques with open cell detection and their localization, we show that it is possible to capture how the comb evolves over time. Our results lay the foundations for real-time monitoring of a honeybee comb, which could prove essential in honeybee and environmental research.
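The stitching component can be reduced to its simplest form, translate-only alignment of overlapping strips. The intensities below are a toy 1-D stand-in for image rows; real comb mapping of course works on 2-D images with proper feature-based registration:

```python
def find_offset(left, right, min_overlap=3):
    """Best translate-only alignment of strip `right` against strip `left`:
    the shift minimising mean squared intensity difference over the overlap."""
    best, best_err = 0, float("inf")
    for off in range(len(left) - min_overlap + 1):
        n = min(len(left) - off, len(right))
        err = sum((left[off + i] - right[i]) ** 2 for i in range(n)) / n
        if err < best_err:
            best, best_err = off, err
    return best

def stitch(left, right):
    """Paste the two strips into one mosaic using the estimated offset."""
    return left[:find_offset(left, right)] + right
```

Repeating this over successive camera positions produces a growing mosaic, on which open cells can then be detected and localized.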
Real Time Fiducial Marker Localisation System with Full 6 DOF Pose Estimation
- Authors: Ing. Jiří Ulrich, Ing. Jan Blaha, Alsayed, A., Ing. Tomáš Rouček, Ph.D., Arvin, F., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: ACM SIGAPP Applied Computing Review. 2023, 23(1), 20-35. ISSN 1559-6915.
- Year: 2023
- DOI: 10.1145/3594264.3594266
- Link: https://doi.org/10.1145/3594264.3594266
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
The ability to reliably determine its own position, as well as the positions of surrounding objects, is crucial for any autonomous robot. While this can be achieved with a certain degree of reliability, it is often practical to augment the environment with artificial markers that make these tasks easier. This applies especially to the evaluation of robotic experiments, which often require exact ground truth data containing the positions of the robots. This paper proposes a new method for estimating the position and orientation of circular fiducial markers in 3D space. Simulated and real experiments show that our method achieved three times lower localisation error than the method it was derived from. The experiments also indicate that our method outperforms state-of-the-art systems in terms of orientation estimation precision while maintaining similar or better accuracy in position estimation. Moreover, our method is computationally efficient, allowing it to detect and localise several markers in a fraction of the time required by state-of-the-art fiducial marker systems. Furthermore, the presented method requires only an off-the-shelf camera and printed tags, can be quickly set up, and works in natural light conditions outdoors. These properties make it a viable alternative to expensive high-end localisation systems.
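For intuition, the depth part of circular-marker localisation follows from the pinhole model: a circle of known physical diameter D projecting to d pixels under focal length f lies at depth z = f·D/d. The sketch below covers only this fronto-parallel special case with made-up camera parameters; full 6 DOF estimation from the projected ellipse, as in the paper, is substantially more involved:

```python
def circular_marker_position(f_px, marker_diameter_m, u, v, cx, cy, d_px):
    """Position of a fronto-parallel circular marker under a pinhole camera:
    depth from apparent size, lateral offsets from the projected centre (u, v)
    relative to the principal point (cx, cy). A simplified illustration only."""
    z = f_px * marker_diameter_m / d_px   # z = f * D / d
    x = (u - cx) * z / f_px
    y = (v - cy) * z / f_px
    return x, y, z
```

With, say, a 500 px focal length and a 10 cm marker appearing 50 px wide at the image centre, the marker is one metre straight ahead.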
Bootstrapped Learning for Car Detection in Planar Lidars
- Authors: Broughton, G., Ing. Jiří Janota, Ing. Jan Blaha, Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing. New York: ACM, 2022. p. 758-765. ISBN 978-1-4503-8713-2.
- Year: 2022
- DOI: 10.1145/3477314.3507312
- Link: https://doi.org/10.1145/3477314.3507312
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
We present a proof-of-concept method for using bootstrapped learning for car detection in lidar scans using neural networks. We transfer knowledge from a traditional hand-engineered clustering and geometry-based detection technique to deep-learning-based methods. The geometry-based method automatically annotates laser scans from a vehicle travelling around a static car park over a long period of time. We use these annotations to automatically train the deep-learning neural network, and we evaluate and compare this method against the original geometrical method in various weather conditions. Furthermore, by using temporal filters, we can find situations where the original method was struggling or giving intermittent detections, still automatically annotate these frames, and use them as part of the training process. Our evaluation indicates increased detection accuracy and robustness as sensing conditions deteriorate compared to the method that was used to train the neural network.
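The temporal-filtering trick — relabelling short runs of missed frames that are flanked by detections — can be sketched like this. This is a simplified, purely illustrative version operating on a per-frame boolean track:

```python
def fill_intermittent(detections, max_gap=2):
    """Mark short gaps (<= max_gap frames) between detections as detections,
    recovering frames where the hand-engineered detector flickered.
    Gaps touching either end of the track are left untouched."""
    out = list(detections)
    i = 0
    while i < len(out):
        if out[i]:
            i += 1
            continue
        j = i
        while j < len(out) and not out[j]:
            j += 1                                 # scan to the end of the gap
        if 0 < i and j < len(out) and j - i <= max_gap:
            for k in range(i, j):
                out[k] = True                      # flanked short gap: fill it
        i = j
    return out
```

The filled frames can then be fed back into training as additional positive examples, exactly where the teacher method struggled.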
Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation
- Authors: Ing. Zdeněk Rozsypálek, Broughton, G., Linder, P., Ing. Tomáš Rouček, Ph.D., Ing. Jan Blaha, Mentzl, L., Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Sensors. 2022, 22(8). ISSN 1424-8220.
- Year: 2022
- DOI: 10.3390/s22082975
- Link: https://doi.org/10.3390/s22082975
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
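At its core, the task is to find the horizontal shift that best aligns two dense image representations. A naive dense cross-correlation over toy 1-D descriptors illustrates the output the network is trained to produce (the paper computes the representations with a fully convolutional network, not shown here):

```python
def horizontal_shift(map_desc, live_desc):
    """Shift (in descriptor columns) that best aligns the live descriptor
    with the prerecorded one, by exhaustive mean cross-correlation.
    Toy illustration; descriptors are plain lists of numbers."""
    n = len(map_desc)
    best_shift, best_score = 0, float("-inf")
    for s in range(-n + 1, n):
        score, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                score += map_desc[j] * live_desc[i]
                count += 1
        if count and score / count > best_score:
            best_score, best_shift = score / count, s
    return best_shift
```

The estimated shift is what steers the robot back towards the taught path.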
Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions
- Authors: Broughton, G., Ing. Jiří Janota, Ing. Jan Blaha, Ing. Tomáš Rouček, Ph.D., Simon, M., Vintr, T., Yang, T., Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Sensors. 2022, 22(22), 1-22. ISSN 1424-8220.
- Year: 2022
- DOI: 10.3390/s22228855
- Link: https://doi.org/10.3390/s22228855
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data containing all the various situations the robots may potentially encounter during their routine operation. Thus, the workforce required for data collection and annotation is a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an independent car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it can collect data samples, which can be subsequently processed into training samples for the neural networks. As the tracking method is applied offline, it can exploit the detections made both before the currently processed scan and any subsequent future detections of the current scene, meaning the quality of annotations is in excess of those of the raw detections. Along with the acquisition of the labels, the weather simulator is able to alter the raw sensory data, which are then fed into the neural network together with the labels. We show how this pipeline, being run in an offline fashion, can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner. We show how such a framework produces an effective detector and how the weather simulator-in-the-loop is beneficial for the robustness of the detector. Thus, our automatic data annotation pipeline significantly reduces not only the data annotation but also the data collection effort.
This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, codes and models online.
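The simulator-in-the-loop augmentation step can be caricatured in a few lines. The function below is a crude stand-in for a real weather simulator (actual fog and rain models are physically based); it merely perturbs and drops lidar returns, deterministically for a given seed:

```python
import random

def simulate_fog(scan, drop_prob=0.3, noise_sigma=0.05, max_range=30.0, seed=0):
    """Toy weather augmentation for a list of lidar ranges (metres):
    some returns are lost (reported as max range), the rest get range noise.
    Parameters and the degradation model are illustrative assumptions."""
    rng = random.Random(seed)                     # deterministic for a fixed seed
    out = []
    for r in scan:
        if rng.random() < drop_prob:
            out.append(max_range)                 # absorbed / scattered return
        else:
            out.append(max(0.0, min(max_range, r + rng.gauss(0.0, noise_sigma))))
    return out
```

Because the labels are attached to the clean scan before augmentation, the network sees degraded inputs paired with correct annotations, which is the essence of the scheme.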
Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation
- Authors: Ing. Tomáš Rouček, Ph.D., Amjadi, A., Ing. Zdeněk Rozsypálek, Broughton, G., Ing. Jan Blaha, Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Sensors. 2022, 22(8). ISSN 1424-8220.
- Year: 2022
- DOI: 10.3390/s22082836
- Link: https://doi.org/10.3390/s22082836
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, deep learning of neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increases, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation efforts when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, codes and trained models online.
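The feature-matching half of the fusion can be summarised as robust voting over per-match displacements; the winning bin's mean then serves both for steering and as a self-supervised training target for the network. The sketch below uses hypothetical pixel coordinates and a fixed bin size:

```python
def displacement_from_matches(matches, bin_size=5):
    """Robust horizontal displacement from point-feature matches:
    each match (u_map, u_live) votes for a displacement bin; outlier
    matches land in sparsely populated bins and are ignored."""
    votes = {}
    for u_map, u_live in matches:
        d = u_live - u_map
        votes.setdefault(d // bin_size, []).append(d)
    best_bin = max(votes.values(), key=len)       # consensus bin wins
    return sum(best_bin) / len(best_bin)
```

Three consistent matches outvote a single spurious one, so the training target stays clean even with imperfect feature matching.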
Toward Benchmarking of Long-Term Spatio-Temporal Maps of Pedestrian Flows for Human-Aware Navigation
- Authors: Vintr, T., Ing. Jan Blaha, Ing. Martin Rektoris, Ing. Jiří Ulrich, Ing. Tomáš Rouček, Ph.D., Broughton, G., Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Frontiers in Robotics and AI. 2022, 9. ISSN 2296-9144.
- Year: 2022
- DOI: 10.3389/frobt.2022.890013
- Link: https://doi.org/10.3389/frobt.2022.890013
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Despite the advances in mobile robotics, the introduction of autonomous robots in human-populated environments is rather slow. One of the fundamental reasons is the acceptance of robots by people directly affected by a robot's presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people's natural motion and causing irritation. Research has exploited various techniques to build spatio-temporal representations of people's presence and flows and compared their applicability to plan optimal paths in the future. Comparisons of dynamic map-building techniques typically show how one method performs against another on a particular dataset, but without consistent datasets and high-quality comparison metrics, it is difficult to assess how these various methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods independently modeling spatial and temporal phenomena outperformed other modeling approaches. Our results provide a baseline for future work to compare a wide range of methods employed for long-term navigation and provide researchers with an understanding of how these various methods compare in various scenarios.
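One concrete instance of such a comparison: a time-sensitive model (here, simply one estimate per hour of day) versus a time-averaged one, scored by Brier error on binary presence data. The representations in the article are far richer; this only illustrates the kind of gap a benchmark like this exposes:

```python
def fit_models(times_h, presence):
    """Fit a time-averaged model (a single constant) and a simple
    time-sensitive model (mean presence per hour of day) to binary
    observations taken at the given times (in hours)."""
    mean = sum(presence) / len(presence)
    by_hour = {}
    for t, p in zip(times_h, presence):
        by_hour.setdefault(int(t) % 24, []).append(p)
    hourly = {h: sum(v) / len(v) for h, v in by_hour.items()}
    return mean, hourly

def brier(predictions, truth):
    """Mean squared error of probabilistic predictions against 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(predictions, truth)) / len(truth)
```

On data with a daily rhythm, the hourly model scores markedly better than the constant, mirroring the article's finding that temporal modeling pays off.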
Boosting the Performance of Object Detection CNNs with Context-Based Anomaly Detection
- Authors: Ing. Jan Blaha, Broughton, G., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 349. Springer Nature, 2021. p. 159-176. ISSN 1867-8211. ISBN 978-3-030-67536-3.
- Year: 2021
- DOI: 10.1007/978-3-030-67537-0_11
- Link: https://doi.org/10.1007/978-3-030-67537-0_11
- Department: Artificial Intelligence Center
Abstract:
In this paper, we employ anomaly detection methods to enhance the ability of object detectors by using the context of their detections. This has numerous potential applications, from boosting the performance of standard object detectors, to the preliminary validation of annotation quality, and even robotic exploration and object search. We build our method on autoencoder networks for detecting anomalies, where we do not try to filter incoming data based on anomaly score, as is usual, but instead focus on the individual features of the data representing an actual scene. We show that one can teach autoencoders about the contextual relationship of objects in images, i.e. the likelihood of co-detecting classes in the same scene. This can then be used to identify detections that do and do not fit with the rest of the current observations in the scene. We show that the use of this information yields better results than using traditional thresholding when deciding if weaker detections are actually classed as observed or not. The experiments performed not only show that our method significantly improves the performance of CNN object detectors, but that it can be used as an efficient tool to discover incorrectly-annotated images.
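The decision rule — accept a weak detection when it fits the scene context — can be illustrated with a plain co-occurrence table instead of the paper's autoencoder. All class names, probabilities and thresholds here are hypothetical:

```python
def context_score(candidate, confident_classes, cooc):
    """How plausible class `candidate` is given the confidently detected
    classes; cooc[a][b] is the probability of co-detecting b in a scene
    that contains a. A toy stand-in for the learned autoencoder."""
    if not confident_classes:
        return 0.0
    return sum(cooc[c].get(candidate, 0.0)
               for c in confident_classes) / len(confident_classes)

def accept_detection(candidate, conf, confident_classes, cooc,
                     t_high=0.8, t_low=0.4, t_context=0.5):
    """Accept if confident on its own, or moderately confident and
    consistent with the rest of the scene (instead of a single threshold)."""
    if conf >= t_high:
        return True
    return conf >= t_low and context_score(candidate, confident_classes, cooc) >= t_context
```

A borderline "car" next to a confidently detected "road" survives, while an equally borderline out-of-context class is suppressed.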
CHRONOROBOTICS: Representing the Structure of Time for Service Robots
- Authors: doc. Ing. Tomáš Krajník, Ph.D., Vintr, T., Broughton, G., Majer, F., Ing. Tomáš Rouček, Ph.D., Ing. Jiří Ulrich, Ing. Jan Blaha, Pěčonková, V., Ing. Martin Rektoris
- Published in: ISCSIC 2020: Proceedings of the 2020 4th International Symposium on Computer Science and Intelligent Control. New York: Association for Computing Machinery, 2020. ISBN 978-1-4503-8889-4.
- Year: 2020
- DOI: 10.1145/3440084.3441195
- Link: https://doi.org/10.1145/3440084.3441195
- Department: Department of Computer Science, Artificial Intelligence Center
Abstract:
Chronorobotics is the investigation of scientific methods allowing robots to adapt to and learn from the perpetual changes occurring in natural and human-populated environments. We present methods that can introduce the notion of dynamics into spatial environment models, resulting in representations which provide service robots with the ability to predict future states of changing environments. Several long-term experiments indicate that the aforementioned methods gradually improve the efficiency of robots' autonomous operations over time. More importantly, the experiments indicate that chronorobotic concepts improve robots' ability to seamlessly merge into human-populated environments, which is important for their integration and acceptance in human societies.
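The flavour of such representations can be shown with a FreMEn-style sketch: model the probability of a binary environment state as its mean plus the strongest periodic component among candidate periods. This is a deliberately stripped-down, single-component illustration, not the group's actual spectral models:

```python
import math

def fremen_fit(times, states, periods):
    """Fit mean + one dominant periodic component to binary observations.
    `times` and candidate `periods` share the same units (e.g. hours)."""
    mean = sum(states) / len(states)
    best = (0.0, 0.0, periods[0])                 # (amplitude, phase, period)
    for period in periods:
        re = sum((s - mean) * math.cos(2 * math.pi * t / period)
                 for t, s in zip(times, states))
        im = sum((s - mean) * math.sin(2 * math.pi * t / period)
                 for t, s in zip(times, states))
        amp = 2 * math.hypot(re, im) / len(states)
        if amp > best[0]:
            best = (amp, math.atan2(im, re), period)
    amp, phase, period = best

    def predict(t):
        p = mean + amp * math.cos(2 * math.pi * t / period - phase)
        return min(1.0, max(0.0, p))              # clamp to a valid probability
    return predict
```

Trained on two days of a simple day/night occupancy pattern, the model correctly predicts high presence in the morning and low presence in the evening.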
Natural Criteria for Comparison of Pedestrian Flow Forecasting Models
- Authors: Vintr, T., Yan, Z., Eyisoy, K., Kubiš, F., Ing. Jan Blaha, Ing. Jiří Ulrich, Swaminathan, C., Molina, S., Kucner, T.P., Magnusson, M., Cielniak, G., prof. Ing. Jan Faigl, Ph.D., Duckett, T., Lilienthal, A.J., doc. Ing. Tomáš Krajník, Ph.D.
- Published in: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE Robotics and Automation Society, 2020. p. 11197-11204. ISSN 2153-0866. ISBN 978-1-7281-6212-6.
- Year: 2020
- DOI: 10.1109/IROS45743.2020.9341672
- Link: https://doi.org/10.1109/IROS45743.2020.9341672
- Department: Artificial Intelligence Center
Abstract:
Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.
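The introduced criterion boils down to counting how often a planned robot trajectory violates a social distance around pedestrians. A minimal version over synchronised 2-D trajectories (the coordinates below are synthetic):

```python
def count_encounters(robot_traj, human_traj, social_distance=1.5):
    """Number of timesteps at which the robot comes closer to the human
    than the social distance threshold; trajectories are (x, y) samples
    taken at identical timestamps."""
    hits = 0
    for (rx, ry), (hx, hy) in zip(robot_traj, human_traj):
        if ((rx - hx) ** 2 + (ry - hy) ** 2) ** 0.5 < social_distance:
            hits += 1
    return hits
```

Unlike fidelity-based scores, this count directly reflects the expected safety of the deployed system, which is why it separates the flow models so clearly.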