People

doc. Ing. Tomáš Krajník, Ph.D.

All publications

A Minimally Invasive Approach Towards "Ecosystem Hacking" With Honeybees

  • Authors: Stefanec, M., Hofstadler, D.N., doc. Ing. Tomáš Krajník, Ph.D., Turgut, A.E., Alemdar, H., Lennox, B., Sahin, E., Arvin, F., Schmickl, T.
  • Publication: Frontiers in Robotics and AI. 2022, 9, 1-17. ISSN 2296-9144.
  • Year: 2022
  • DOI: 10.3389/frobt.2022.791921
  • Link: https://doi.org/10.3389/frobt.2022.791921
  • Department: Centrum umělé inteligence
  • Annotation:
    Honey bees live in colonies of thousands of individuals that not only need to collaborate with each other but also interact intensively with their ecosystem. A small group of robots operating in a honey bee colony and interacting with the queen bee, a central colony element, has the potential to change the collective behavior of the entire colony and thus also improve its interaction with the surrounding ecosystem. Such a system can be used to study and understand many elements of bee behavior within hives that have not been adequately researched. We discuss here the applicability of this technology for ecosystem protection: a novel paradigm of a minimally invasive form of conservation through "Ecosystem Hacking". We discuss the necessary requirements for such technology and show experimental data on the dynamics of the natural queen's court, initial designs of biomimetic robotic surrogates of court bees, and a multi-agent model of the queen bee court system. Our model is intended to serve as an AI-enhanceable coordination software for future robotic court bee surrogates and as a hardware controller for generating nature-like behavior patterns for such a robotic ensemble. It is the first step towards a team of robots working in a bio-compatible way to study honey bees and to increase their pollination performance, thus achieving a stabilizing effect at the ecosystem level.

Bootstrapped Learning for Car Detection in Planar Lidars

  • Authors: Broughton, G., Janota, J., Ing. Jan Blaha, Yan, Z., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing. New York: ACM, 2022. p. 758-765. ISBN 978-1-4503-8713-2.
  • Year: 2022
  • DOI: 10.1145/3477314.3507312
  • Link: https://doi.org/10.1145/3477314.3507312
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    We present a proof-of-concept method for using bootstrapped learning for car detection in lidar scans using neural networks. We transfer knowledge from a traditional hand-engineered clustering and geometry-based detection technique to deep-learning-based methods. The geometry-based method automatically annotates laser scans from a vehicle travelling around a static car park over a long period of time. We use these annotations to automatically train the deep-learning neural network and evaluate and compare this method against the original geometrical method in various weather conditions. Furthermore, by using temporal filters, we can find situations where the original method was struggling or giving intermittent detections, still automatically annotate these frames, and use them as part of the training process. Our evaluation indicates increased detection accuracy and robustness as sensing conditions deteriorate, compared to the method that was used to train the neural network.
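The bootstrapping loop described above can be reduced to a minimal, hedged sketch: a hand-coded geometric rule labels clusters in planar scans, and those labels train a learned classifier. The clustering rule, size thresholds and cluster features below are illustrative placeholders, not the detector or network used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cluster_scan(points, gap=0.5):
    """Split an angularly ordered planar scan (N x 2) into clusters at range gaps."""
    clusters, current = [], [points[0]]
    for p, q in zip(points[:-1], points[1:]):
        if np.linalg.norm(q - p) > gap:
            clusters.append(np.array(current))
            current = []
        current.append(q)
    clusters.append(np.array(current))
    return clusters

def geometric_car_label(cluster):
    """Hand-coded rule: clusters roughly 1.5-5 m across are labelled as cars."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return bool(1.5 < np.linalg.norm(extent) < 5.0)

def cluster_features(cluster):
    """Simple shape features a learned classifier could consume instead."""
    extent = cluster.max(axis=0) - cluster.min(axis=0)
    return [len(cluster), extent[0], extent[1], float(np.mean(np.linalg.norm(cluster, axis=1)))]

# The geometric rule auto-annotates many scans; its labels train the classifier.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(20):
    scan = rng.uniform(-10, 10, size=(200, 2))
    scan = scan[np.argsort(np.arctan2(scan[:, 1], scan[:, 0]))]  # order points by bearing
    for c in cluster_scan(scan):
        if len(c) >= 3:
            X.append(cluster_features(c))
            y.append(int(geometric_car_label(c)))

if len(set(y)) > 1:  # guard against degenerate toy data
    learned_detector = LogisticRegression(max_iter=1000).fit(X, y)
```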

Contrastive Learning for Image Registration in Visual Teach and Repeat Navigation

  • DOI: 10.3390/s22082975
  • Link: https://doi.org/10.3390/s22082975
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    Visual teach and repeat navigation (VT&R) is popular in robotics thanks to its simplicity and versatility. It enables mobile robots equipped with a camera to traverse learned paths without the need to create globally consistent metric maps. Although teach and repeat frameworks have been reported to be relatively robust to changing environments, they still struggle with day-to-night and seasonal changes. This paper aims to find the horizontal displacement between prerecorded and currently perceived images required to steer a robot towards the previously traversed path. We employ a fully convolutional neural network to obtain dense representations of the images that are robust to changes in the environment and variations in illumination. The proposed model achieves state-of-the-art performance on multiple datasets with seasonal and day/night variations. In addition, our experiments show that it is possible to use the model to generate additional training examples that can be used to further improve the original model's robustness. We also conducted a real-world experiment on a mobile robot to demonstrate the suitability of our method for VT&R.
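As a rough illustration of the registration step described above, the sketch below slides dense per-column descriptors of the live image over those of the taught image and picks the horizontal shift with the highest normalised correlation. The column-mean "descriptor" is only a stand-in for the learned fully convolutional representation, chosen so the example runs without a trained model.

```python
import numpy as np

def column_embedding(image):
    """Stand-in for the learned dense representation: one descriptor per image column."""
    return image.mean(axis=0)

def horizontal_displacement(map_image, live_image, max_shift=64):
    """Return the horizontal shift (in pixels) with the highest normalised correlation
    between the descriptors of the taught (map) and currently perceived (live) images."""
    a = column_embedding(map_image)
    b = column_embedding(live_image)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            score = np.dot(a[s:], b[:len(b) - s]) / (len(a) - s)
        else:
            score = np.dot(a[:s], b[-s:]) / (len(a) + s)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

rng = np.random.default_rng(1)
map_image = rng.random((120, 320))
live_image = np.roll(map_image, 17, axis=1)             # simulate a 17-pixel horizontal offset
print(horizontal_displacement(map_image, live_image))   # prints -17: under this sign convention
                                                        # the live view must shift back by 17 px
```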

Embedding Weather Simulation in Auto-Labelling Pipelines Improves Vehicle Detection in Adverse Conditions

  • DOI: 10.3390/s22228855
  • Link: https://doi.org/10.3390/s22228855
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    The performance of deep learning-based detection methods has made them an attractive option for robotic perception. However, their training typically requires large volumes of data containing all the various situations the robots may potentially encounter during their routine operation. Thus, the workforce required for data collection and annotation is a significant bottleneck when deploying robots in the real world. This applies especially to outdoor deployments, where robots have to face various adverse weather conditions. We present a method that allows an independent car transporter to train its neural networks for vehicle detection without human supervision or annotation. We provide the robot with a hand-coded algorithm for detecting cars in LiDAR scans in favourable weather conditions and complement this algorithm with a tracking method and a weather simulator. As the robot traverses its environment, it can collect data samples, which can be subsequently processed into training samples for the neural networks. As the tracking method is applied offline, it can exploit the detections made both before the currently processed scan and any subsequent future detections of the current scene, meaning the quality of the annotations exceeds that of the raw detections. Along with the acquisition of the labels, the weather simulator is able to alter the raw sensory data, which are then fed into the neural network together with the labels. We show how this pipeline, being run in an offline fashion, can exploit off-the-shelf weather simulation for the auto-labelling training scheme in a simulator-in-the-loop manner. We show how such a framework produces an effective detector and how the weather simulator-in-the-loop is beneficial for the robustness of the detector. Thus, our automatic data annotation pipeline significantly reduces not only the data annotation but also the data collection effort. This allows the integration of deep learning algorithms into existing robotic systems without the need for tedious data annotation and collection in all possible situations. Moreover, the method provides annotated datasets that can be used to develop other methods. To promote the reproducibility of our research, we provide our datasets, codes and models online.
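A minimal sketch of two ingredients of the pipeline above: offline label recovery, which fills in frames the online detector missed by interpolating between past and future detections, and a toy weather corruption applied to raw scans before they are paired with the recovered labels. Both functions are simplifying assumptions, not the tracker or weather simulator used in the paper.

```python
import numpy as np

def fill_missed_detections(track):
    """Offline label recovery: where the hand-coded detector missed the car (None),
    interpolate its position from detections made both before and after the gap."""
    known = [i for i, p in enumerate(track) if p is not None]
    xs = np.interp(range(len(track)), known, [track[k][0] for k in known])
    ys = np.interp(range(len(track)), known, [track[k][1] for k in known])
    return list(zip(xs, ys))

def simulate_adverse_weather(points, dropout=0.3, noise=0.05, rng=np.random.default_rng(2)):
    """Toy stand-in for the weather simulator: drop lidar returns and perturb the rest."""
    keep = rng.random(len(points)) > dropout
    return points[keep] + rng.normal(0.0, noise, size=(int(keep.sum()), points.shape[1]))

# Per-frame detections of one parked car; frames 2 and 3 were missed online.
detections = [(10.0, 2.0), (10.5, 2.0), None, None, (12.0, 2.1), (12.5, 2.1)]
labels = fill_missed_detections(detections)           # every frame now carries a label

# Each raw scan is corrupted by the simulated weather before being paired with
# its recovered label, yielding a training sample for the adverse-condition detector.
rng = np.random.default_rng(3)
raw_scan = rng.uniform(-20, 20, size=(500, 3))
training_sample = (simulate_adverse_weather(raw_scan), labels[2])
```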

Occlusion-Based Coordination Protocol Design for Autonomous Robotic Shepherding Tasks

  • Authors: Hu, J., Turgut, A.E., doc. Ing. Tomáš Krajník, Ph.D., Lennox, B., Arvin, F.
  • Publication: IEEE Transactions on Cognitive and Developmental Systems. 2022, 14(1), 126-135. ISSN 2379-8920.
  • Year: 2022
  • DOI: 10.1109/TCDS.2020.3018549
  • Link: https://doi.org/10.1109/TCDS.2020.3018549
  • Department: Centrum umělé inteligence
  • Annotation:
    The robotic shepherding problem has earned significant research interest over the last few decades due to its potential application in precision agriculture. In this paper, we first modeled the sheep flocking behavior using adaptive protocols and artificial potential field methods. Then we designed a coordination algorithm for the robotic dogs. An occlusion-based motion control strategy was proposed to herd the sheep to the desired location. Compared to formation-based techniques, the proposed control strategy provides more flexibility and efficiency when herding a large number of sheep. Simulation and lab-based experiments, using real robots and a global vision-based tracking system, were carried out to validate the effectiveness of the proposed approach.

Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation

  • DOI: 10.3390/s22082836
  • Link: https://doi.org/10.3390/s22082836
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    The performance of deep neural networks and the low costs of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with appearance changes caused by day-night cycles and seasonal variations. However, deep learning of neural networks typically relies on large numbers of hand-annotated images, which requires significant effort for data collection and annotation. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increases, making the robot capable of dealing with significant changes in the environment. This method can significantly reduce the data annotation efforts when designing new robotic systems or introducing robots into new environments. Moreover, the method provides annotated datasets that can be deployed in other navigation systems. To promote the reproducibility of the research presented herein, we provide our datasets, codes and trained models online.

Semi-supervised Learning for Image Alignment in Teach and Repeat Navigation

  • DOI: 10.1145/3477314.3507045
  • Link: https://doi.org/10.1145/3477314.3507045
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    Visual teach and repeat navigation (VT&R) is a framework that enables mobile robots to traverse previously learned paths. In principle, it relies on computer vision techniques that can compare the camera's current view to a model based on the images captured during the teaching phase. However, these techniques are usually not robust enough when significant changes occur in the environment between the teach and repeat phases. In this paper, we show that contrastive learning methods can learn how the environment changes and improve the robustness of a VT&R framework. We apply a fully convolutional Siamese network to register the images of the teaching and repeat phases. The horizontal displacement between the images is then used in a visual servoing manner to keep the robot on the intended trajectory. The experiments performed on several datasets containing seasonal variations indicate that our method outperforms state-of-the-art algorithms tailored to the purpose of registering images captured in different seasons.
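To make the visual-servoing step concrete, the hedged sketch below converts an estimated horizontal displacement into an angular-velocity correction while the forward speed stays fixed. The linear pixel-to-angle mapping, the field of view and the gain are simplifying assumptions rather than values from the paper.

```python
import math

def steering_correction(displacement_px, image_width=640, hfov_deg=90.0, gain=0.5):
    """Turn the horizontal displacement estimated by the image-registration network
    into an angular-velocity command (rad/s)."""
    angle_per_px = math.radians(hfov_deg) / image_width
    heading_error = displacement_px * angle_per_px
    return -gain * heading_error   # steer so the current view re-aligns with the taught one

# A 32-pixel displacement produces a gentle corrective turn.
print(steering_correction(32.0))
```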

Toward Benchmarking of Long-Term Spatio-Temporal Maps of Pedestrian Flows for Human-Aware Navigation

  • DOI: 10.3389/frobt.2022.890013
  • Link: https://doi.org/10.3389/frobt.2022.890013
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    Despite the advances in mobile robotics, the introduction of autonomous robots in human-populated environments is rather slow. One of the fundamental reasons is the acceptance of robots by people directly affected by a robot's presence. Understanding human behavior and dynamics is essential for planning when and how robots should traverse busy environments without disrupting people's natural motion and causing irritation. Research has exploited various techniques to build spatio-temporal representations of people's presence and flows and compared their applicability to plan optimal paths in the future. Many comparisons of dynamic map-building techniques show how one method compares against another on a particular dataset, but without consistent datasets and high-quality comparison metrics, it is difficult to assess how these various methods compare as a whole and in specific tasks. This article proposes a methodology for creating high-quality criteria with interpretable results for comparing long-term spatio-temporal representations for human-aware path planning and human-aware navigation scheduling. Two criteria derived from the methodology are then applied to compare the representations built by the techniques found in the literature. The approaches are compared on a real-world, long-term dataset, and the concept is validated in a field experiment on a robotic platform deployed in a human-populated environment. Our results indicate that continuous spatio-temporal methods independently modeling spatial and temporal phenomena outperformed other modeling approaches. Our results provide a baseline for future work to compare a wide range of methods employed for long-term navigation and provide researchers with an understanding of how these various methods compare in various scenarios.

Towards Fast Fiducial Marker with full 6 DOF Pose Estimation

  • DOI: 10.1145/3477314.3507043
  • Link: https://doi.org/10.1145/3477314.3507043
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    This paper proposes a new method for the full 6 degrees of freedom pose estimation of a circular fiducial marker. This circular black-and-white planar marker provides unique and versatile identification of individual markers while maintaining real-time detection. Such a marker, and the vision localisation system based on it, is suitable for both external and self-localisation. Together with an off-the-shelf camera, the marker aims to provide sufficient pose estimation accuracy to substitute for current high-end localisation systems. In order to assess the performance of our proposed marker system, we evaluate its capabilities against the current state-of-the-art methods in terms of their ability to estimate the 2D and 3D positions. For this purpose, a real-world dataset, inspired by typical applications in mobile and swarm robotics, was collected, as the performance under real conditions provides better insights into the method's potential than an artificially simulated environment. The experiments performed show that the method presented here achieved three times the accuracy of the marker it was derived from.

A Quantifiable Stratification Strategy for Tidy-up in Service Robotics

  • Authors: Yan, Z., Crombez, N., Buisson, J., Ruichek, Y., doc. Ing. Tomáš Krajník, Ph.D., Sun, L.
  • Publication: Proceedings of IEEE Workshop on Advanced Robotics and its Social Impacts. IEEE Xplore, 2021. p. 182-187. ISSN 2162-7568. ISBN 978-1-6654-4953-3.
  • Year: 2021
  • DOI: 10.1109/ARSO51874.2021.9542842
  • Link: https://doi.org/10.1109/ARSO51874.2021.9542842
  • Department: Centrum umělé inteligence
  • Annotation:
    This paper addresses the problem of tidying up a living room in a messy condition with a service robot (i.e. a domestic mobile manipulator). One of the key issues in completing such a task is how to continuously select the object to grasp and take it to the delivery area, especially when the robot works in constrained and partially observable environments. In this paper, we propose a quantifiable stratification method that allows the robot to find feasible action plans according to different configurations of objects-deposits, in order to smoothly deliver the objects to the target deposits. Specifically, it leverages a finite-state machine obeying the principle of Occam's razor (called O-FSM), which is designed to integrate arbitrary user-defined action plans typically ranging from simple to complex. Instead of considering a sophisticated model for the ever-changing objects-deposits configuration in the tidy-up task, we empower the robot to make simple yet effective decisions based on the configuration it currently faces under a generalized framework. Through scenario planning and simulation experiments with explicitly designed test cases based on the real robot and the real competition scene, the effectiveness of our method is illustrated.

Bio-inspired Artificial Pheromone System for Swarm Robotics Applications

  • DOI: 10.1177/1059712320918936
  • Link: https://doi.org/10.1177/1059712320918936
  • Department: Centrum umělé inteligence
  • Annotation:
    Pheromones are chemical substances released into the environment by an individual animal, which elicit stereotyped behaviours widely found across the animal kingdom. Inspired by the effective use of pheromones in social insects, pheromonal communication has been adopted in the swarm robotics domain using diverse approaches such as alcohol, RFID tags and light. COSΦ is one of the light-based artificial pheromone systems, which can emulate realistic pheromone and environment properties. This article provides a significant improvement to the state-of-the-art by proposing a novel artificial pheromone system that simulates pheromones with environmental effects by adopting a model of the spatio-temporal development of pheromone derived from fluid flow in nature. Using the proposed system, we investigated the collective behaviour of a robot swarm in a bio-inspired aggregation scenario, where robots aggregated on a circular pheromone cue with different environmental factors, that is, diffusion and pheromone shift. The results demonstrated the feasibility of the proposed pheromone system for use in swarm robotic applications.
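A hedged sketch of the kind of spatio-temporal pheromone model described above: a grid-based field updated by diffusion, advection (a constant "wind" shift) and evaporation. The parameter values and the simple grid update are illustrative assumptions; in the actual system the field is rendered on a horizontal LCD beneath the robots.

```python
import numpy as np

def pheromone_step(field, evaporation=0.02, diffusion=0.1, wind=(0, 1)):
    """One update of a grid-based pheromone field: 4-neighbour diffusion,
    advection by a constant wind shift, and evaporation."""
    neighbours = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
                  np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)) / 4.0
    field = (1.0 - diffusion) * field + diffusion * neighbours   # diffusion
    field = np.roll(field, wind, axis=(0, 1))                    # advection ("wind")
    return (1.0 - evaporation) * field                           # evaporation

field = np.zeros((120, 160))
for _ in range(200):
    field[60, 80] += 1.0        # a (virtual) robot keeps depositing pheromone here
    field = pheromone_step(field)
```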

Boosting the Performance of Object Detection CNNs with Context-Based Anomaly Detection

  • Authors: Ing. Jan Blaha, Broughton, G., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering. Springer Nature, 2021. p. 159-176. vol. 349. ISSN 1867-8211. ISBN 978-3-030-67536-3.
  • Year: 2021
  • DOI: 10.1007/978-3-030-67537-0_11
  • Link: https://doi.org/10.1007/978-3-030-67537-0_11
  • Department: Centrum umělé inteligence
  • Annotation:
    In this paper, we employ anomaly detection methods to enhance the ability of object detectors by using the context of their detections. This has numerous potential applications, from boosting the performance of standard object detectors, to the preliminary validation of annotation quality, and even robotic exploration and object search. We build our method on autoencoder networks for detecting anomalies, where we do not try to filter incoming data based on an anomaly score as is usual, but instead focus on the individual features of the data representing an actual scene. We show that one can teach autoencoders about the contextual relationship of objects in images, i.e. the likelihood of co-detecting classes in the same scene. This can then be used to identify detections that do and do not fit with the rest of the current observations in the scene. We show that the use of this information yields better results than using traditional thresholding when deciding if weaker detections are actually classed as observed or not. The experiments performed not only show that our method significantly improves the performance of CNN object detectors, but that it can be used as an efficient tool to discover incorrectly-annotated images.
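The context idea above can be sketched with a toy autoencoder over per-image class counts: the network learns which classes typically co-occur, and the per-class reconstruction error exposes detections that do not fit the scene. The class list, the synthetic co-occurrence statistics and the scikit-learn regressor used as the autoencoder are all assumptions made purely for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row counts the detections of each class reported for one image.
classes = ["person", "car", "bicycle", "boat"]
rng = np.random.default_rng(4)
scenes = np.column_stack([rng.poisson(3, 500),     # people,
                          rng.poisson(5, 500),     # cars and
                          rng.poisson(2, 500),     # bicycles co-occur in street scenes,
                          np.zeros(500)])          # while boats essentially never do

# A small autoencoder (inputs regressed onto themselves) learns typical co-occurrence.
autoencoder = MLPRegressor(hidden_layer_sizes=(2,), max_iter=3000,
                           random_state=0).fit(scenes, scenes)

# Several 'boat' detections in a street scene reconstruct to roughly zero, so the
# per-class reconstruction error singles them out as contextually implausible,
# instead of deciding their fate with a plain confidence threshold.
query = np.array([[3, 5, 2, 4]])
per_class_error = np.abs(autoencoder.predict(query) - query)[0]
print(dict(zip(classes, per_class_error.round(2))))
```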

Cooperative Pollution Source Exploration and Cleanup with a Bio-inspired Swarm Robot Aggregation

  • Authors: Sadeghi Amjadi, A., Raufi, M., Turgut, A.E., Broughton, G., doc. Ing. Tomáš Krajník, Ph.D., Arvin, F.
  • Publication: Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering. Berlin: Springer Science+Business Media, 2021. p. 469-481. ISSN 1867-8211. ISBN 978-3-030-67539-4.
  • Year: 2021
  • DOI: 10.1007/978-3-030-67540-0_30
  • Link: https://doi.org/10.1007/978-3-030-67540-0_30
  • Department: Centrum umělé inteligence
  • Annotation:
    Using robots for exploration of extreme and hazardous environments has the potential to significantly improve human safety. For example, robotic solutions can be deployed to find the source of a chemical leakage and clean the contaminated area. This paper demonstrates a proof-of-concept bio-inspired exploration method using a swarm robotic system based on a combination of two bio-inspired behaviors: aggregation and pheromone tracking. The main idea of the work presented is to follow pheromone trails to find the source of a chemical leakage and then carry out a decontamination task by aggregating at the critical zone. In experiments conducted with a simulated model of a Mona robot, the effects of population size and robot speed on the ability of the swarm were evaluated in a decontamination task. The results indicate the feasibility of deploying robotic swarms in an exploration and cleaning task in an extreme environment.

Learning to see through the haze: Multi-sensor learning-fusion System for Vulnerable Traffic Participant Detection in Fog

  • DOI: 10.1016/j.robot.2020.103687
  • Link: https://doi.org/10.1016/j.robot.2020.103687
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    We present an experimental investigation of a multi-sensor fusion-learning system for detecting pedestrians in foggy weather conditions. The method combines two pipelines for people detection running on two different sensors commonly found on moving vehicles: lidar and radar. The two pipelines are not only combined by sensor fusion, but information from one pipeline is used to train the other. We build upon our previous work, where we showed that a lidar pipeline can be used to train a Support Vector Machine (SVM)-based pipeline to interpret radar data, which is useful when conditions become unfavourable to the original lidar pipeline. In this paper, we test the method on a wider range of conditions, such as from a moving vehicle and with multiple people present. Additionally, we compare how the traditional SVM performs against a modern deep neural network when interpreting the radar data in these experiments. Our experiments indicate that either of the approaches results in progressive improvement in the performance during normal operation. Further, our experiments indicate that in the event of the loss of information from a sensor, pedestrian detection and position estimation is still effective.
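The cross-training idea above, in which one sensor pipeline labels data for another, can be reduced to a minimal sketch: labels produced by a lidar pipeline in clear weather train a classifier on radar-derived features. The random feature vectors and the label rule below are placeholders; only the training pattern reflects the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Placeholder data: each row is a feature vector computed from one radar candidate.
# Real radar and lidar processing is not reproduced here.
radar_features = rng.normal(size=(400, 6))
labels_from_lidar = (radar_features[:, 0] + radar_features[:, 1] > 0).astype(int)

# Train the radar classifier on labels produced by the lidar pipeline in clear weather.
radar_classifier = SVC(kernel="rbf").fit(radar_features, labels_from_lidar)

# When fog degrades the lidar, the radar classifier keeps providing detections.
foggy_frame_features = rng.normal(size=(3, 6))
print(radar_classifier.predict(foggy_frame_features))
```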

LIDAR-based Stabilization, Navigation and Localization for UAVs Operating in Dark Indoor Environments

  • DOI: 10.1109/ICUAS51884.2021.9476837
  • Link: https://doi.org/10.1109/ICUAS51884.2021.9476837
  • Department: Centrum umělé inteligence, Multirobotické systémy
  • Annotation:
    Autonomous operation of UAVs in a closed environment requires a precise and reliable pose estimate that can stabilize the UAV without using external localization systems such as GNSS. In this work, we are concerned with estimating the pose from laser scans generated by an inexpensive and lightweight LIDAR. We propose a localization system for lightweight (under 200 g) LIDAR sensors with high reliability in arbitrary environments, where other methods fail. The general nature of the proposed method allows deployment in a wide array of applications. Moreover, seamless transitioning between different kinds of environments is possible. The advantage of LIDAR localization is that it is robust to poor illumination, which is often challenging for camera-based solutions in dark indoor environments and in the case of the transition between indoor and outdoor environments. Our approach allows executing tasks in poorly-illuminated indoor locations such as historic buildings and warehouses, as well as in tight outdoor environments, such as forests, where vision-based approaches fail due to the large contrast of the scene, and where large well-equipped UAVs cannot be deployed due to the constrained space.

Mobile Manipulator for Autonomous Localization, Grasping and Precise Placement of Construction Material in a Semi-structured Environment

  • DOI: 10.1109/LRA.2021.3061377
  • Link: https://doi.org/10.1109/LRA.2021.3061377
  • Department: Katedra počítačů, Centrum umělé inteligence, Multirobotické systémy
  • Annotation:
    Mobile manipulators have the potential to revolutionize modern agriculture, logistics and manufacturing. In this work, we present the design of a ground-based mobile manipulator for automated structure assembly. The proposed system is capable of autonomous localization, grasping, transportation and deployment of construction material in a semi-structured environment. Special effort was put into making the system invariant to lighting changes, and not reliant on external positioning systems. Therefore, the presented system is self-contained and capable of operating in outdoor and indoor conditions alike. Finally, we present means to extend the perceptive radius of the vehicle by using it in cooperation with an autonomous drone, which provides aerial reconnaissance. Performance of the proposed system has been evaluated in a series of experiments conducted in real-world conditions.

Monocular Teach-and-Repeat Navigation using a Deep Steering Network with Scale Estimation

  • Authors: Zhao, Ch., Sun, L., doc. Ing. Tomáš Krajník, Ph.D., Duckett, T., Yan, Z.
  • Publication: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE, 2021. p. 2613-2619. ISSN 2153-0866. ISBN 978-1-6654-1714-3.
  • Year: 2021
  • DOI: 10.1109/IROS51168.2021.9635912
  • Link: https://doi.org/10.1109/IROS51168.2021.9635912
  • Department: Centrum umělé inteligence
  • Annotation:
    This paper proposes a novel monocular teach-and-repeat navigation system with the capability of scale awareness, i.e. the absolute distance between observation and goal images. It decomposes the navigation task into a sequence of visual servoing sub-tasks to approach consecutive goal/node images in a topological map. To be specific, a novel hybrid model, named deep steering network, is proposed to infer the navigation primitives according to the learned local features and scale for each visual servoing sub-task. A novel architecture, Scale-Transformer, is developed to estimate the absolute scale between the observation and goal image pair from a set of matched deep representations to assist repeating navigation. The experiments demonstrate that our scale-aware teach-and-repeat method achieves satisfactory navigation accuracy and converges faster than the monocular methods without scale correction given an inaccurate initial pose.

Robust and Long-term Monocular Teach and Repeat Navigation using a Single-experience Map

  • Authors: Sun, L., Taher, M., Wild, C., Zhao, C., Zhang, Y., Majer, F., Yan, Z., doc. Ing. Tomáš Krajník, Ph.D., Prescott, T., Duckett, T.
  • Publication: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE, 2021. p. 2635-2642. ISSN 2153-0866. ISBN 978-1-6654-1714-3.
  • Year: 2021
  • DOI: 10.1109/IROS51168.2021.9635886
  • Link: https://doi.org/10.1109/IROS51168.2021.9635886
  • Department: Centrum umělé inteligence
    This paper presents a robust monocular visual teach-and-repeat (VT&R) navigation system for long-term operation in outdoor environments. The approach leverages deep-learned descriptors to deal with the high illumination variance of the real world. In particular, a tailored self-supervised descriptor, DarkPoint, is proposed for autonomous navigation in outdoor environments. We seamlessly integrate the localisation with control, in which proportional-integral control is used to eliminate the visual error with the pitfall of the unknown depth. Consequently, our approach achieves day-to-night navigation using a single-experience map and is able to repeat complex and fast manoeuvres. To verify our approach, we performed a vast array of navigation experiments in various outdoor environments, where both navigation accuracy and robustness of the proposed system are investigated. The experimental results show that our approach is superior to the baseline method with regards to accuracy and robustness.

Robust Image Alignment for Outdoor Teach-and-Repeat Navigation

  • DOI: 10.1109/ECMR50962.2021.9568832
  • Link: https://doi.org/10.1109/ECMR50962.2021.9568832
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    Visual Teach-and-Repeat robot navigation suffers from environmental changes over time, and it struggles in real-world long-term deployments. We propose a robust robot bearing correction method based on traditional principles, aided by exploiting the abstraction from higher layers of widely available pre-trained Convolutional Neural Networks (CNNs). Our method applies a two-dimensional Discrete Fast Fourier Transform-based approach over several different convolution filters from higher levels of a CNN to robustly estimate the alignment between two corresponding images. The method also estimates its uncertainty, which is essential for the navigation system to decide how much it can trust the bearing correction. We show that our "learning-free" method is comparable with the state-of-the-art methods when the environmental conditions are changed only slightly, but it outperforms them at night.
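A minimal sketch of the alignment principle described above: two 1-D activation profiles are registered by FFT-based cross-correlation, and the normalised correlation peak serves as a crude confidence value. The random profiles stand in for responses pooled from higher convolutional layers of a pre-trained CNN; the actual method combines several filters and a richer uncertainty estimate.

```python
import numpy as np

def fft_alignment(profile_a, profile_b):
    """Estimate the offset between two 1-D activation profiles via FFT-based
    cross-correlation, and return a simple normalised confidence score."""
    a = profile_a - profile_a.mean()
    b = profile_b - profile_b.mean()
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    shift = int(np.argmax(corr))
    if shift > len(a) // 2:                      # map wrap-around to negative shifts
        shift -= len(a)
    confidence = corr.max() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return shift, confidence

rng = np.random.default_rng(6)
map_profile = rng.random(512)
live_profile = np.roll(map_profile, 23)          # the live view is offset by 23 samples
shift, confidence = fft_alignment(live_profile, map_profile)
print(shift, round(confidence, 2))               # 23 and a confidence close to 1.0
```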

A Robust UAV System for Operations in a Constrained Environment

  • DOI: 10.1109/LRA.2020.2970980
  • Link: https://doi.org/10.1109/LRA.2020.2970980
  • Department: Katedra kybernetiky, Centrum umělé inteligence, Multirobotické systémy
  • Annotation:
    In this letter we present an autonomous system intended for aerial monitoring, inspection and assistance in Search and Rescue (SAR) operations within a constrained workspace. The proposed system is designed for deployment in demanding real-world environments with extremely narrow passages only slightly wider than the aerial platform, and with limited visibility due to the absence of illumination and the presence of dust. The focus is on precise localization in an unknown environment, high robustness, safety and fast deployment without any need to install an external infrastructure such as an external computer and localization system. These are the main requirements of the targeted SAR scenarios. The performance of the proposed system was successfully evaluated in the Tunnel Circuit of the DARPA Subterranean Challenge, where the UAV cooperated with ground robots to precisely localize artifacts in a coal mine tunnel system. The challenge was unique due to the intention of the organizers to emulate the unpredictable conditions of a real SAR operation, in which there is no prior knowledge of the obstacles that will be encountered.

CHRONOROBOTICS: Representing the Structure of Time for Service Robots

  • DOI: 10.1145/3440084.3441195
  • Link: https://doi.org/10.1145/3440084.3441195
  • Department: Katedra počítačů, Centrum umělé inteligence
  • Annotation:
    Chronorobotics is the investigation of scientific methods allowing robots to adapt to and learn from the perpetual changes occurring in natural and human-populated environments. We present methods that can introduce the notion of dynamics into spatial environment models, resulting in representations which provide service robots with the ability to predict future states of changing environments. Several long-term experiments indicate that the aforementioned methods gradually improve the efficiency of robots' autonomous operations over time. More importantly, the experiments indicate that chronorobotic concepts improve robots' ability to seamlessly merge into human-populated environments, which is important for their integration and acceptance in human societies.

DARPA Subterranean Challenge: Multi-robotic exploration of underground environments

  • DOI: 10.1007/978-3-030-43890-6_22
  • Link: https://doi.org/10.1007/978-3-030-43890-6_22
  • Department: Centrum umělé inteligence, Vidění pro roboty a autonomní systémy, Multirobotické systémy
  • Annotation:
    The Subterranean Challenge (SubT) is a contest organised by the Defense Advanced Research Projects Agency (DARPA). The contest reflects the requirement of increasing the safety and efficiency of underground search-and-rescue missions. In the SubT challenge, teams of mobile robots have to detect, localise and report positions of specific objects in an underground environment. This paper provides a description of the multi-robot heterogeneous exploration system of our CTU-CRAS team, which scored third place in the Tunnel Circuit round, surpassing the performance of all other non-DARPA-funded competitors. In addition to the description of the platforms, algorithms and strategies used, we also discuss the lessons learned by participating in such a contest.

EU Long-term Dataset with Multiple Sensors for Autonomous Driving

  • Authors: Yan, Z., Sun, L., doc. Ing. Tomáš Krajník, Ph.D., Ruichek, Y.
  • Publication: Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Los Alamitos: IEEE Computer Society, 2020. p. 10697-10704. ISSN 2153-0858. ISBN 978-1-7281-6212-6.
  • Year: 2020
  • DOI: 10.1109/IROS45743.2020.9341406
  • Link: https://doi.org/10.1109/IROS45743.2020.9341406
  • Department: Centrum umělé inteligence
  • Annotation:
    The field of autonomous driving has grown tremendously over the past few years, along with the rapid progress in sensor technology. One of the major purposes of using sensors is to provide environment perception for vehicle understanding, learning and reasoning, and ultimately interacting with the environment. In this paper, we first introduce a multisensor platform allowing the vehicle to perceive its surroundings and locate itself in a more efficient and accurate way. The platform integrates eleven heterogeneous sensors, including various cameras and lidars, a radar, an IMU (Inertial Measurement Unit), and a GPS-RTK (Global Positioning System / Real-Time Kinematic), while exploiting ROS (Robot Operating System) based software to process the sensory data. Then, we present a new dataset (https://epan-utbm.github.io/utbm_robocar_dataset/) for autonomous driving that captures many new research challenges (e.g. a highly dynamic environment), especially for long-term autonomy (e.g. creating and maintaining maps). The dataset was collected with our instrumented vehicle and is publicly available to the community.

Natural Criteria for Comparison of Pedestrian Flow Forecasting Models

  • Authors: Vintr, T., Yan, Z., Eyisoy, K., Kubiš, F., Ing. Jan Blaha, Ing. Jiří Ulrich, Swaminathan, C., Molina, S., Kucner, T.P., Magnusson, M., Cielniak, G., prof. Ing. Jan Faigl, Ph.D., Duckett, T., Lilienthal, A.J., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems. Piscataway: IEEE Robotics and Automation Society, 2020. p. 11197-11204. ISSN 2153-0866. ISBN 978-1-7281-6212-6.
  • Year: 2020
  • DOI: 10.1109/IROS45743.2020.9341672
  • Link: https://doi.org/10.1109/IROS45743.2020.9341672
  • Department: Centrum umělé inteligence
    Models of human behaviour, such as pedestrian flows, are beneficial for safe and efficient operation of mobile robots. We present a new methodology for benchmarking of pedestrian flow models based on the afforded safety of robot navigation in human-populated environments. While previous evaluations of pedestrian flow models focused on their predictive capabilities, we assess their ability to support safe path planning and scheduling. Using real-world datasets gathered continuously over several weeks, we benchmark state-of-the-art pedestrian flow models, including both time-averaged and time-sensitive models. In the evaluation, we use the learned models to plan robot trajectories and then observe the number of times when the robot gets too close to humans, using a predefined social distance threshold. The experiments show that while traditional evaluation criteria based on model fidelity differ only marginally, the introduced criteria vary significantly depending on the model used, providing a natural interpretation of the expected safety of the system. For the time-averaged flow models, the number of encounters increases linearly with the percentage operating time of the robot, as might be reasonably expected. By contrast, for the time-sensitive models, the number of encounters grows sublinearly with the percentage operating time, by planning to avoid congested areas and times.
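The evaluation criterion described above can be illustrated with a hedged sketch: given a planned robot trajectory and pedestrian positions sampled from a flow model, count the time steps in which the robot violates a social-distance threshold. The threshold value and the synthetic data below are assumptions for illustration only.

```python
import numpy as np

SOCIAL_DISTANCE = 1.0   # metres; the actual threshold is defined by the benchmark

def count_close_encounters(robot_plan, predicted_people):
    """Count time steps in which the planned robot position gets closer than the
    social-distance threshold to any pedestrian predicted by the flow model.
    robot_plan: (T, 2) positions; predicted_people: list of (N_t, 2) arrays per step."""
    encounters = 0
    for robot_xy, people in zip(robot_plan, predicted_people):
        if len(people) and np.min(np.linalg.norm(people - robot_xy, axis=1)) < SOCIAL_DISTANCE:
            encounters += 1
    return encounters

# Synthetic example: a straight 10 m traverse evaluated against sampled pedestrians.
rng = np.random.default_rng(7)
plan = np.linspace([0.0, 0.0], [10.0, 0.0], 50)
people_per_step = [rng.uniform([0.0, -3.0], [10.0, 3.0], size=(4, 2)) for _ in range(50)]
print(count_close_encounters(plan, people_per_step))
```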

Raindrop Removal With Light Field Image Using Image Inpainting

  • Authors: Yang, T., Chang, X., Su, H., Crombez, N., Ruichek, Y., doc. Ing. Tomáš Krajník, Ph.D., Yan, Z.
  • Publication: IEEE Access. 2020, 2020(8), 58416-58426. ISSN 2169-3536.
  • Year: 2020
  • DOI: 10.1109/ACCESS.2020.2981641
  • Link: https://doi.org/10.1109/ACCESS.2020.2981641
  • Department: Centrum umělé inteligence
  • Annotation:
    In this paper, we propose a method that removes raindrops from light field images using image inpainting. We first use the depth map generated from the light field image to detect raindrop regions, which are then expressed as a binary mask. The original image with raindrops is improved by refocusing on the far regions and filtering with a high-pass filter. With the binary mask and the enhanced image, image inpainting is then utilized to eliminate raindrops from the original image. We compare pre-trained models of several deep learning based image inpainting methods. A light field raindrop dataset is released to verify our method. Image quality analysis is performed to evaluate the proposed image restoration method. The recovered images are further applied to object detection and visual localization tasks.
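A toy sketch of the restoration step above: a binary raindrop mask (in the paper derived from the light-field depth map) is inpainted out of the image. Classical OpenCV inpainting is used here instead of the learned inpainting models compared in the paper, purely so the example runs without pre-trained weights; the synthetic image and mask are placeholders.

```python
import numpy as np
import cv2

rng = np.random.default_rng(8)
image = (rng.random((240, 320, 3)) * 255).astype(np.uint8)   # placeholder photograph

# Binary mask marking two synthetic 'raindrop' regions to be removed.
raindrop_mask = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(raindrop_mask, (160, 120), 12, 255, -1)
cv2.circle(raindrop_mask, (60, 200), 8, 255, -1)

# Fill the masked regions from their surroundings (Telea's inpainting method).
restored = cv2.inpaint(image, raindrop_mask, 5, cv2.INPAINT_TELEA)
```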

A Versatile Visual Navigation System for Autonomous Vehicles

  • Authors: Majer, F., Halodová, L., Vintr, T., Dlouhý, M., Merenda, L., Fentanes, J.P., Portugal, D., Couceiro, M., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Modelling and Simulation for Autonomous Systems (MESAS 2018). Cham: Springer International Publishing AG, 2019. p. 90-110. ISSN 0302-9743. ISBN 978-3-030-14983-3.
  • Year: 2019
  • DOI: 10.1007/978-3-030-14984-0_8
  • Link: https://doi.org/10.1007/978-3-030-14984-0_8
  • Department: Centrum umělé inteligence
  • Annotation:
    We present a universal visual navigation method which allows a vehicle to autonomously repeat paths previously taught by a human operator. The method is computationally efficient and does not require camera calibration. It can learn and autonomously traverse arbitrarily shaped paths and is robust to appearance changes induced by varying outdoor illumination and naturally-occurring environment changes. The method does not perform explicit position estimation in 2D/3D space, but relies on a novel mathematical theorem, which allows fusing exteroceptive and interoceptive sensory data in a way that ensures navigation accuracy and reliability. The experiments performed indicate that the proposed navigation method can accurately guide different autonomous vehicles along the desired path. The presented system, which was already deployed in patrolling scenarios, is provided as open source at www.github.com/gestom/stroll_bearnav.

Adaptive Image Processing Methods for Outdoor Autonomous Vehicles

  • Authors: Halodová, L., Dvořáková, E., Majer, F., Ing. Jiří Ulrich, Vintr, T., Kusumam, K., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Modelling and Simulation for Autonomous Systems. Basel: Springer, 2019. p. 456-476. LNCS. vol. 11472. ISSN 0302-9743. ISBN 978-3-030-14983-3.
  • Year: 2019
  • DOI: 10.1007/978-3-030-14984-0_34
  • Link: https://doi.org/10.1007/978-3-030-14984-0_34
  • Department: Centrum umělé inteligence
  • Annotation:
    This paper concerns adaptive image processing for visual teach-and-repeat navigation systems of autonomous vehicles operating outdoors. The robustness and the accuracy of these systems rely on their ability to extract relevant information from the on-board camera images, which is then used for autonomous navigation and map building. In this paper, we present methods that allow an image-based navigation system to adapt to the varying appearance of outdoor environments caused by dynamic illumination conditions and naturally occurring environment changes. In the performed experiments, we demonstrate that the adaptive and learning methods for camera parameter control, image feature extraction and environment map refinement allow autonomous vehicles to operate in a real, changing world for extended periods of time.

Cooperative autonomous search, grasping, and delivering in a treasure hunt scenario by a team of unmanned aerial vehicles

  • DOI: 10.1002/rob.21816
  • Link: https://doi.org/10.1002/rob.21816
  • Department: Centrum umělé inteligence, Multirobotické systémy
  • Annotation:
    This paper addresses the problem of autonomous cooperative localization, grasping and delivering of colored ferrous objects by a team of unmanned aerial vehicles (UAVs). In the proposed scenario, a team of UAVs is required to maximize the reward by collecting colored objects and delivering them to a predefined location. This task consists of several subtasks such as cooperative coverage path planning, object detection and state estimation, UAV self‐localization, precise motion control, trajectory tracking, aerial grasping and dropping, and decentralized team coordination. The failure recovery and synchronization job manager is used to integrate all the presented subtasks together and also to decrease the vulnerability to individual subtask failures in real‐world conditions. The whole system was developed for the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) 2017, where it achieved the highest score and won Challenge No. 3-Treasure Hunt. This paper does not only contain results from the MBZIRC 2017 competition but it also evaluates the system performance in simulations and field tests that were conducted throughout the year‐long development and preparations for the competition.

Extended Artificial Pheromone System for Swarm Robotic Applications

  • Authors: Na, S., Raoufi, M., Turgut, A.E., doc. Ing. Tomáš Krajník, Ph.D., Arvin, F.
  • Publication: Proceedings of the Artificial Life Conference 2019. MIT Press, 2019. p. 608-615. ISSN 1064-5462.
  • Year: 2019
  • DOI: 10.1162/isal_a_00228
  • Link: https://doi.org/10.1162/isal_a_00228
  • Department: Centrum umělé inteligence
  • Annotation:
    This paper proposes an artificial pheromone communication system inspired by social insects. The proposed model is an extension of the previously developed pheromone communication system, COS-Φ. The new model increases COS-Φ flexibility by adding two new features, namely diffusion and advection. The proposed system consists of an LCD flat screen that is placed horizontally, an overhead digital camera to track mobile robots, which move on the screen, and a computer, which simulates the pheromone behaviour and visualises its spatial distribution on the LCD. To investigate the feasibility of the proposed pheromone system, real microrobots, Colias, were deployed, which mimicked the insects' role in tracking the pheromone sources. The results showed that, unlike COS-Φ, the proposed system can simulate the impact of environmental characteristics, such as temperature, atmospheric pressure or wind, on the spatio-temporal distribution of the pheromone. Thus, the system allows studying behaviours of pheromone-based robotic swarms in various real-world conditions.

Learning to See Through Haze: Radar-based Human Detection for Adverse Weather Conditions

  • Authors: Majer, F., Yan, Z., Broughton, G., Ruichek, Y., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Proceedings of European Conference on Mobile Robots. Prague: Czech Technical University, 2019. ISBN 978-1-7281-3606-6.
  • Year: 2019
  • DOI: 10.1109/ECMR.2019.8870954
  • Link: https://doi.org/10.1109/ECMR.2019.8870954
  • Department: Centrum umělé inteligence
  • Annotation:
    In this paper, we present a lifelong-learning multisensor system for pedestrian detection in adverse weather conditions. The proposed method combines two people detection pipelines which process data provided by a lidar and an ultrawideband radar. The outputs of these pipelines are combined not only by means of adaptive sensor fusion, but they can also be used to help one another learn. In particular, the lidar-based detector provides labels to the incoming radar data, efficiently training the radar data classifier. In several experiments, we show that the proposed learning-fusion not only results in a gradual improvement of the system performance during routine operation, but also efficiently deals with lidar detection failures caused by thick fog conditions.

Predictive and Adaptive Maps for Long-term Visual Navigation in Changing Environments

  • Authors: Halodová, L., Dvořáková, E., Majer, F., Vintr, T., Mozos, O.M., Dayoub, F., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway, NJ: IEEE, 2019. p. 7033-7039. ISSN 2153-0866. ISBN 978-1-7281-4004-9.
  • Year: 2019
  • DOI: 10.1109/IROS40897.2019.8967994
  • Link: https://doi.org/10.1109/IROS40897.2019.8967994
  • Department: Centrum umělé inteligence
  • Annotation:
    In this paper, we compare different map management techniques for long-term visual navigation in changing environments. In this scenario, the navigation system needs to continuously update and refine its feature map in order to adapt to the environment appearance change. To achieve reliable long-term navigation, the map management techniques have to (i) select features useful for the current navigation task, (ii) remove features that are obsolete, and (iii) add new features from the current camera view to the map. We propose several map management strategies and evaluate their performance with regard to the robot localisation accuracy in long-term teach-and-repeat navigation. Our experiments, performed over three months, indicate that strategies which model cyclic changes of the environment appearance and predict which features are going to be visible at a particular time and location outperform strategies which do not explicitly model the temporal evolution of the changes.

PΦSS: An Open-Source Experimental Setup for Real-World Implementation of Swarm Robotic Systems in Long-Term Scenarios

  • Authors: Arvin, F., doc. Ing. Tomáš Krajník, Ph.D., Emre, T.A.
  • Publication: Modelling and Simulation for Autonomous Systems (MESAS 2018). Cham: Springer International Publishing AG, 2019. p. 351-364. ISSN 0302-9743. ISBN 978-3-030-14983-3.
  • Year: 2019
  • DOI: 10.1007/978-3-030-14984-0_26
  • Link: https://doi.org/10.1007/978-3-030-14984-0_26
  • Department: Centrum umělé inteligence
  • Annotation:
    Swarm robotics is a relatively new research field that employs multiple robots (tens, hundreds or even thousands) that collaborate on complex tasks. There are several issues which limit the real-world application of swarm robotic scenarios, e.g. autonomy time, communication methods, and cost of commercialised robots. We present a platform which aims to overcome the aforementioned limitations while using off-the-shelf components and freely-available software. The platform combines (i) a versatile open-hardware micro-robot capable of local and global communication, (ii) commercially-available wireless charging modules which provide virtually unlimited robot operation time, (iii) an open-source marker-based robot tracking system for automated experiment evaluation, and (iv) an LCD display or a light projector to simulate environmental cues and pheromone communication. To demonstrate the versatility of the system, we present several scenarios where our system was used.

Spatio-temporal Representation for Long-term Anticipation of Human Presence in Service Robotics

  • Authors: Vintr, T., Yan, Z., Duckett, T., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Proceedings of 2019 International Conference on Robotics and Automation. IEEE Xplore, 2019. p. 2620-2626. ISSN 1050-4729. ISBN 978-1-5386-6026-3.
  • Year: 2019
  • DOI: 10.1109/ICRA.2019.8793534
  • Link: https://doi.org/10.1109/ICRA.2019.8793534
  • Department: Centrum umělé inteligence
  • Annotation:
    We propose an efficient spatio-temporal model for mobile autonomous robots operating in human populated environments. Our method aims to model periodic temporal patterns of people presence, which are based on people's routines and habits. The core idea is to project the time onto a set of wrapped dimensions that represent the periodicities of people presence. Extending a 2D spatial model with this multidimensional representation of time results in a memory-efficient spatio-temporal model. This model is capable of long-term predictions of human presence, allowing mobile robots to schedule their services better and to plan their paths. The experimental evaluation, performed over datasets gathered by a robot over a period of several weeks, indicates that the proposed method achieves more accurate predictions than the previous state of the art used in robotics.

Spatiotemporal Models of Human Activity for Robotic Patrolling

  • Authors: Vintr, T., Eyisoy, K., Vintrová, V., Ruichek, Y., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Modelling and Simulation for Autonomous Systems (MESAS 2018). Cham: Springer International Publishing AG, 2019. p. 54-64. ISSN 0302-9743. ISBN 978-3-030-14983-3.
  • Year: 2019
  • DOI: 10.1007/978-3-030-14984-0_5
  • Link: https://doi.org/10.1007/978-3-030-14984-0_5
  • Department: Centrum umělé inteligence
  • Annotation:
    We present a method that allows autonomous systems to detect anomalous events in human-populated environments through understanding of their structure and how they change over time. We represent the environment by temporary warped space-hypertime continuous models derived from patterns of changes driven by human activities within the observed space. The ability of the method to detect anomalies is evaluated on real-world datasets gathered by robots over the course of several weeks. An earlier version of this approach was already applied to robots that patrolled offices of a global security company (G4S).

Time-varying Pedestrian Flow Models for Service Robots

  • Authors: Vintr, T., Molina, S., Senanayake, R., Broughton, G., Yan, Z., Ing. Jiří Ulrich, Kucner, T.P., Swaminathan, C.S., Majer, F., Stachová, M., Lilienthal, A.J., doc. Ing. Tomáš Krajník, Ph.D.
  • Publication: Proceedings of European Conference on Mobile Robots. Prague: Czech Technical University, 2019. ISBN 978-1-7281-3605-9.
  • Year: 2019
  • DOI: 10.1109/ECMR.2019.8870909
  • Link: https://doi.org/10.1109/ECMR.2019.8870909
  • Department: Centrum umělé inteligence
  • Annotation:
    We present a human-centric spatiotemporal model for service robots operating in densely populated environments for long time periods. The method integrates observations of pedestrians performed by a mobile robot at different locations and times into a memory-efficient model that represents the spatial layout of natural pedestrian flows and how they change over time. To represent temporal variations of the observed flows, our method does not model the time in a linear fashion, but by several dimensions wrapped into themselves. This representation of time can capture long-term (i.e. days to weeks) periodic patterns of people's routines and habits. Knowledge of these patterns allows making long-term predictions of future human presence and walking directions, which can support mobile robot navigation in human-populated environments. Using datasets gathered for several weeks, we compare the model to state-of-the-art methods for pedestrian flow modelling.

Two-Stage Approach for Long-Term Motivation of Children to Study Robotics

  • DOI: 10.1007/978-3-319-97085-1_14
  • Link: https://doi.org/10.1007/978-3-319-97085-1_14
  • Department: Katedra řídicí techniky, Centrum umělé inteligence
  • Annotation:
    While activities aimed to attract the interest of secondary school students in robotics are common, activities designed to promote the interest of younger children are rather sparse. However, younger children from families with parents not working in a technical domain have little chance of being introduced to robotics in an entertaining way. To fill this gap, we propose a two-stage approach by organizing both programming and technology workshops for children by a volunteering group called “wITches”, followed by a robotic competition “Robosoutěž” aimed at children who are already familiar with basic concepts. We describe the proposed approach and investigate the effect of both stages on the number of students, their gender composition and their choice of the field of study. The gathered data indicate that while the second, robotic competition stage is vital in persuading the children to proceed to study technology and robotics, the first, workshop stage is truly crucial to allow them to enter the field at all. In particular, for more than 70% of the participants, the workshops were the first opportunity to be introduced to robotics.

Vision techniques for on-board detection, following and mapping of moving targets

  • DOI: 10.1002/rob.21850
  • Link: https://doi.org/10.1002/rob.21850
  • Department: Katedra kybernetiky, Centrum umělé inteligence, Multirobotické systémy
  • Annotation:
    This article presents computer vision modules of a multi-unmanned aerial vehicle (UAV) system, which scored gold, silver, and bronze medals at the Mohamed bin Zayed International Robotics Challenge (MBZIRC) 2017. This autonomous system, which was running completely on-board and in real-time, had to address two complex tasks in challenging outdoor conditions. In the first task, an autonomous UAV had to find, track, and land on a human-driven car moving at 15 km/h on a figure-eight-shaped track. During the second task, a group of three UAVs had to find small colored objects in a wide area, pick them up, and deliver them into a specified drop-off zone. The computer vision modules presented here achieved computationally efficient detection, accurate localization, robust velocity estimation, and reliable future position prediction of both the colored objects and the car. These properties had to be achieved in adverse outdoor environments with changing light conditions. Lighting varied from intense direct sunlight with sharp shadows cast over the objects by the UAV itself, to reduced visibility caused by overcast skies, dust and sand in the air. The results presented in this paper demonstrate good performance of the modules both during testing, which took place in the harsh desert environment of the central area of the United Arab Emirates, as well as during the contest, which took place at a racing complex in the urban, near-sea location of Abu Dhabi. The stability and reliability of these modules contributed to the overall result of the contest, where our multi-UAV system outperformed teams from world-leading robotic laboratories in two challenging scenarios.

Warped Hypertime Representations for Long-term Autonomy of Mobile Robots

  • Authors: doc. Ing. Tomáš Krajník, Ph.D., Vintr, T., Molina, S., Fentanes, J.P., Cielniak, G., Mozos, O.M., Broughton, G., Duckett, T.
  • Publication: IEEE Robotics and Automation Letters. 2019, 4(4), 3310-3317. ISSN 2377-3766.
  • Year: 2019
  • DOI: 10.1109/LRA.2019.2926682
  • Link: https://doi.org/10.1109/LRA.2019.2926682
  • Department: Centrum umělé inteligence
  • Annotation:
    This paper presents a novel method for introducing time into discrete and continuous spatial representations used in mobile robotics, by modelling long-term, pseudo-periodic variations caused by human activities or natural processes. Unlike previous approaches, the proposed method does not treat time and space separately, and its continuous nature respects both the temporal and spatial continuity of the modeled phenomena. The key idea is to extend the spatial model with a set of wrapped time dimensions that represent the periodicities of the observed events. By performing clustering over this extended representation, we obtain a model that allows the prediction of probabilistic distributions of future states and events in both discrete and continuous spatial representations. We apply the proposed algorithm to several long-term datasets acquired by mobile robots and show that the method enables a robot to predict future states of representations with different dimensions. The experiments further show that the method achieves more accurate predictions than the previous state of the art.
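A minimal sketch of the wrapped-time idea above: each timestamp is projected onto one cosine/sine pair per modelled periodicity, so that periodically recurring events end up close together in the extended space. The daily and weekly periods used below are examples; in the method itself the relevant periodicities are identified from the data and the spatial coordinates are appended before clustering.

```python
import numpy as np

def hypertime_projection(t_seconds, periods=(86400, 604800)):
    """Project timestamps onto wrapped (circular) time dimensions: one cosine/sine
    pair per modelled periodicity (here a day and a week)."""
    t = np.asarray(t_seconds, dtype=float)[..., None]
    phase = 2.0 * np.pi * t / np.asarray(periods, dtype=float)
    return np.concatenate([np.cos(phase), np.sin(phase)], axis=-1)

# Two observations made at 08:00 on consecutive days share identical daily
# coordinates, so a clustering algorithm naturally groups them together.
print(np.round(hypertime_projection([8 * 3600, 8 * 3600 + 86400]), 3))
```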

A Practical Representation of Time for the Human Behaviour Modelling

  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    This paper proposes a representation of the time domain intended for mobile robots which operate in human-populated environments. The method aims to identify and efficiently represent patterns of human habits, which are driven by periodic processes, such as the daily cycle. The core idea is to identify periodicities in the data observed by the robot and to project the time onto a series of circles, which represent the identified periodicities. This representation ensures that the Euclidean distance of periodically-occurring events is low even if these events are temporally distant. This property allows clustering of events that occur at similar times of day or on similar days of the week, etc. In the presented use cases, we demonstrate that the method allows for temporally dependent anomaly detection and that it can predict the future presence of people across large areas. The experiments indicate that the method's detection reliability and prediction accuracy outperform state-of-the-art tools used in statistical analysis for autonomous robots.

Artificial Intelligence for Long-Term Robot Autonomy: A Survey

  • Autoři: Kunze, L., Hawes, N., Duckett, T., Hanheide, M., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: IEEE Robotics and Automation Letters. 2018, 3(4), 4023-4030. ISSN 2377-3766.
  • Rok: 2018
  • DOI: 10.1109/LRA.2018.2860628
  • Odkaz: https://doi.org/10.1109/LRA.2018.2860628
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    Autonomous systems will play an essential role in many applications across diverse domains including space, marine, air, field, road, and service robotics. They will assist us in our daily routines and perform dangerous, dirty, and dull tasks. However, enabling robotic systems to perform autonomously in complex, real-world scenarios over extended time periods (i.e., weeks, months, or years) poses many challenges. Some of these have been investigated by subdisciplines of Artificial Intelligence (AI) including navigation and mapping, perception, knowledge representation and reasoning, planning, interaction, and learning. The different subdisciplines have developed techniques that, when re-integrated within an autonomous system, can enable robots to operate effectively in complex, long-term scenarios. In this letter, we survey and discuss AI techniques as "enablers" for long-term robot autonomy, current progress in integrating these techniques within long-running robotic systems, and the future challenges and opportunities for AI in long-term autonomy.

Localization, Grasping, and Transportation of Magnetic Objects by a team of MAVs in Challenging Desert like Environments

  • DOI: 10.1109/LRA.2018.2800121
  • Odkaz: https://doi.org/10.1109/LRA.2018.2800121
  • Pracoviště: Centrum umělé inteligence, Multirobotické systémy
  • Anotace:
    Autonomous Micro Aerial Vehicles have the potential to assist in real life tasks involving grasping and transportation, but not before solving several difficult research challenges. In this work, we address the design, control, estimation, and planning problems for cooperative localization, grasping, and transportation of objects in challenging outdoor scenarios. We demonstrate an autonomous team of MAVs able to plan safe trajectories for manipulation of ferrous objects, while guaranteeing inter-robot collision avoidance and automatically creating a map of the objects in the environment. Our solution is predominantly distributed, allowing the team to pick and transport ferrous disks to a final destination without collisions. This result is achieved using a new magnetic gripper with a novel feedback approach, enabling the detection of successful grasping. The gripper design and all the components to build a platform are clearly provided as open-source hardware for reuse by the community. Finally, the proposed solution is validated through experimental results where difficulties include inconsistent wind, uneven terrain, and sandy conditions.

Modelling and predicting rhythmic flow patterns in dynamic environments

  • Autoři: Molina, S., Cielniak, G., doc. Ing. Tomáš Krajník, Ph.D., Duckett, T.
  • Publikace: Towards Autonomous Robotic Systems. Basel: Springer, 2018. p. 135-146. vol. LNAI 10965. ISSN 0302-9743. ISBN 978-3-319-96727-1.
  • Rok: 2018
  • DOI: 10.1007/978-3-319-96728-8_12
  • Odkaz: https://doi.org/10.1007/978-3-319-96728-8_12
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    We present a time-dependent probabilistic map able to model and predict flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction on a grid-based map by a set of harmonic functions, which efficiently capture long-term (minutes to weeks) variations of crowd movements over time. The evaluation, performed on data from two real environments, shows that the proposed model enables prediction of human movement patterns in the future. Potential applications include human-aware motion planning, improving the efficiency and safety of robot navigation.
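    To make the idea of harmonic temporal modelling concrete, the sketch below (an illustration under assumed daily and weekly periods, not the authors' code) fits a truncated Fourier series to binary observations of one motion-direction bin of a single grid cell and evaluates its likelihood at an arbitrary future time.

      import numpy as np

      def fit_harmonics(times, observed, periods=(86400.0, 7 * 86400.0)):
          """times [s]; observed: 0/1 flags of one direction bin; least-squares Fourier fit."""
          t = np.asarray(times, dtype=float)
          cols = [np.ones_like(t)]
          for T in periods:
              w = 2.0 * np.pi * t / T
              cols += [np.cos(w), np.sin(w)]
          A = np.stack(cols, axis=1)
          coeffs, *_ = np.linalg.lstsq(A, np.asarray(observed, dtype=float), rcond=None)
          return coeffs, periods

      def predict_direction_likelihood(t, coeffs, periods):
          """Evaluate the fitted series at time t [s], clipped to a valid probability."""
          feats = [1.0]
          for T in periods:
              w = 2.0 * np.pi * t / T
              feats += [np.cos(w), np.sin(w)]
          return float(np.clip(np.dot(coeffs, feats), 0.0, 1.0))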

Navigation Without Localisation: Reliable Teach and Repeat Based on the Convergence Theorem

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Majer, F., Halodová, L., Vintr, T.
  • Publikace: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). New York: IEEE Press, 2018. p. 1657-1664. ISSN 2153-0866. ISBN 978-1-5386-8094-0.
  • Rok: 2018
  • DOI: 10.1109/IROS.2018.8593803
  • Odkaz: https://doi.org/10.1109/IROS.2018.8593803
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model, which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot which repeats a previously taught path can simply “replay” the learned velocities, while using its camera information only to correct its heading relative to the intended path. To support our claim, we establish a position error model of a robot which traverses a taught path by only correcting its heading. Then, we outline a mathematical proof which shows that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, it does not require camera calibration, and it can learn and autonomously traverse arbitrarily-shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally-occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at www.github.com/gestom/stroll_bearnav.
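    The repeat phase can be summarised by a very small control loop. The sketch below is only an illustration of the bearing-only principle (the robot interface, the gain value and the helper names are hypothetical); the actual system is available at www.github.com/gestom/stroll_bearnav.

      import numpy as np

      def heading_correction(map_kp_x, cur_kp_x, gain=0.002):
          """map_kp_x, cur_kp_x: x-coordinates [px] of matched features (taught map vs. current view)."""
          if len(map_kp_x) == 0:
              return 0.0                                  # no matches: keep the taught heading
          shift = np.median(np.asarray(cur_kp_x) - np.asarray(map_kp_x))
          return -gain * shift                            # steer so that the image shift shrinks

      def repeat_step(taught_velocities, matches, robot):
          """taught_velocities: (forward, angular) recorded at the current distance along the path."""
          v, w_taught = taught_velocities
          w = w_taught + heading_correction(*matches)     # bearing-only correction
          robot.set_velocity(v, w)                        # hypothetical robot interface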

Perpetual Robot Swarm: Long-Term Autonomy of Mobile Robots Using On-the-fly Inductive Charging

  • Autoři: Arvin, F, Watson, S, Turgut, AE, Espinosa, Jose, doc. Ing. Tomáš Krajník, Ph.D., Lennox, B
  • Publikace: Journal of Intelligent and Robotic Systems. 2018, 88(273), 395-412. ISSN 0921-0296.
  • Rok: 2018
  • DOI: 10.1007/s10846-017-0673-8
  • Odkaz: https://doi.org/10.1007/s10846-017-0673-8
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    Swarm robotics studies the intelligent collective behaviour emerging from long-term interactions of a large number of simple robots. However, maintaining a large number of robots operational for long time periods requires significant battery capacity, which is an issue for small robots. Therefore, re-charging systems such as automated battery-swapping stations have been implemented. These systems require that the robots interrupt, albeit shortly, their activity, which influences the swarm behaviour. In this paper, a low-cost on-the-fly wireless charging system, composed of several charging cells, is proposed for use in swarm robotic research studies. To determine the system's ability to support perpetual swarm operation, a probabilistic model that takes into account the swarm size, robot behaviour and charging area configuration is outlined. Based on the model, a prototype system with 12 charging cells and a small mobile robot, Mona, was developed. A series of long-term experiments with different arenas and behavioural configurations indicated the model's accuracy and demonstrated the system's ability to support perpetual operation of a multi-robot system.

phi Clust: Pheromone-based Aggregation for Robotic Swarms

  • Autoři: Arvin, F., Turgut, A.E., doc. Ing. Tomáš Krajník, Ph.D., Rahimi, S., Okay, I.E., Yue, S., Watson, S., Lennox, B.
  • Publikace: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). New York: IEEE Press, 2018. p. 4288-4294. ISSN 2153-0866. ISBN 978-1-5386-8094-0.
  • Rok: 2018
  • DOI: 10.1109/IROS.2018.8593961
  • Odkaz: https://doi.org/10.1109/IROS.2018.8593961
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    In this paper, we propose a pheromone-based aggregation method built on the state-of-the-art BEECLUST algorithm. We investigate the impact of pheromone-based communication on the efficiency with which robotic swarms locate and aggregate at areas with a given cue. In particular, we evaluate the impact of pheromone evaporation and diffusion on the time required for the swarm to aggregate. In a series of simulated and real-world evaluation trials, we demonstrate that augmenting the BEECLUST method with an artificial pheromone results in faster aggregation times.

3D-Vision Based Detection, Localisation and Sizing of Broccoli Heads in the Field

  • Autoři: Kusumam, K, doc. Ing. Tomáš Krajník, Ph.D., Pearson, S., Duckett, Tom, Cielniak, Grzegorz
  • Publikace: Journal of Field Robotics. 2017, 34(8), 1505-1518. ISSN 1556-4959.
  • Rok: 2017
  • DOI: 10.1002/rob.21726
  • Odkaz: https://doi.org/10.1002/rob.21726
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    This paper describes a 3D vision system for robotic harvesting of broccoli using low-cost RGB-D sensors, which was developed and evaluated using sensory data collected under real-world field conditions in both the UK and Spain. The presented method addresses the tasks of detecting mature broccoli heads in the field and providing their 3D locations relative to the vehicle. The paper evaluates different 3D features, machine learning and temporal filtering methods for detection of broccoli heads. Our experiments show that a combination of Viewpoint Feature Histograms, Support Vector Machine classifier and a temporal filter to track the detected heads results in a system that detects broccoli heads with high precision. We also show that the temporal filtering can be used to generate a 3D map of the broccoli head positions in the field. Additionally we present methods for automatically estimating the size of the broccoli heads, to determine when a head is ready for harvest. All of the methods were evaluated using ground-truth data from both the UK and Spain, which we also make available to the research community for subsequent algorithm development and result comparison. Cross-validation of the system trained on the UK dataset on the Spanish dataset, and vice versa, indicated good generalisation capabilities of the system, confirming the strong potential of low-cost 3D imaging for commercial broccoli harvesting.
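    A minimal sketch of the learning part of such a pipeline is shown below; it assumes the Viewpoint Feature Histogram descriptors have already been computed for candidate 3D clusters (e.g. with PCL) and uses an off-the-shelf SVM plus a simple consecutive-detection rule as stand-ins for the paper's classifier and temporal filter.

      import numpy as np
      from sklearn.svm import SVC

      def train_head_classifier(vfh_descriptors, labels):
          """vfh_descriptors: (N, D) VFH vectors of candidate clusters; labels: 1 = head, 0 = other."""
          return SVC(kernel="rbf", probability=True).fit(np.asarray(vfh_descriptors),
                                                         np.asarray(labels))

      def temporally_confirmed(track_scores, threshold=0.5, min_hits=3):
          """Accept a tracked candidate only after several consecutive confident detections."""
          recent = track_scores[-min_hits:]
          return len(recent) == min_hits and all(s > threshold for s in recent)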

A Precise Teach and Repeat Visual Navigation System Based on the Convergence Theorem

  • Autoři: Majer, F., Halodová, L., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: STUDENT CONFERENCE ON PLANNING IN ARTIFICIAL INTELLIGENCE AND ROBOTICS (PAIRS). Praha: České vysoké učení technické v Praze, 2017.
  • Rok: 2017
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    We present a simple teach-and-repeat visual navigation method robust to appearance changes induced by varying illumination and naturally-occurring environment changes. The method is computationally efficient, it does not require camera calibration and it can learn and autonomously traverse arbitrarily-shaped paths. During the teaching phase, where the robot is driven by a human operator, the robot stores its velocities and the image features visible from its on-board camera. During autonomous navigation, the method does not perform explicit robot localisation in 2D/3D space, but simply replays the velocities that it learned during the teaching phase, while correcting its heading relative to the path based on its camera data. The experiments performed indicate that the proposed navigation system corrects position errors of the robot as it moves along the path. Therefore, the robot can repeatedly drive along the desired path, which was previously taught by the human operator. The presented system, which is based on a method that won the RoboTour challenge in 2008 and 2009, is provided as open source at www.github.com/gestom/stroll_bearnav.

A Versatile High-Performance Visual Fiducial Marker Detection System with Scalable Identity Encoding

  • Autoři: Lightbody, P., doc. Ing. Tomáš Krajník, Ph.D., Hanheide, M.
  • Publikace: Proceedings of the Symposium on Applied Computing. New York: ACM, 2017. p. 276-282. ISBN 978-1-4503-4486-9.
  • Rok: 2017
  • DOI: 10.1145/3019612.3019709
  • Odkaz: https://doi.org/10.1145/3019612.3019709
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    Fiducial markers have a wide field of applications in robotics, ranging from external localisation of single robots or robotic swarms, over self-localisation in marker-augmented environments, to simplifying perception by tagging objects in a robot’s surroundings. We propose a new family of circular markers allowing for computationally efficient detection, identification and full 3D position estimation. A key concept of our system is the separation of the detection and identification steps, where the first step is based on a computationally efficient circular marker detection, and the identification step is based on an open-ended ‘Necklace code’, which allows for a theoretically infinite number of individually identifiable markers. The experimental evaluation of the system on a real robot indicates that while the proposed algorithm achieves similar accuracy to other state-of-the-art methods, it is faster by two orders of magnitude and it can detect markers from longer distances.
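    The ‘Necklace code’ idea can be illustrated in a few lines: because the bit sequence sampled around a circular marker is only defined up to rotation, the identity can be taken as a rotation-invariant canonical form of that sequence. The snippet below is a simplified illustration of this principle, not the released implementation.

      def necklace_id(bits):
          """bits: list of 0/1 samples read around the marker's inner ring (rotation-ambiguous)."""
          n = len(bits)
          rotations = (tuple(bits[i:] + bits[:i]) for i in range(n))
          canonical = min(rotations)                           # rotation-invariant canonical form
          return sum(b << i for i, b in enumerate(canonical))  # integer identity of the marker

      # e.g. necklace_id([0, 1, 1, 0]) == necklace_id([1, 0, 0, 1])  -> the same marker ID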

An Efficient Visual Fiducial Localisation System

  • Autoři: Lightbody, P., doc. Ing. Tomáš Krajník, Ph.D., Hanheide, M.
  • Publikace: ACM SIGAPP Applied Computing Review. 2017, 17(3), 28-38. ISSN 1559-6915.
  • Rok: 2017
  • DOI: 10.1145/3019612.3019709
  • Odkaz: https://doi.org/10.1145/3019612.3019709
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    With use cases that range from external localisation of single robots or robotic swarms to self-localisation in marker-augmented environments and simplifying perception by tagging objects in a robot’s surroundings, fiducial markers have a wide field of application in the robotic world. We propose a new family of circular markers which allow for computationally efficient detection, tracking and identification as well as full 6D position estimation. At the core of the proposed approach lies the separation of the detection and identification steps, with the former using computationally efficient circular marker detection and the latter utilising an open-ended ‘necklace encoding’, allowing scalability to a large number of individual markers. While the proposed algorithm achieves similar accuracy to other state-of-the-art methods, its experimental evaluation in realistic conditions demonstrates that it can detect markers from larger distances while being up to two orders of magnitude faster than other state-of-the-art fiducial marker detection methods. In addition, the entire system is available as an open-source package at https://github.com/LCAS/whycon.

Exposure Setting for Visual Navigation of Mobile Robots

  • Autoři: Halodová, L., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: STUDENT CONFERENCE ON PLANNING IN ARTIFICIAL INTELLIGENCE AND ROBOTICS (PAIRS). Praha: České vysoké učení technické v Praze, 2017.
  • Rok: 2017
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    This paper is concerned with adaptive exposure control for visual teach-and-repeat navigation systems based on point image features. The robustness and accuracy of these systems rely on their ability to associate the currently visible image features with those in their maps, and thus they depend on the quality of the images from which these features are extracted. In this paper, we evaluate the effect of the camera exposure setting and the Hessian threshold of the feature detectors on the quality of feature correspondences between images taken from a similar location under a slightly different viewpoint. In the experiments performed, we show that setting these parameters so as to maximise the number of extracted features causes a rapid decrease in the ability to associate the features correctly. Thus, we argue that one should set both the exposure and the Hessian threshold so that the feature detectors provide a lower number of features, as these tend to provide better information for visual navigation methods.

Fremen: Frequency map enhancement for long-term mobile robot autonomy in changing environments

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Fentanes, J.P., Santos, J.M., Duckett, T.
  • Publikace: IEEE Transactions on Robotics. 2017, 33(4), 964-977. ISSN 1552-3098.
  • Rok: 2017
  • DOI: 10.1109/TRO.2017.2665664
  • Odkaz: https://doi.org/10.1109/TRO.2017.2665664
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    We present a new approach to long-term mobile robot mapping in dynamic indoor environments. Unlike traditional world models that are tailored to represent static scenes, our approach explicitly models environmental dynamics. We assume that some of the hidden processes that influence the dynamic environment states are periodic and model the uncertainty of the estimated state variables by their frequency spectra. The spectral model can represent arbitrary timescales of environment dynamics with low memory requirements. Transformation of the spectral model to the time domain allows for the prediction of the future environment states, which improves the robot’s long-term performance in dynamic environments. Experiments performed over time periods of months to years demonstrate that the approach can efficiently represent large numbers of observations and reliably predict future environment states. The experiments indicate that the model’s predictive capabilities improve mobile robot localisation and navigation in changing environments.
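    The following sketch illustrates the principle on a single binary state (a simplified illustration with assumed candidate periods, not the released FreMEn code): the state's probability is modelled as its long-term mean plus the few most prominent spectral components of the observation sequence, which can then be evaluated at any future time.

      import numpy as np

      def build_spectral_model(times, states, candidate_periods, n_components=2):
          """times [s], states in {0, 1}; candidate_periods: e.g. daily/weekly periods [s]."""
          t = np.asarray(times, dtype=float)
          s = np.asarray(states, dtype=float)
          mean = s.mean()
          comps = []
          for T in candidate_periods:
              c = np.mean((s - mean) * np.exp(-2j * np.pi * t / T))   # spectral coefficient
              comps.append((abs(c), T, c))
          comps.sort(key=lambda comp: comp[0], reverse=True)          # keep the strongest periodicities
          return mean, comps[:n_components]

      def predict_state_probability(t, model):
          """Reconstruct the state probability at time t [s] from the retained components."""
          mean, comps = model
          p = mean + sum(2.0 * (c * np.exp(2j * np.pi * t / T)).real for _, T, c in comps)
          return float(np.clip(p, 0.0, 1.0))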

Spatiotemporal Models for Motion Planning in Human Populated Environments

  • Autoři: Vintr, T., Molina, S., Cielniak, G., Duckett, T., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: STUDENT CONFERENCE ON PLANNING IN ARTIFICIAL INTELLIGENCE AND ROBOTICS (PAIRS). Praha: České vysoké učení technické v Praze, 2017.
  • Rok: 2017
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    In this paper we present an effective spatiotemporal model for motion planning, computed in a temporally warped space, the hypertime continuum. Such a model is suitable for robots that are expected to assist humans in their natural environments. The method captures natural periodicities of human behaviour by adding extra time dimensions. The created hyperspace represents the structure of human habits over space and can be analysed using standard analytical methods. We demonstrate the concept using qualitative methods.

The when, where, and how: an adaptive robotic info-terminal for care home residents–a long-term study

  • Autoři: Hanheide, M, Hebesberger, D., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: Proceedings of the 2017 Conference on Human-Robot Interaction. New York: ACM, 2017. p. 341-349. ISSN 2167-2148. ISBN 978-1-4503-4336-7.
  • Rok: 2017
  • DOI: 10.1145/2909824.3020228
  • Odkaz: https://doi.org/10.1145/2909824.3020228
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    Adapting to users’ intentions is a key requirement for autonomous robots in general, and in care settings in particular. In this paper, a comprehensive long-term study of a mobile robot providing information services to residents, visitors, and staff of a care home is presented, with a focus on adapting to when and where the robot should offer its services to best accommodate the users’ needs. Rather than following a fixed schedule, the presented system takes the opportunity of long-term deployment to explore the space of possible interactions while concurrently exploiting the learned model to provide better services. In order to provide effective services to users in a care home, however, not only the when and where are relevant, but also how the information is provided and accessed. Hence, the usability of the deployed system is also studied specifically, in order to provide a comprehensive overall assessment of a robotic info-terminal implementation in a care setting. Our results back our hypotheses: (i) that learning a spatio-temporal model of users’ intentions improves the efficiency and usefulness of the system, and (ii) that the specific information sought is indeed dependent on the location at which the info-terminal is offered.

Towards Automated Benchmarking of Robotic Experiments

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Hanheide, M., Vintr, T., Kusumam, K., Duckett, T.
  • Publikace: Reproducible Research in Robotics: Current Status and Road Ahead (Workshop at ICRA 2017). Castellon: Department of Engineering and Computer Science, Universitat Jaume, 2017.
  • Rok: 2017
  • Pracoviště: Centrum umělé inteligence
  • Anotace:
    We present a system for automated benchmarking of robotics experiments. The system is based on open-source, freely-available tools commonly used in software development. While it allows for a seamless and fair comparison of a newly developed method with the original one, it does not require disclosure of either the original code or the evaluation datasets. Apart from the description of the system, we provide two use cases in which researchers were able to compare their methods to the original ones in a matter of minutes, without running the method on their own hardware.

Hybrid Vision-Based Navigation for Mobile Robots in Mixed Indoor/Outdoor Environments

  • Autoři: De Cristóforis, P., Nitsche, M., doc. Ing. Tomáš Krajník, Ph.D., Pire, T., Mejail, M.
  • Publikace: Pattern Recognition Letters. 2015, 53(1), 118-128. ISSN 0167-8655.
  • Rok: 2015
  • DOI: 10.1016/j.patrec.2014.10.010
  • Odkaz: https://doi.org/10.1016/j.patrec.2014.10.010
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper we present a vision-based navigation system for mobile robots equipped with a single, off-the-shelf camera in mixed indoor/outdoor environments. A hybrid approach is proposed, based on the teach-and-replay technique, which combines a path-following and a feature-based navigation algorithm. We describe the navigation algorithms and show that both of them correct the robot's lateral displacement from the intended path. We then argue that even though neither of the methods explicitly estimates the robot's position, the heading corrections themselves keep the robot's position error bounded. We show that the combination of the methods outperforms the pure feature-based approach in terms of localization precision, and that this combination reduces the map size and simplifies the learning phase. Experiments in mixed indoor/outdoor environments were carried out with a wheeled and a tracked mobile robot in order to demonstrate the validity and the benefits of the hybrid approach.

A Cognitive Architecture for Modular and Self-Reconfigurable Robots

  • DOI: 10.1109/SysCon.2014.6819298
  • Odkaz: https://doi.org/10.1109/SysCon.2014.6819298
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The field of reconfigurable swarms of modular robots has reached a level of performance that allows applications in diverse fields characterized by human support (e.g. exploratory and rescue tasks) or even in human-less environments. The main goal of the EC project REPLICATOR [1] is the development and deployment of a heterogeneous swarm of modular robots that are able to switch autonomously from a swarm of robots into different organism forms, to reconfigure these forms, and finally to revert to the original swarm mode [2]. To achieve these goals, three different types of robot modules have been developed and an extensive suite of embodied distributed cognition methods implemented [3]. The key methodological aspects address principles of self-organization. To tackle this ambitious approach, a Grand Challenge of autonomous operation of 100 robots for 100 days ("100 days, 100 robots") has been proposed. Moreover, a framework coined the SOS-cycle (SOS: Swarm-Organism-Swarm) has been developed. It controls the transitions between internal phases that enable the whole system to alternate between the different modes mentioned above. This paper describes the vision of the Grand Challenge and the implementation and results of the different phases of the SOS-cycle.

A Practical Multirobot Localization System

  • DOI: 10.1007/s10846-014-0041-x
  • Odkaz: https://doi.org/10.1007/s10846-014-0041-x
  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method’s mathematical model, which allows estimation of the expected localization precision, area of coverage, and processing speed from the camera’s intrinsic parameters and the hardware’s processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotic problems.
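    A heavily simplified illustration of the detection step is sketched below (the released implementation at http://purl.org/robotics/whycon uses a dedicated flood-fill segmentation with sub-pixel refinement; the area ratio, tolerances and OpenCV-based approach here are assumptions for illustration): it looks for pairs of nested contours whose centres coincide and whose area ratio roughly matches the printed roundel.

      import cv2
      import numpy as np

      EXPECTED_AREA_RATIO = 0.3       # inner/outer area of the printed roundel (assumed value)

      def detect_roundels(gray):
          """Return (x, y, major, minor) of black rings with a concentric white inner disc."""
          _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                                 cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
          detections = []
          if hierarchy is None:
              return detections
          for i, node in enumerate(hierarchy[0]):              # node = [next, prev, child, parent]
              child = node[2]
              if child < 0 or len(contours[i]) < 5 or len(contours[child]) < 5:
                  continue                                     # need a nested contour pair
              (ox, oy), (ow, oh), _ = cv2.fitEllipse(contours[i])
              (ix, iy), (iw, ih), _ = cv2.fitEllipse(contours[child])
              area_ratio = (iw * ih) / (ow * oh + 1e-9)
              concentric = np.hypot(ox - ix, oy - iy) < 0.1 * max(ow, oh)
              if concentric and abs(area_ratio - EXPECTED_AREA_RATIO) < 0.15:
                  detections.append((ox, oy, ow, oh))          # sub-pixel centre and axes
          return detections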

Coordination and Navigation of Heterogeneous MAV–UGV Formations Localized by a ‘hawk-eye’-like Approach Under a Model Predictive Control Scheme

  • DOI: 10.1177/0278364914530482
  • Odkaz: https://doi.org/10.1177/0278364914530482
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    An approach for coordination and control of 3D heterogeneous formations of unmanned aerial and ground vehicles under hawk-eye like relative localization is presented in this paper. The core of the method lies in the use of visual top-view feedback from flying robots for the stabilization of the entire group in a leader-follower formation. We formulate a novel Model Predictive Control (MPC) based methodology for guiding the formation. The method is employed to solve the trajectory planning and control of a virtual leader into a desired target region. In addition, the method is used for keeping the following vehicles in the desired shape of the group. The approach is designed to ensure direct visibility between aerial and ground vehicles, which is crucial for the formation stabilization using the hawk-eye like approach. The presented system is verified in numerous experiments inspired by search and rescue applications, where the formation acts as a searching phalanx. In addition, stability and convergence analyses are provided to explicitly determine the limitations of the method in real-world applications.

Fault-Tolerant Formation Driving Mechanism Designed for Heterogeneous MAVs-UGVs Groups

  • DOI: 10.1007/s10846-013-9976-6
  • Odkaz: https://doi.org/10.1007/s10846-013-9976-6
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    A fault-tolerant method for stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the necessity of a precise external localization. Instead, the proposed method relies on a top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on a simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method consists of representing the entire 3D formation as a convex hull projected along a desired path that has to be followed by the group. Such an approach provides a collision-free solution and respects the requirements of direct visibility between the team members. The uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.

FPGA-Based Module for SURF Extraction

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Šváb, J., Pedre, S., Čížek, P., Přeučil, L.
  • Publikace: Machine Vision and Applications. 2014, 25(3), 787-800. ISSN 0932-8092.
  • Rok: 2014
  • DOI: 10.1007/s00138-014-0599-0
  • Odkaz: https://doi.org/10.1007/s00138-014-0599-0
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    We present a complete hardware and software solution of an FPGA-based computer vision embedded module capable of carrying out the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that allows running programs specifically tailored for particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive process, the interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU, while being faster and consuming significantly less power and space. Thus, it allows the SURF algorithm to be used in applications with power and space constraints, such as autonomous navigation of small mobile robots.

Joint Localization of Pursuit Quadcopters and Target Using Monocular Cues

  • Autoři: Basit, A., Qureshi, W.S., Dailey, M.N., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: Journal of Intelligent and Robotic Systems. 2014, 2014(Issue 3-4), 613-630. ISSN 0921-0296.
  • Rok: 2014
  • DOI: 10.1007/s10846-014-0081-2
  • Odkaz: https://doi.org/10.1007/s10846-014-0081-2
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Pursuit robots (autonomous robots tasked with tracking and pursuing a moving target) require accurate tracking of the target’s position over time. One possibly effective pursuit platform is a quadcopter equipped with basic sensors and a monocular camera. However, the combined noise in the quadcopter’s sensors causes large disturbances in the target’s 3D position estimate. To solve this problem, in this paper, we propose a novel method for joint localization of a quadcopter pursuer with a monocular camera and an arbitrary target. Our method localizes both the pursuer and target with respect to a common reference frame. The joint localization method fuses the quadcopter’s kinematics and the target’s dynamics in a joint state space model. We show that predicting and correcting pursuer and target trajectories simultaneously produces better results than standard approaches to estimating relative target trajectories in a 3D coordinate system. Our method also comprises a computationally efficient visual tracking method capable of redetecting a temporarily lost target. The efficiency of the proposed method is demonstrated by a series of experiments with a real quadcopter pursuing a human. The results show that the visual tracker can deal effectively with target occlusions and that joint localization outperforms standard localization methods.

Accelerating Embedded Image Processing for Real Time: A Case Study

  • Autoři: Pedre, S., doc. Ing. Tomáš Krajník, Ph.D., Todorovich, E., Borensztejn, P.
  • Publikace: Journal of Real-Time Image Processing. 2013, 2013(1), 1-26. ISSN 1861-8200.
  • Rok: 2013
  • DOI: 10.1007/s11554-013-0353-2
  • Odkaz: https://doi.org/10.1007/s11554-013-0353-2
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Many image processing applications need real-time performance, while having restrictions of size, weight and power consumption. Common solutions, including hardware/software co-designs, are based on Field Programmable Gate Arrays (FPGAs). Their main drawback is long development time. In this work, a co-design methodology for processor-centric embedded systems with hardware acceleration using FPGAs is proposed. The goal of this methodology is to achieve real-time embedded solutions, using hardware acceleration, but achieving development time similar to that of software projects. Well established methodologies, techniques and languages from the software domain—such as Object-Oriented Paradigm design, Unified Modelling Language, and multithreading programming—are applied; and semiautomatic C-to-HDL translation tools and methods are used and compared. The methodology is applied to achieve an embedded implementation of a global vision algorithm for the localization of multiple robots in an e-learning robotic laboratory. The algorithm is specifically developed to work reliably 24/7 and to detect the robot’s positions and headings even in the presence of partial occlusions and varying lighting conditions expectable in a normal classroom. The co-designed implementation of this algorithm processes 1,600 × 1,200 pixel images at a rate of 32 fps with an estimated energy consumption of 17 mJ per frame. It achieves a 16× acceleration and 92 % energy saving, which compares favorably with the most optimized embedded software solutions. This case study shows the usefulness of the proposed methodology for embedded real-time image processing applications.

External Localization System for Mobile Robotics

  • DOI: 10.1109/ICAR.2013.6766520
  • Odkaz: https://doi.org/10.1109/ICAR.2013.6766520
  • Pracoviště: Katedra počítačů
  • Anotace:
    We present fast and precise vision-based software intended for multiple-robot localization. The core component of the proposed localization system is an efficient method for black-and-white circular pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and a low-cost camera, its core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. We propose a mathematical model of the method that allows calculation of its precision, area of coverage, and processing speed from the camera’s intrinsic parameters and the hardware’s processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also publish its source code, so it can be used as an enabling technology for various mobile robotics problems.

Low-Cost Embedded System for Relative Localization in Robotic Swarms

  • DOI: 10.1109/ICRA.2013.6630694
  • Odkaz: https://doi.org/10.1109/ICRA.2013.6630694
  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    In this paper, we present a small, light-weight, low-cost, fast and reliable system designed to satisfy the requirements of relative localization within a swarm of micro aerial vehicles. The core of the proposed solution is based on off-the-shelf components, consisting of the Caspa camera module and the Gumstix Overo board, accompanied by an efficient image processing method for detecting black and white circular patterns. Although the idea of the roundel recognition is simple, the developed system provides reliable and fast estimation of the relative position of the pattern at up to 30 fps using the full resolution of the Caspa camera. Thus, the system is suited to meet the requirements of vision-based stabilization of a robotic swarm. The intent of this paper is to present the developed system as an enabling technology for various robotic tasks.

Navigation, Localization and Stabilization of Formations of Unmanned Aerial and Ground Vehicles

  • DOI: 10.1109/ICUAS.2013.6564767
  • Odkaz: https://doi.org/10.1109/ICUAS.2013.6564767
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    A leader-follower formation driving algorithm developed for the control of heterogeneous groups of unmanned micro aerial and ground vehicles stabilized under a top-view relative localization is presented in this paper. The core of the proposed method lies in a novel avoidance function, in which the entire 3D formation is represented by a convex hull projected along a desired path to be followed by the group. Such a representation of the formation provides collision-free trajectories of the robots and respects the requirements of direct visibility between the team members in environments with both static and dynamic obstacles, which is crucial for the top-view localization. The algorithm is suited to the use of a simple yet stable vision-based navigation of the group (referred to as GeNav), which, together with the on-board relative localization, enables the deployment of large teams of micro-scale robots in environments without any available global localization system. We formulate a novel Model Predictive Control (MPC) based concept that enables the formation to respond to the changing environment and provides a robust solution with tolerance to team members' failures. The performance of the proposed method is verified by numerical and hardware experiments inspired by reconnaissance and surveillance missions.

Real-Time Monocular Image-Based Path Detection

  • Autoři: De Cristóforis, P., Nitsche, M.A., doc. Ing. Tomáš Krajník, Ph.D., Mejail, M.
  • Publikace: Journal of Real-Time Image Processing. 2013, 2013(1), 27-40. ISSN 1861-8200.
  • Rok: 2013
  • DOI: 10.1007/s11554-013-0356-z
  • Odkaz: https://doi.org/10.1007/s11554-013-0356-z
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this work, we present a new real-time image-based monocular path detection method. It does not require camera calibration and works on semi-structured outdoor paths. The core of the method is based on segmenting images and classifying each super-pixel to infer a contour of navigable space. This method allows a mobile robot equipped with a monocular camera to follow different naturally delimited paths. The contour shape can be used to calculate the forward and steering speed of the robot. To achieve real-time computation necessary for on-board execution in mobile robots, the image segmentation is implemented on a low-power embedded GPU. The validity of our approach has been verified with an image dataset of various outdoor paths as well as with a real mobile robot.

SyRoTek - Distance Teaching of Mobile Robotics

  • DOI: 10.1109/TE.2012.2224867
  • Odkaz: https://doi.org/10.1109/TE.2012.2224867
  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    E-learning is a modern and effective approach for training in various areas and at different levels of education. This paper gives an overview of SyRoTek, an e-learning platform for mobile robotics, artificial intelligence, control engineering and related domains. SyRoTek provides remote access to a set of fully autonomous mobile robots placed in a restricted area with dynamically reconfigurable obstacles, which enables solving a huge variety of problems. Users can control the robots in real time with their own algorithms, as well as analyze the gathered data and observe the activity of the robots through the provided interfaces. The system is currently used for education at the Czech Technical University in Prague and at the University of Buenos Aires, and it is freely accessible to other institutions. In addition to the system overview, the paper presents the experience gained from the actual deployment of the system in teaching activities.

A Co-Design Methodology for Processor-Centric Embedded Systems with Hardware Acceleration Using FPGA

  • Autoři: Pedre, S., doc. Ing. Tomáš Krajník, Ph.D., Todorovich, E., Borensztejn, P.
  • Publikace: Proceedings of the 3rd Southern Programmable Logic Conference. Piscataway: IEEE, 2012, pp. 7-14. ISBN 978-1-4673-0185-5.
  • Rok: 2012
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this work, a co-design flow for processor-centric embedded systems with hardware acceleration using FPGAs is proposed. This flow helps to reduce design effort by raising the abstraction level while not requiring engineers to learn new languages and tools. The whole system is designed using well-established high-level modeling techniques, languages and tools from the software domain; that is, an OOP design approach expressed in UML and implemented in C++. Software coding effort is reduced since the C++ implementation not only provides a golden reference model, but may also be used as part of the final embedded software. Hardware coding effort is also reduced. The modular OOP design helps the engineer find, using profiling tools, the exact methods that need to be accelerated in hardware, preventing useless translations to hardware. Moreover, the two-process structured VHDL design method used for hardware implementation has proven to reduce man-years, code lines and bugs in many major developments. A real-time image processing application for multiple robot localization is presented as a case study. The overall time improvement from the original software solution to the final hardware-accelerated solution is 9.7×, with only a 4% increase in area (143 extra slices). The embedded solution achieved by following the proposed methodology runs 17% faster than on a standard PC, and it is a much smaller, cheaper and less power-consuming solution.

A Simple Visual Navigation System for an UAV

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Nitsche, M., Přeučil, L., Pedre, S., Mejail, M.
  • Publikace: International Multi-Conference on Systems, Signals and Devices. Piscataway: IEEE, 2012, pp. 34. ISBN 978-3-9814766-1-3.
  • Rok: 2012
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    We present a simple and robust monocular camera-based navigation system for an autonomous quadcopter. The method does not require any additional infrastructure like radio beacons, artificial landmarks or GPS and can be easily combined with other navigation methods and algorithms. Its computational complexity is independent of the environment size and it works even when sensing only one landmark at a time, allowing its operation in landmark poor environments. We also describe an FPGA based embedded realization of the method's most computationally demanding phase.

Advanced Methods for UAV Autonomy

  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    This paper presents advanced technologies for Micro Unmanned Aerial Vehicles (µ-UAVs) developed by the Intelligent and Mobile Robotics Group of the Czech Technical University in Prague. These methods, based on artificial intelligence, allow the µ-UAVs to fly fully autonomously with minimal required interaction with an operator. The paper aims to show the applicability of the developed methods in tasks of autonomous navigation, periodic surveillance, inspection, ground unit localization, and mapping. The main intention of this contribution is to demonstrate technologies that can extend the operational deployment of today's µ-UAV systems.

Cooperative Micro UAV-UGV Autonomous Indoor Surveillance

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper, we present a heterogeneous UGV-UAV system cooperatively solving tasks of periodic surveillance in indoor environments. In the proposed scenario, the UGV is equipped with an interactive helipad and acts as a carrier of the UAV. The UAV is a lightweight quad-rotor helicopter equipped with two cameras, which are used to inspect locations inaccessible to the UGV. The paper focuses on the most crucial aspects of the proposed UAV-UGV periodic surveillance: visual navigation, relative localization and autonomous landing, which need to be performed repeatedly. We propose two concepts of mobile helipads employed for the correction of imprecise landings of the UAV. Besides the description of the visual navigation, the relative localization and both helipads, a study of landing performance is provided. The performance of the complete system is demonstrated by an experiment of autonomous periodic surveillance in a changing environment with people present.

Coordination and Navigation of Heterogeneous UAVs-UGVs Teams Localized by a Hawk-Eye Approach

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    A navigation and stabilization scheme for 3D heterogeneous (UAV and UGV) formations acting under a hawk-eye-like relative localization is presented in this paper. We formulate a novel Model Predictive Control (MPC) based concept for formation driving in a leader-follower constellation into a required target region. The formation-to-target-region problem in 3D is solved using the MPC methodology for both: i) the trajectory planning and control of a virtual leader, and ii) the control and stabilization of the followers - UAVs and UGVs. The core of the method lies in a novel avoidance function based on a model of the formation respecting the requirements of direct visibility between the team members in environments with obstacles, which is crucial for the hawk-eye localization.

Hardware/Software Co-design for Real Time Embedded Image Processing: A Case Study

  • Autoři: Pedre, S., doc. Ing. Tomáš Krajník, Ph.D., Todorovich, E., Borensztejn, P.
  • Publikace: Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Heidelberg: Springer, 2012, pp. 599-606. ISSN 0302-9743. ISBN 978-3-642-33274-6. Available from: http://www.springerlink.com/content/p81630266123q2tu/
  • Rok: 2012
  • DOI: 10.1007/978-3-642-33275-3_74
  • Odkaz: https://doi.org/10.1007/978-3-642-33275-3_74
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Many image processing applications need real time performance, while having restrictions of size, weight and power consumption. These include a wide range of embedded systems from remote sensing applications to mobile phones. FPGA-based solutions are common for these applications, their main drawback being long development time. In this work a co-design methodology for processor-centric embedded systems with hardware acceleration using FPGAs is applied to an image processing method for localization of multiple robots. The goal of the methodology is to achieve a real-time embedded solution using hardware acceleration, but with development time similar to software projects. The final embedded co-designed solution processes 1600×1200 pixel images at a rate of 25 fps, achieving a 12.6× acceleration from the original software solution. This solution runs with a comparable speed as up-to-date PC-based systems, and it is smaller, cheaper and demands less power.

Low Cost MAV Platform AR-Drone in Experimental Verifications of Methods for Vision Based Autonomous Navigation

  • DOI: 10.1109/IROS.2012.6386277
  • Odkaz: https://doi.org/10.1109/IROS.2012.6386277
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Several navigation tasks utilizing a low-cost Micro Aerial Vehicle (MAV) platform AR-drone are presented in this paper to show how it can be used in an experimental verification of scientific theories and developed methodologies. An important part of this paper is an attached video showing a set of such experiments. The presented methods rely on visual navigation and localization using on-board cameras of the AR-drone employed in the control feedback. The aim of this paper is to demonstrate flight performance of this platform in real world scenarios of mobile robotics.

On Localization Uncertainty in an Autonomous Inspection

  • DOI: 10.1109/ICRA.2012.6224706
  • Odkaz: https://doi.org/10.1109/ICRA.2012.6224706
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This paper presents a multi-goal path planning framework based on a self-organizing map algorithm and a model of the navigation describing the evolution of the localization error. The framework combines finding a sequence of goal visits with goal-to-goal path planning considering localization uncertainty. The approach is able to deal with local properties of the environment, such as expected visible landmarks usable for the navigation. The local properties affect the performance of the navigation, and therefore the framework can take full advantage of the local information together with the global sequence of goal visits to find a path improving the autonomous navigation. Experimental results in real outdoor and indoor environments indicate that the framework provides paths that effectively decrease the localization uncertainty and thus increase the reliability of the autonomous goal visits.

Pokročilé metody řízení autonomních bezpilotních prostředků

  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    This article presents advanced control methods for unmanned aerial vehicles (UAVs) developed by the Intelligent and Mobile Robotics Group of the Czech Technical University in Prague. These methods, based on artificial intelligence techniques, allow unmanned vehicles to carry out a range of tasks fully autonomously, with only the minimum necessary interaction with an operator. The article gives examples of applications of these methods in tasks of autonomous navigation, inspection, surveillance, localization and mapping. The main aim of the contribution is to demonstrate technologies that make it possible to extend the deployment of small unmanned vehicles in various missions, especially surveillance, reconnaissance and rescue ones.

SyRoTek - systém pro robotickou televýuku a experimentování

  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    This article describes SyRoTek, a platform for distance education in mobile robotics, artificial intelligence, control engineering and related fields, and for carrying out experiments with real robots and sensors. SyRoTek provides access to a group of fully autonomous robots operating in a restricted area with automatically reconfigurable obstacles, which enables solving a variety of tasks. The system was designed with a special focus on long-term and intensive use, so that it is available in 24/7 mode. Users can not only observe and process real data obtained from the sensors, but above all control the robots in real time with their own applications. The system is actively used for research and teaching at the Czech Technical University in Prague and at the Universidad de Buenos Aires. In addition, it is freely accessible to other interested parties, whether individuals or institutions.

Techniques for Modeling Simulation Environments for Modular Robotics

  • DOI: 10.3182/20120215-3-AT-3016.00037
  • Odkaz: https://doi.org/10.3182/20120215-3-AT-3016.00037
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In modular robotics, complex structures can be formed from basic modules to solve tasks which would be difficult for a single robot. The development of techniques for adaptation and evolution of multi-robot organisms is the subject of the Symbrion project. In the project, the bio-inspired evolutionary algorithms are massively simulated prior to running them on real hardware. It is crucial to evolve the behaviors of the robots in a simulation that is close to the real world. Hence, an accurate and efficient representation of an environment in the simulation is needed. Here, the environment is modeled using a set of 3D objects (usually triangle meshes). The robots learn simple motion primitives or complex movement patterns during many runs of the evolution. The learned skills are then used during experiments with real hardware. In this paper, we present methods for building a 3D model of a real arena using a laser rangefinder. The resulting 3D models consist of triangles. They can be constructed at various levels of detail using state-of-the-art methods for 3D reconstruction. We show how the size of the models influences the speed of the simulation.

A Sampling Schema for Rapidly Exploring Random Trees Using a Guiding Path

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper, a novel sampling schema for Rapidly Exploring Random Trees (RRT) is proposed to address the narrow passage issue. The introduced method employs a guiding path to steer the tree growth towards a given goal. The main idea of the proposed approach is to prefer sampling the configuration space C along a given guiding path instead of sampling the whole space. While for a low-dimensional C the guiding path can be found as a geometric path in the robot's workspace, such a path does not provide useful information for efficient sampling of a high-dimensional C. We propose an iterative scaling approach to find a guiding path in such high-dimensional configuration spaces. The approach starts with a geometric model of the robot scaled down to a fraction of its original size, for which a guiding path is found using the RRT algorithm. Then, such a path is iteratively used in the proposed RRT-Path algorithm for a larger robot, up to its original size.
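    A minimal sketch of RRT growth with samples drawn along a guiding path is shown below; it is an illustration of the sampling bias only (the step size, sampling width and collision checker are assumed placeholders) and omits the paper's iterative robot-scaling procedure.

      import random
      import numpy as np

      def sample_near_path(guiding_path, sigma):
          """Draw a random configuration around a randomly chosen waypoint of the guiding path."""
          q = np.asarray(random.choice(guiding_path), dtype=float)
          return q + np.random.normal(0.0, sigma, size=q.shape)

      def rrt_with_guiding_path(q_start, guiding_path, edge_free, step=0.2,
                                sigma=0.3, goal_bias=0.05, iterations=5000):
          """Grow an RRT whose samples are concentrated along the guiding path."""
          goal = np.asarray(guiding_path[-1], dtype=float)
          tree, parents = [np.asarray(q_start, dtype=float)], {0: None}
          for _ in range(iterations):
              q_rand = goal if random.random() < goal_bias else sample_near_path(guiding_path, sigma)
              i_near = int(np.argmin([np.linalg.norm(q - q_rand) for q in tree]))
              direction = q_rand - tree[i_near]
              q_new = tree[i_near] + step * direction / (np.linalg.norm(direction) + 1e-9)
              if edge_free(tree[i_near], q_new):               # user-supplied collision check
                  parents[len(tree)] = i_near
                  tree.append(q_new)
                  if np.linalg.norm(q_new - goal) < step:      # goal region reached
                      break
          return tree, parents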

A TECHNICAL SOLUTION OF A ROBOTIC E-LEARNING SYSTEM IN THE SYROTEK PROJECT

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    SyRoTek (a system for robotic e-learning) is a robotic virtual laboratory being developed at the Czech Technical University in Prague. SyRoTek provides access to real mobile robots placed in an arena with dynamically reconfigurable obstacles, enabling a variety of tasks in the field of mobile robotics and artificial intelligence. The robots are equipped with several sensors, allowing students to realize how robot perception works and how to deal with the uncertainties of the real world. An insight into the technical solution of the SyRoTek project is presented in this paper.

An Omnidirectional Mobile Robot for Large Object Handling

  • Autoři: Mudrová, L., Jahoda, V., Porges, O., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: RESEARCH AND EDUCATION IN ROBOTICS: EUROBOT 2011. Heidelberg: Springer, 2011, pp. 210-220. ISSN 1865-0929. ISBN 978-3-642-21974-0. Available from: http://www.springerlink.com/content/u2r6106584758800/fulltext.pdf
  • Rok: 2011
  • DOI: 10.1007/978-3-642-21975-7_19
  • Odkaz: https://doi.org/10.1007/978-3-642-21975-7_19
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The purpose of this paper is to introduce an autonomous mobile robot for the manipulation of an object that is the same size as the robot itself. The construction has to comply with the rules of the Eurobot competition. We provide an in-detail description of the omniwheel undercarriage, its motion, and the object manipulator. The paper also provides a brief insight into the robot's planned intelligence and its vision subsystem.

AR Drone as a Platform for Robotic Research and Education

  • DOI: 10.1007/978-3-642-21975-7_16
  • Odkaz: https://doi.org/10.1007/978-3-642-21975-7_16
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This paper presents the AR-Drone quadrotor helicopter as a robotic platform usable for research and education. Apart from the description of its hardware and software, we discuss several issues regarding the drone's equipment, abilities and performance. We show how to perform the basic tasks of position stabilization, object following and autonomous navigation. Moreover, we demonstrate the drone's ability to act as an external navigation system for a formation of mobile robots. To further demonstrate the drone's utility for robotic research, we describe experiments in which the drone has been used. We also introduce a freely available software package, which allows researchers and students to quickly overcome the initial problems and focus on more advanced issues.

Bringing Reality to Evolution of Modular Robots: Bio-Inspired Techniques for Building a Simulation Environment in the SYMBRION Project

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Two neural network (NN) based techniques for building a simulation environment, which is employed for the evolution of modular robots in the Symbrion project, are presented in this paper. The methods are able to build models of the real world with variable accuracy and amount of stored information, depending on the performed tasks and evolutionary processes. The presented algorithms are verified via experiments in scenarios designed to demonstrate the process of autonomous creation of complex artificial organisms. The performance of the methods is compared in experiments with real data, and their employability in modular robotics is discussed. Besides this, the entire process of environment data acquisition and pre-processing during the real evolutionary experiments in the Symbrion project is briefly described.

Estimation of Mobile Robot Pose from Optical Mouses

  • Autoři: Mudrová, L., prof. Ing. Jan Faigl, Ph.D., Halgašík, J., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: Eurobot Conference 2010, International Conference on Research and Education in Robotics. Bern: University of Applied Sciences, 2011, pp. 93-107. ISSN 1865-0929. ISBN 978-3-642-27271-4.
  • Rok: 2011
  • DOI: 10.1007/978-3-642-27272-1_9
  • Odkaz: https://doi.org/10.1007/978-3-642-27272-1_9
  • Pracoviště: Katedra kybernetiky, Katedra počítačů
  • Anotace:
    This paper describes a simple method of dead-reckoning based on off-the-shelf components: optical mice and a laptop. The problem is formulated as finding a transformation from the positions of the mice to the position of the robot. The formulation of the transformation is based on a method already used in range-based localization. Besides the solution of the transformation, the paper provides a description of the practical application of mouse-based localization for a home-made robot, and also covers the identification and mouse data-reading procedures. The presented approach has been evaluated in several real experiments, and the proposed localization provides results competitive with odometry based on high-precision stepper motors.
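
As a rough illustration of the transformation from mouse readings to robot motion, the sketch below uses the textbook rigid-body least-squares formulation, which may differ from the paper's exact derivation; the mouse axes are assumed aligned with the robot frame and all names are illustrative.

```python
# Hedged sketch: recover planar body motion (dx, dy, dtheta) from the displacement
# readings of optical mice rigidly mounted on the robot (small-angle model).
import numpy as np

def body_motion_from_mice(mouse_positions, mouse_displacements):
    """mouse_positions: (x_i, y_i) mounting points in the robot frame.
    mouse_displacements: (dx_i, dy_i) readings over one sampling period."""
    A, b = [], []
    for (x, y), (dx, dy) in zip(mouse_positions, mouse_displacements):
        # displacement of a rigid-body point (x, y): [dx_r - dtheta*y, dy_r + dtheta*x]
        A.append([1.0, 0.0, -y]); b.append(dx)
        A.append([0.0, 1.0,  x]); b.append(dy)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(sol)  # (dx_r, dy_r, dtheta)

# Example: two mice mounted 0.1 m to the left and right of the robot centre.
print(body_motion_from_mice([(0.0, 0.1), (0.0, -0.1)],
                            [(0.009, 0.001), (0.011, 0.001)]))
```

With more than two mice, the same least-squares step also averages out individual sensor noise.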

Motion planning and coordination of heterogeneous formations of mobile robots and unmanned aerial vehicles

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Formations of mobile robots can accomplish more challenging tasks than a single robot. However, current methods for formation control rely on accurate localization of the robots within the formation. In this paper, we propose a localization and navigation system for a formation of autonomous robots. In our method, the robots within the formation are localized using camera data from a helicopter flying above the formation. The proposed localization system has been experimentally verified in a search & rescue scenario.

The Minor Specialization Robotics at FEE CTU in Prague

  • Autoři: Kulich, M., Přeučil, L., Košnar, K., doc. Ing. Tomáš Krajník, Ph.D., Chudoba, J.
  • Publikace: Proceedings of 2nd International Conference on Robotics in Education. Vienna: INNOC - Austrian Society for Innovative Computer Sciences, 2011, pp. 71-78. ISBN 978-3-200-02273-7. Available from: http://www.innoc.at/fileadmin/user_upload/_temp_/RiE/Proceedings/43.pdf
  • Rok: 2011
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The following paper presents the core concepts and ideas of the minor specialization entitled Robotics, which was launched in 2010 and guides students from the fundamental concepts of information processing in robotics and basic robot control to the latest approaches to robot autonomy, cognition, collective robotics and intelligent mobile robotics.

Uncertainty of Mobile Robot Localization in Cooperative Inspection Tasks

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This paper presents an overview of the first year of a two-year project aiming at multi-goal path planning and navigation methods for a team of real mobile robots performing an inspection task in known environments. Since real robots are considered, their localization provides a position estimate with an uncertainty. This uncertainty is modeled by an iterative equation, which describes the evolution of the position error for a robot navigated by a reliable and provably stable method. In this project, we incorporate the model into the path planning algorithm for finding a path visiting a given set of locations. The proposed planning procedure considers not only the length of the planned path, but also keeps the robot's position error low at the locations. The experiments show that although the planned path is longer, the reliability of visiting the inspected locations is increased.

A Monocular Navigation System for RoboTour Competition

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper, we present a mobile robot navigation system used in the RoboTour challenge. We describe the basic principles of the navigation methods and show how to combine monocular vision and odometry. We propose to use the monocular vision to determine only the robot's heading and the odometry to estimate only the traveled distance. We show that the heading estimation itself can suppress cumulative odometric errors and outline a mathematical proof of this statement. The practical result of the proof is that even simple algorithms capable of estimating just the heading can be used as a basis for "record and replay" techniques. Besides the navigational principles, the practical implementation of our navigation system is described. It is based on image processing algorithms for path following and landmark-based crossing traversal. An overview of experimental results is presented as well.

A Simple Yet Reliable Visual Navigation System

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    We present a simple monocular camera-based navigation system for an autonomous vehicle. The method utilizes off-the-shelf components (camera, compass and odometry) and standard algorithms. It does not require any additional infrastructure such as radio beacons or GPS. The basic idea of our system is a simple, yet novel method of position estimation based on monocular vision and odometry. Contrary to traditional localization methods, which use advanced mathematical methods to determine the vehicle position, our method uses a more practical approach: a monocular vision technique determines the heading of the vehicle and the odometry is used to estimate the traveled distance. Though the system is simple, it can deal with variable illumination, seasonal changes of the environment, dynamic objects and obstacles. We believe that our navigation method is useful in areas where the GPS signal suffers from occlusions and reflections, e.g. forests or canyons.
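
The heading-from-vision, distance-from-odometry idea that recurs in these navigation papers can be summarized by a tiny dead-reckoning sketch; this is illustrative Python under simplifying assumptions, not the authors' code.

```python
# Minimal illustrative sketch: the heading comes from vision or a compass, while only
# the travelled distance is taken from the wheel odometry, so heading errors do not
# accumulate in the dead-reckoned position.
import math

def update_pose(x, y, heading_from_vision, distance_from_odometry):
    """One dead-reckoning step with an externally corrected heading."""
    x += distance_from_odometry * math.cos(heading_from_vision)
    y += distance_from_odometry * math.sin(heading_from_vision)
    return x, y

# Replaying a 3 m recorded segment in 0.1 m odometry increments; the visual heading
# correction is modelled here as a constant small offset from the segment azimuth.
x, y, azimuth = 0.0, 0.0, math.radians(30)
for _ in range(30):
    x, y = update_pose(x, y, azimuth + math.radians(1.0), 0.1)
print(x, y)
```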

A Visual Navigation System for RoboTour Competition

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper, we present our approach to the navigation system for the RoboTour challenge. We describe our intentions, ideas and the main principles of our navigation methods, which led to the system that won in 2008 and 2009. The main idea of our system is a simple yet novel method of position estimation based on monocular vision and odometry. Unlike in other systems, the monocular vision is used to determine only the robot's heading and the odometry is used to estimate only the traveled distance. We show that the heading estimation itself can suppress cumulative odometric errors and prove this statement mathematically and experimentally. The practical result of the proof is that even simple algorithms capable of estimating just the heading can be used as a basis for "record and replay" techniques. Besides the navigational principles, the practical implementation of our navigation system is described.

Airport snow shoveling

  • DOI: 10.1109/IROS.2010.5653747
  • Odkaz: https://doi.org/10.1109/IROS.2010.5653747
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper, we present the results of a feasibility study of airport snow shoveling with multiple formations of autonomous snowplow robots. The main idea of the approach is to form temporary coalitions of vehicles, whose size depends on the width of the roads to be cleaned. We propose to divide the problem of snow shoveling into the subproblems of task allocation and motion coordination. For the task allocation, we designed a multi-agent method applicable in the dynamic environment of airports. The motion coordination part focuses on generating trajectories for the vehicle formations based on the output of the task allocation module. Furthermore, we have developed a novel approach to stabilizing formations into variable shapes depending on the width of the runways.

Cognitive World Modeling

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The chapter introduces possibilities and principal approaches to knowledge gathering, preprocessing and storage in autonomous mobile robots and artificial organisms. These may comprise "classical AI" concepts as well as "new AI" principles, where each approach may bring major advantages or suffer from certain drawbacks.

Jak to vidí roboti - I

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Although there are many definitions of intelligence, most agree that intelligence involves comprehending and understanding the surrounding world. A truly intelligent robot must therefore be able to perceive its environment and turn the data acquired by its sensors into information about its surroundings. While robotics has mainly relied on rangefinding sensors, which make it easy to detect surrounding objects, cameras have been gaining ground in recent years.

Jak to vidí roboti - II

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this part, we make do with a laptop, which will later control the robot. First, we focus on acquiring the image, i.e. transferring it to the robot's control computer. We will use software that can read the image from the camera and display it on the screen.

Jak to vidí roboti - III

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this part, we introduce a simple method that allows the robot to track an object of a known color. Before that, we get acquainted with some camera parameters whose settings affect the quality of the captured image and thus the reliability of robotic applications.
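
A hedged re-sketch of this kind of color-based tracking in OpenCV/Python is shown below; the original series used its own code, and the HSV bounds here are arbitrary example values.

```python
# Threshold the image by a learned colour range and steer towards the centroid
# of the matching pixels (illustrative OpenCV/NumPy sketch).
import cv2
import numpy as np

def find_colour_target(bgr_image, lower_hsv=(100, 120, 50), upper_hsv=(130, 255, 255)):
    """Return the (cx, cy) centroid of the matching pixels, or None if nothing matches."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] < 1e-3:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

def steering_command(image_width, centroid, gain=0.005):
    """Turn proportionally to the horizontal offset of the blob from the image centre."""
    return 0.0 if centroid is None else gain * (image_width / 2 - centroid[0])

# Tiny synthetic demo: a blue square in an otherwise black image.
img = np.zeros((120, 160, 3), dtype=np.uint8)
img[40:60, 100:120] = (255, 0, 0)          # pure blue in BGR
c = find_colour_target(img)
print(c, steering_command(160, c))
```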

Jak to vidí roboti - IV

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In the previous part, we introduced simple algorithms for finding objects in an image and showed how to use them to control a robot. In this part, we focus on improving these methods and making them more efficient. We show how to automatically set the camera exposure so that the acquired image is suitable for further processing, speed up the method that compares a given pixel with a learned model, and improve the algorithm for finding the centers of colored objects so that it can cope with a larger number of objects of the sought color.

Jak to vidí roboti - V

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In the previous part, we improved our algorithms for finding objects of a given color and showed how to automatically set the camera exposure so that the image is suitable for further processing. This time we describe a simple algorithm for recognizing a path in the image and introduce two robots on which we tested our algorithms.

Jak to vidí roboti - VI

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In the previous parts, we presented simple methods that allow a robot to perceive its surroundings and realize simple behaviors. To be able to use more sophisticated computer vision methods, we need to be able to express the geometric relations between the observed scene and the image.

Large Scale Mobile Robot Navigation and Map Building

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: Workshop 2010. Praha: České vysoké učení technické v Praze, 2010, pp. 76-77. CTU Reports. ISBN 978-80-01-04513-8.
  • Rok: 2010
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    We present a simple monocular navigation system for a mobile robot based on the map-and-replay technique. The presented method is robust, easy to implement, does not require sensor calibration or a structured environment, and its computational complexity is independent of the environment size. The method can navigate a robot while sensing only one landmark at a time, making it more robust than other monocular approaches. The aforementioned properties of the method allow even low-cost robots to act effectively in large outdoor and indoor environments with only natural landmarks.

Mobile Robotics at FEE CTU

  • Autoři: prof. Ing. Jan Faigl, Ph.D., doc. Ing. Tomáš Krajník, Ph.D., Košnar, K., Szücsová, H., Chudoba, J., Grimmer, V., Přeučil, L.
  • Publikace: First International Conference on Robotics in Education, Bratislava. Bratislava: Slovak University of Technology in Bratislava, 2010. pp. 43-48. ISBN 978-80-227-3353-3.
  • Rok: 2010
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In this paper, we describe the concepts and main ideas of the labs of the Mobile Robotics course at FEE, CTU in Prague, and present the experience gained from three years of teaching the course. We consider the students' contact with real hardware and real sensor data as the most important part of mobile robotics, as a mobile robot can quickly lose information about its position, in contrast to stationary robotic manipulators. To achieve the desired pedagogical goals, we decided to develop a new small platform based mostly on off-the-shelf components, with sufficient computational power to use the Player robotic framework. The labs are organized into four consecutive assignments and a final assignment that combines the students' results from the previous tasks. The final assignment is to create an algorithm that navigates the mobile robot in order to create a topological map of the environment and reuse this map for later navigation.

RoboTour 2009 - soutěž outdoorových robotů

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    At the end of September last year, the fourth edition of the RoboTour 2009 autonomous robot contest took place in Brno's Lužánky park. The contest is a simplified analogy of orienteering for mobile robots.

Simple, Yet Stable Bearing-Only Navigation

  • DOI: 10.1002/rob.20354
  • Odkaz: https://doi.org/10.1002/rob.20354
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This article describes a simple monocular navigation system for a mobile robot based on the map-and-replay technique. The presented method is robust, easy to implement, does not require sensor calibration or a structured environment, and its computational complexity is independent of the environment size. The method can navigate a robot while sensing only one landmark at a time, making it more robust than other monocular approaches. The aforementioned properties of the method allow even low-cost robots to act effectively in large outdoor and indoor environments with natural landmarks only. The basic idea is to utilize monocular vision to correct only the robot's heading, leaving the distance measurements to the odometry. The heading correction itself can suppress the odometric error and prevent the overall position error from diverging. The influence of map-based heading estimation and odometric errors on the overall position uncertainty is examined.
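
The boundedness argument can be hinted at with a simple linear error model; this is a hedged illustration of the reasoning, not the article's exact formulation. The visual heading correction contracts the error component perpendicular to the current segment, odometry adds bounded noise, and turning to the next segment rotates the two roles, so the overall error stays bounded.

```latex
% Illustrative error-evolution model (assumed form, not the article's exact equations):
% e_k  ... position error after segment k,
% R(phi_k) ... rotation into the frame of segment k,
% 0 <= m < 1 ... contraction of the lateral error by the visual heading correction,
% s_k  ... bounded odometric noise accumulated along the segment.
\[
  \mathbf{e}_{k+1} = \mathbf{R}(\varphi_k)
  \begin{pmatrix} 1 & 0 \\ 0 & m \end{pmatrix}
  \mathbf{R}(\varphi_k)^{\top}\,\mathbf{e}_k + \mathbf{s}_k ,
  \qquad 0 \le m < 1 .
\]
% If consecutive segment headings differ (a "reasonable" trajectory, e.g. a regular
% polygon), the product of these contraction matrices has spectral radius below one,
% so the norm of e_k remains bounded despite the bounded noise terms s_k.
```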

Surveillance Planning with Localization Uncertainty for UAVs

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This paper presents a new multi-goal path planning method that incorporates localization uncertainty in a visual inspection surveillance task. It is shown that the reliability of the executed plan is increased if the localization uncertainty of the used navigation method is taken into account during path planning. The navigation method follows the map-and-replay technique based on a combination of monocular vision and dead-reckoning. The mathematical description of the navigation method allows efficient computation of the evolution of the robot's position uncertainty, which is used in the proposed path planning algorithm. The algorithm minimizes the length of the inspection path while decreasing the robot's position error at the goals. The presented experimental results indicate that the probability of visiting the goals can be increased by the proposed algorithm.

A Mobile Robot for EUROBOT Mars Challenge

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Chudoba, J., Fišer, O.
  • Publikace: Research and Education in Robotics: EUROBOT 2008 - Revised Selected Papers. Heidelberg: Springer, 2009, pp. 107-118. Communications in Computer and Information Science. ISSN 1865-0929. ISBN 978-3-642-03557-9.
  • Rok: 2009
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The aim of this paper is to present an intelligent autonomous robot for the EUROBOT 08 competition. In this "Mission to Mars", two robots attempt to gather, sort and dump objects scattered on a planar rectangular play-field. This paper describes the robot hardware, i.e. the electromechanics of the drive, chassis and extraction mechanism, and the software, i.e. localization, collision avoidance, motion control and planning algorithms. The experience gained by participating in both the national and international rounds is evaluated.

A Mobile Robot for Small Object Handling

  • Autoři: Fišer, O., Szücsová, H., Grimmer, V., Popelka, J., Ing. Vojtěch Vonásek, Ph.D., doc. Ing. Tomáš Krajník, Ph.D., Chudoba, J.
  • Publikace: EUROBOT 2009 - International Conference on Research and Education in Robotics. Berlin: Springer-Verlag, 2009. pp. 47-60. ISSN 1865-0929. ISBN 978-3-642-16369-2.
  • Rok: 2009
  • DOI: 10.1007/978-3-642-16370-8_5
  • Odkaz: https://doi.org/10.1007/978-3-642-16370-8_5
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The aim of this paper is to present an intelligent autonomous robot capable of small object manipulation. The design of the robot is influenced mainly by the rules of the EUROBOT 09 competition. In this challenge, two robots pick up objects scattered on a planar rectangular playfield and use these elements to build models of Hellenistic temples. This paper describes the robot hardware, i.e. the electro-mechanics of the drive, chassis and manipulator, as well as the software, i.e. localization, collision avoidance, motion control and planning algorithms.

FPGA-based Speeded Up Robust Features

  • DOI: 10.1109/TEPRA.2009.5339646
  • Odkaz: https://doi.org/10.1109/TEPRA.2009.5339646
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    We present an implementation of Speeded Up Robust Features (SURF) on a Field Programmable Gate Array (FPGA). The SURF algorithm extracts salient points from an image and computes descriptors of their surroundings that are invariant to scale, rotation and illumination changes. Interest point detection and feature descriptor extraction are often used as the first stage in autonomous robot navigation, object recognition, tracking etc. However, detection and extraction are computationally demanding and therefore cannot be used in systems with limited computational power. We took advantage of the algorithm's natural parallelism and implemented its most demanding parts in FPGA logic. Several modifications of the original algorithm have been made to increase its suitability for FPGA implementation.
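
The computationally heavy stage mentioned above, box-filter responses over an integral image, is what parallelizes well in hardware. A simplified CPU sketch is given below; the filter geometry is a rough stand-in for the real SURF 9x9 kernels, and this is not the paper's FPGA design.

```python
# Integral image plus box-filter approximation of the Hessian determinant,
# the core operation that SURF evaluates densely over the image.
import numpy as np

def integral_image(gray):
    return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of gray[r0:r1, c0:c1] in O(1) via the integral image (r1, c1 exclusive)."""
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0: s -= ii[r0 - 1, c1 - 1]
    if c0 > 0: s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: s += ii[r0 - 1, c0 - 1]
    return s

def hessian_response(ii, r, c, l=3):
    """Rough box-filter stand-ins for the SURF second-derivative kernels at (r, c)."""
    Dyy = (box_sum(ii, r - 3*l, c - l, r - l, c + l)       # top band
           - 2 * box_sum(ii, r - l, c - l, r + l, c + l)   # middle band
           + box_sum(ii, r + l, c - l, r + 3*l, c + l))    # bottom band
    Dxx = (box_sum(ii, r - l, c - 3*l, r + l, c - l)
           - 2 * box_sum(ii, r - l, c - l, r + l, c + l)
           + box_sum(ii, r - l, c + l, r + l, c + 3*l))
    Dxy = (box_sum(ii, r - l, c - l, r, c) - box_sum(ii, r - l, c, r, c + l)
           - box_sum(ii, r, c - l, r + l, c) + box_sum(ii, r, c, r + l, c + l))
    return Dxx * Dyy - (0.9 * Dxy) ** 2

ii = integral_image(np.random.rand(64, 64))
print(hessian_response(ii, 32, 32))
```

Because each response depends only on a handful of integral-image lookups, many of them can be evaluated in parallel, which is what makes the algorithm attractive for FPGA logic.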

Informed Rapidly Exploring Random Tree

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    During the last decade, Rapidly Exploring Random Trees (RRT) became widely used for solving the motion planning problem in mobile robotics. Poor performance of the algorithm in environments with narrow passages has been noticed, and several methods were proposed to improve its performance in such environments. This paper describes a novel approach for improving the performance of the RRT algorithm in environments with narrow passages. The proposed method uses a precomputed auxiliary path which guides the RRT algorithm through the environment. The method has been compared with several RRT-like algorithms, and the results show that it can find a solution in a shorter time than the other RRT-like algorithms.

LaMa - Large Maps Framework

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The Large Maps framework is intended to acquire, store and handle spatial knowledge about large, diverse environments. The framework focuses on providing information suitable for navigation, localization, spatial reasoning, planning, and human-machine and machine-machine interaction. It stores the information necessary to distinguish one place from another and to traverse between them. Its modular architecture allows the incorporation and cooperation of various methods, sensors and behaviors within the framework.

Mobile Robot for "Mission to Mars" challenge

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    Each year, the EUROBOT association hosts a robotic competition. This competition is organized for students, young scientists and anyone interested in robotics, and gives people from different countries a chance to share experience and exchange new ideas. For the second year, a robotic team called "FelBot" prepared an intelligent autonomous robot for this challenge. Building a robot means not only developing the robot hardware, i.e. the chassis or the electromechanics of the drive, but also equipping the robot with software, such as collision avoidance, planning algorithms, localization etc. This paper describes the mobile robot built by the "FelBot" team for this robotic cup.

Robot z ČVUT vítězem RoboTour 2009

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    At the end of September, the fourth edition of the RoboTour 2009 autonomous robot contest took place in Brno's Lužánky park. Out of fifteen competing teams, the LEE team from the Department of Cybernetics, FEE CTU, took first place with 218 points, the RoboAuto team from Brno finished second with 168 points, and the team of Radioklub Písek took third place with 56 points.

RRT-Path: a guided Rapidly exploring Random Tree

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    During the last decade, Rapidly Exploring Random Trees (RRT) became widely used for solving the motion planning problem in various areas. Poor performance of these algorithms has been noticed in environments with narrow passages, and several variants have been developed to address this issue. This paper presents a new variation of the RRT designed for sampling environments with narrow passages. The performance of the proposed method has been experimentally verified, and the results are compared with the original RRT, RRT-Bidirectional and RRT-Blossom algorithms.

A robot for Mission to Mars

  • Autoři: Fišer, O., Chudoba, J., doc. Ing. Tomáš Krajník, Ph.D.,
  • Publikace: Proceedings of the EUROBOT Conference 2008. Praha: MATFYZPRESS, vydavatelství Matematicko-fyzikální fakulty UK, 2008, pp. 157-166. ISBN 978-80-7378-042-5.
  • Rok: 2008
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The aim of this paper is to present an intelligent autonomous robot for the EUROBOT 08 competition. In this "Mission to Mars", two robots attempt to gather, sort and dump objects scattered on a planar rectangular play-field. This paper describes both the robot hardware, i.e. the electromechanics of the drive, chassis and extraction mechanism, and the software, i.e. localization, collision avoidance, motion control and planning algorithms.

A Simple Visual Navigation System with Convergence Property

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Přeučil, L.
  • Publikace: European Robotics Symposium 2008. Heidelberg: Springer, 2008. p. 283-292. ISSN 1610-7438. ISBN 978-3-540-78315-2.
  • Rok: 2008
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The aim of this paper is to present a convergence property of a simple vision-based navigation system for a mobile robot. A robot equipped with a single camera is guided by a human operator along a path consisting of straight segments. During this guided tour, local image invariants are extracted from the acquired frames and odometric data are collected. When navigating the learned path, the vision is used to reckon the direction to the start point of the next straight segment, while odometric measurements are utilized to estimate the distance to this point. A simple linear model of this navigation system is formulated and its properties are examined. We state a theorem which claims that, for a limited odometric error and a "reasonable" trajectory, the robot's uncertainty in position estimation does not diverge. A formal proof of this theorem is given for regular polygonal trajectories, and the convergence theorem is also experimentally verified.

A Simple Visual Navigation System with Convergence Property

  • Autoři: doc. Ing. Tomáš Krajník, Ph.D., Přeučil, L.
  • Publikace: Proceedings of Workshop 2008. Praha: Czech Technical University in Prague, 2008, ISBN 978-80-01-04016-4.
  • Rok: 2008
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    In recent years, as the computational power of common systems increased and image processing became possible in real-time, the means of using vision to navigate mobile robots have been investigated. We have developed a simple Map-building based [1] system, which utilizes a single camera. Similar to [2],[3] our system has to learn the environment during a teleoperated drive. Unlike other visual navigation systems, which are based on direction assessment and recognition of significant places in the environment, we use camera sensing only to correct small-scale errors in movement direction. Positions of significant locations, i.e. places where the robot changes its movement direction significantly, are estimated by odometric measurements. We explore properties of such landmark navigation and state that for some trajectories, the camera readings can correct odometry imprecision without explicitly localizing the robot.

AI Support for a Gaze-Controlled Wheelchair

  • Autoři: Novák, P., doc. Ing. Tomáš Krajník, Ph.D., Přeučil, L., Fejtová, M., Štěpánková, O.
  • Publikace: Proceedings of COGAIN 2008 'Communication, Environment and Mobility Control by Gaze'. Praha: CTU Publishing House, 2008, pp. 19-22. ISBN 978-80-01-04151-2.
  • Rok: 2008
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    The I4Control system is a wearable system for gaze-computer interaction that is able to simulate the function of a joystick or to ensure a selection from a grid-like structure using an appropriate GUI. Consequently, I4Control can serve as a single input device for controlling a wheelchair in both of the above-mentioned modes. This is certainly true from a purely technical point of view, but it is not enough, because the safety of the resulting system has to be ensured. That is why special attention has to be given to questions concerning the reliability of the acquired signal and the various ways it can be influenced or obscured. The fundamental danger related to gaze-based control comes from physiological reactions to certain stimuli that we have built in to protect our eyes and even ourselves: we close our eyes when a strong light flashes, we look in the direction of a loud sound, etc.

Visual Topological Mapping

  • Autoři: Košnar, K., doc. Ing. Tomáš Krajník, Ph.D., Přeučil, L.
  • Publikace: European Robotics Symposium 2008. Heidelberg: Springer, 2008. p. 333-342. ISSN 1610-7438. ISBN 978-3-540-78315-2.
  • Rok: 2008
  • Pracoviště: Katedra kybernetiky
  • Anotace:
    We present an outdoor topological exploration system based on visual recognition. The robot moves through a graph-like environment and creates a topological map, where edges represent paths and vertices their intersections. The algorithm can handle indistinguishable crossings and close loops in the environment with the help of one marked place. The visual navigation system supplies path-traversing and crossing-detection abilities. Path traversing is purely reactive and relies on color segmentation of an image taken by the on-board camera. The crossing passage algorithm reports the azimuths of the paths leading out of a crossing to the topological subsystem, which decides which path to traverse next. Compass and odometry are then utilized to move the robot to the beginning of the selected path. The performance of the proposed system is tested in simulated and real outdoor environments using a P3AT robotic platform.
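
The map representation described above can be pictured as a small graph structure; the sketch below is illustrative only (names and layout are not taken from the paper). Vertices are crossings, and each outgoing path is keyed by the azimuth reported by the crossing detector.

```python
# Illustrative topological map: crossing -> {azimuth_deg: neighbouring crossing or None}.
class TopologicalMap:
    def __init__(self):
        self.edges = {}

    def add_crossing(self, v, azimuths):
        """Register crossing v with the azimuths of paths leaving it (targets unknown)."""
        d = self.edges.setdefault(v, {})
        for a in azimuths:
            d.setdefault(a, None)

    def connect(self, v, azimuth, w):
        """Record that the path leaving v under `azimuth` ends at crossing w."""
        self.edges.setdefault(v, {})[azimuth] = w

    def unexplored(self, v):
        """Azimuths at v whose far end is still unknown -- candidates for exploration."""
        return [a for a, w in self.edges.get(v, {}).items() if w is None]

m = TopologicalMap()
m.add_crossing("A", [0, 90, 180])
m.connect("A", 90, "B")
print(m.unexplored("A"))   # -> [0, 180]
```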

Coordination of mobile robot group for unknown environment mapping

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This contribution describes a robotic system for cooperative outdoor exploration. In our case, we use outdoor P3AT robotic platforms communicating over a wireless network, each equipped with a sweeping laser rangefinder, a monocular camera, GPS and a compass. Information obtained from the global positioning system, compass and odometry is fused by Kalman filtering to provide a position estimate supporting global localization. A downward-inclined laser rangefinder is used for reliable detection of dynamic obstacles and recognition of traversable terrain. The information about the texture of traversable terrain and obstacles is then transferred to the vision system and a local map is created. This local map, in the form of an occupancy grid, is utilized for fast collision avoidance algorithms. As the robot moves, the local maps are merged to create a global one, in which frontier regions can be detected and distributed among the individual robots. The laser rangefinder can also be swept to obtain above-ground 3D data.
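
A drastically simplified, assumption-based sketch of the GPS/compass/odometry fusion step is given below; it uses Kalman-style scalar updates per state component and is not the project's actual filter (a real implementation would track a full covariance, e.g. in an EKF).

```python
# Predict the planar pose from odometry, then correct position with a GPS fix and
# heading with a compass reading; each state component keeps an independent variance.
import math

def predict(pose, var, d, dtheta, q_pos=0.02, q_theta=0.01):
    x, y, th = pose
    th += dtheta
    x += d * math.cos(th); y += d * math.sin(th)
    return (x, y, th), (var[0] + q_pos, var[1] + q_pos, var[2] + q_theta)

def correct_scalar(est, var, meas, r):
    k = var / (var + r)               # Kalman gain for a scalar state
    return est + k * (meas - est), (1 - k) * var

def correct(pose, var, gps_xy=None, compass_th=None, r_gps=1.0, r_compass=0.05):
    x, y, th = pose; vx, vy, vth = var
    if gps_xy is not None:
        x, vx = correct_scalar(x, vx, gps_xy[0], r_gps)
        y, vy = correct_scalar(y, vy, gps_xy[1], r_gps)
    if compass_th is not None:
        th, vth = correct_scalar(th, vth, compass_th, r_compass)
    return (x, y, th), (vx, vy, vth)

pose, var = (0.0, 0.0, 0.0), (0.1, 0.1, 0.05)
pose, var = predict(pose, var, d=1.0, dtheta=0.05)
pose, var = correct(pose, var, gps_xy=(1.1, 0.0), compass_th=0.04)
print(pose)
```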

Od osamocených robotů ke kolaborativní robotice

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    From solitary robots to collaborative robotics.

Decision Support by Simulation in a Robotic Soccer Domain

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This paper describes a decision support system for a robotic soccer team. In the software system managing the robots during a soccer match, this component acts as an information provider for other system components. The core of the decision support system is a physics simulator, which can predict the game situation at the moment when a particular decision is delivered to the robotic players. This compensates for the negative effects of transport delays originating in the system. Moreover, this support module also makes it possible to test planning and control algorithms on a virtual system. Its most important contribution is strategic decision support by in-game feasibility testing of proposed actions, evaluating the outcome of each action by physical simulation.

Reasoning and planning for robot soccer

  • Pracoviště: Katedra kybernetiky
  • Anotace:
    This paper presents the architecture of the G-Bots system for robotic soccer. Apart from the description of our open architecture, we focus on new approaches applied in the reasoning and planning components of the system.
