Ing. Lukáš Rustler

All publications

Interactive learning of physical object properties through robot manipulation and database of object measurements

  • Department: Department of Cybernetics, Vision for Robotics and Autonomous Systems
  • Abstract:
    This work presents a framework for automatically extracting physical object properties, such as material composition, mass, volume, and stiffness, through robot manipulation and a database of object measurements. The framework involves exploratory action selection to maximize learning about objects on a table. A Bayesian network models conditional dependencies between object properties, incorporating prior probability distributions and uncertainty associated with measurement actions. The algorithm selects optimal exploratory actions based on expected information gain and updates object properties through Bayesian inference. Experimental evaluation demonstrates effective action selection compared to a baseline and correct termination of the experiments when there is nothing more to be learned. The algorithm proved to behave intelligently when presented with trick objects whose material properties conflict with their appearance. The robot pipeline integrates with a logging module and an online database of objects containing over 24,000 measurements of 63 objects with different grippers. All code and data are publicly available, facilitating the automatic digitization of objects and their physical properties through exploratory manipulations.
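  • Illustrative sketch:
    The expected-information-gain (EIG) action selection described in the abstract can be pictured in a few lines of Python. The discretized belief, the likelihood table for a "squeeze" action, and all numbers below are illustrative assumptions, not the paper's implementation; in the framework, exploration terminates once the best achievable EIG drops below a threshold.

      import numpy as np

      def entropy(p):
          """Shannon entropy of a discrete distribution."""
          p = p[p > 0]
          return float(-np.sum(p * np.log(p)))

      def expected_information_gain(belief, likelihood):
          """EIG of one measurement action.
          belief     : prior over K property values, shape (K,)
          likelihood : P(observation | true value), shape (O, K)
          """
          p_obs = likelihood @ belief               # marginal over observations
          eig = entropy(belief)
          for o, po in enumerate(p_obs):
              if po > 0:
                  posterior = likelihood[o] * belief / po   # Bayes update
                  eig -= po * entropy(posterior)
          return eig

      # Toy belief over stiffness {soft, medium, hard} and a noisy squeeze
      # action that mostly separates soft objects from the rest.
      belief = np.array([1/3, 1/3, 1/3])
      squeeze = np.array([[0.8, 0.1, 0.1],    # P(reads "soft"  | true class)
                          [0.2, 0.9, 0.9]])   # P(reads "stiff" | true class)
      print(expected_information_gain(belief, squeeze))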

Efficient Visuo-Haptic Object Shape Completion for Robot Manipulation

  • DOI: 10.1109/IROS55552.2023.10342200
  • Link: https://doi.org/10.1109/IROS55552.2023.10342200
  • Department: Visual Recognition Group, Vision for Robotics and Autonomous Systems
  • Abstract:
    For robot manipulation, a complete and accurate object shape is desirable. Here, we present a method that combines visual and haptic reconstruction in a closed-loop pipeline. From an initial viewpoint, the object shape is reconstructed using an implicit surface deep neural network. The location with the highest uncertainty is selected for haptic exploration, the object is touched, the new information from touch and a new point cloud from the camera are added, the object position is re-estimated, and the cycle is repeated. We extend Rustler et al. (2022) by using a new, theoretically grounded method to determine the points with the highest uncertainty, and we increase the yield of every haptic exploration by adding to the point cloud not only the contact points but also the empty space established through the robot's movement toward the object. Additionally, the solution is compact in that the jaws of a closed two-finger gripper are directly used for exploration. The object position is re-estimated after every robot action, and multiple objects can be present simultaneously on the table. We achieve a steady improvement with every touch using three different metrics and demonstrate the utility of the better shape reconstruction in grasping experiments on the real robot. On average, the grasp success rate increases from 63.3% to 70.4% after a single exploratory touch and to 82.7% after five touches. The collected data and code are publicly available (https://osf.io/j6rkd/, https://github.com/ctu-vras/vishac).
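  • Illustrative sketch:
    The closed loop described in the abstract fits in a short skeleton. The three callables below (get_point_cloud, reconstruct, touch) are hypothetical placeholders for the camera, the implicit-surface network, and the gripper controller; the real implementation lives in the linked repository.

      import numpy as np

      def visuo_haptic_completion(get_point_cloud, reconstruct, touch, n_touches=5):
          """Skeleton of the visuo-haptic shape-completion loop.
          get_point_cloud() -> (N, 3) camera points
          reconstruct(pts)  -> (vertices, per-vertex uncertainty)
          touch(target)     -> (contact_points, free_space_points)
          """
          pts = get_point_cloud()
          for _ in range(n_touches):
              vertices, uncertainty = reconstruct(pts)
              target = vertices[np.argmax(uncertainty)]   # most uncertain spot
              contacts, free_space = touch(target)        # haptic exploration
              # Add the contact points, the empty space swept by the gripper,
              # and a fresh camera view; re-estimate on the next iteration.
              pts = np.vstack([pts, contacts, free_space, get_point_cloud()])
          return reconstruct(pts)[0]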

Active Visuo-Haptic Object Shape Completion

  • DOI: 10.1109/LRA.2022.3152975
  • Link: https://doi.org/10.1109/LRA.2022.3152975
  • Department: Vision for Robotics and Autonomous Systems
  • Abstract:
    Recent advancements in object shape completion have enabled impressive object reconstructions using only visual input. However, due to self-occlusion, the reconstructions have high uncertainty in the occluded object parts, which negatively impacts the performance of downstream robotic tasks such as grasping. In this letter, we propose an active visuo-haptic shape completion method called Act-VH that actively computes where to touch the objects based on the reconstruction uncertainty. Act-VH reconstructs objects from point clouds and calculates the reconstruction uncertainty using IGR, a recent state-of-the-art implicit surface deep neural network. We experimentally evaluate the reconstruction accuracy of Act-VH against five baselines in simulation and in the real world. We also propose a new simulation environment for this purpose. The results show that Act-VH outperforms all baselines and that an uncertainty-driven haptic exploration policy leads to higher reconstruction accuracy than a random policy and a policy driven by Gaussian Process Implicit Surfaces. As a final experiment, we evaluate Act-VH and the best reconstruction baseline on grasping 10 novel objects. The results show that Act-VH reaches a significantly higher grasp success rate than the baseline on all objects. Together, this letter opens the door to using active visuo-haptic shape completion in more complex cluttered scenes.
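  • Illustrative sketch:
    Act-VH targets its touches at the most uncertain parts of the reconstruction. One generic way to obtain a per-point uncertainty from an implicit-surface network, shown purely as an illustration, is the spread of signed-distance predictions across stochastic forward passes; this stands in for, and may differ from, the uncertainty computation actually used with IGR.

      import torch

      def sdf_uncertainty(model, queries, n_samples=20):
          """Per-point uncertainty proxy for an implicit-surface network.
          Assumes model(queries) returns signed distances, shape (N,), and
          that stochastic layers (e.g. dropout) make repeated passes differ.
          """
          model.train()              # keep stochastic layers active
          with torch.no_grad():
              preds = torch.stack([model(queries) for _ in range(n_samples)])
          return preds.std(dim=0)    # high std = uncertain surface region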

Effect of Active and Passive Protective Soft Skins on Collision Forces in Human-robot Collaboration

  • DOI: 10.1016/j.rcim.2022.102363
  • Link: https://doi.org/10.1016/j.rcim.2022.102363
  • Department: Vision for Robotics and Autonomous Systems
  • Abstract:
    Soft electronic skins are one of the means to turn a classical industrial manipulator into a collaborative robot. For manipulators that are already fit for physical human–robot collaboration, soft skins can make them even safer. In this work, we study the after-impact behavior of two collaborative manipulators (UR10e and KUKA LBR iiwa) and one classical industrial manipulator (KUKA Cybertech) in the presence or absence of an industrial protective skin (AIRSKIN). In addition, we isolate the effects of the passive padding and the active contribution of the sensor to the robot's reaction. We present a total of 2250 collision measurements and study the impact force, contact duration, clamping force, and impulse. The collected dataset is publicly available. We summarize our results as follows. For transient collisions, the passive skin properties lowered the impact forces by about 40%. During quasi-static contact, the effect of skin covers – active or passive – cannot be isolated from the collision detection and reaction by the collaborative robots. Important effects of the stop categories triggered by the active protective skin were found. We systematically compare the different settings and relate the empirically established safe velocities to the prescriptions of ISO/TS 15066. In some cases, up to four times the velocity prescribed by ISO/TS 15066 complies with the impact force limits and can thus be considered safe. We propose an extension of the formulas relating impact force and permissible velocity that takes into account the stiffness and compressible thickness of the protective cover, leading to better predictions of the collision forces. At the same time, this work emphasizes the need for in situ measurements, as all the factors we studied – the presence of an active/passive skin, safety stop settings, robot collision reaction, impact direction, and, of course, velocity – affect the force evolution after impact.
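  • Illustrative sketch:
    The velocity prescriptions mentioned above come from the simplified contact model of ISO/TS 15066, which treats the contact as a spring and gives F = v * sqrt(mu * k), i.e. v_max = F_max / sqrt(mu * k) with reduced mass mu. The sketch below shows that model and one plausible form of the proposed extension (body and cover as springs in series); the constants are illustrative, and the paper's exact formula, which also bounds the effect by the compressible thickness of the cover, may differ.

      import numpy as np

      def reduced_mass(m_human, m_robot):
          """Two-body reduced mass from the ISO/TS 15066 contact model."""
          return 1.0 / (1.0 / m_human + 1.0 / m_robot)

      def v_max_iso(f_max, m_human, m_robot, k_body):
          """Permissible velocity: F = v * sqrt(mu * k) solved for v."""
          return f_max / np.sqrt(reduced_mass(m_human, m_robot) * k_body)

      def v_max_with_cover(f_max, m_human, m_robot, k_body, k_cover):
          """Body and soft cover as springs in series: the cover lowers the
          effective stiffness and thus raises the permissible velocity."""
          k_eff = 1.0 / (1.0 / k_body + 1.0 / k_cover)
          return f_max / np.sqrt(reduced_mass(m_human, m_robot) * k_eff)

      # Example with illustrative (non-normative) hand-contact constants.
      print(v_max_iso(140.0, m_human=0.6, m_robot=15.0, k_body=75_000.0))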

3D Collision-Force-Map for Safe Human-Robot Collaboration

  • DOI: 10.1109/ICRA48506.2021.9561845
  • Link: https://doi.org/10.1109/ICRA48506.2021.9561845
  • Department: Vision for Robotics and Autonomous Systems
  • Abstract:
    The need to guarantee the safety of collaborative robots limits their performance, in particular their speed and hence cycle time. The standard ISO/TS 15066 defines the Power and Force Limiting operation mode and prescribes force thresholds that a moving robot is allowed to exert on human body parts during impact, along with a simple formula to obtain the maximum allowed speed of the robot in the whole workspace. In this work, we measure the forces exerted by two collaborative manipulators (UR10e and KUKA LBR iiwa) moving downward against an impact measuring device. First, we empirically show that the impact forces can vary by more than 100 percent within the robot workspace. The forces are negatively correlated with the distance from the robot base and the height in the workspace. Second, we present a data-driven model, the 3D Collision-Force-Map, which predicts impact forces from distance, height, and velocity, and demonstrate that it can be trained on a limited number of data points. Third, we analyze the force evolution upon impact and find that clamping never occurs for the UR10e. We show that the formulas relating robot mass, velocity, and impact forces from ISO/TS 15066 are insufficient, leading to both significant underestimation and overestimation and thus to unnecessarily long cycle times or even dangerous applications. We propose an empirical method that can be deployed to quickly determine the optimal speed and position where a task can be safely performed with maximum efficiency.
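  • Illustrative sketch:
    The 3D Collision-Force-Map is a data-driven regressor from (distance, height, velocity) to peak impact force, which can then be inverted to find the largest safe velocity at a given workspace location. Below, a generic quadratic-feature least-squares model on synthetic data stands in for the paper's actual model and measurements.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      # Synthetic placeholder data: force grows with velocity and, as the
      # paper reports, falls with distance from the base and with height.
      rng = np.random.default_rng(0)
      X = rng.uniform([0.3, 0.1, 0.1], [1.2, 0.8, 1.0], size=(40, 3))  # (d, h, v)
      y = 120 + 150 * X[:, 2] - 60 * X[:, 0] - 40 * X[:, 1] + rng.normal(0, 5, 40)

      force_map = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      force_map.fit(X, y)

      def max_safe_velocity(model, d, h, f_max=140.0):
          """Largest grid velocity whose predicted force stays below f_max."""
          v = np.linspace(0.05, 2.0, 200)
          F = model.predict(np.column_stack(
              [np.full_like(v, d), np.full_like(v, h), v]))
          return v[F <= f_max].max() if np.any(F <= f_max) else None

      print(max_safe_velocity(force_map, d=0.8, h=0.4))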

Multisensorial robot calibration framework and toolbox

  • Authors: Rozlivek, J., Ing. Lukáš Rustler, Štěpánová, K., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids). Piscataway: IEEE, 2021. p. 459-466. ISSN 2164-0580. ISBN 978-1-7281-9372-4.
  • Year: 2021
  • DOI: 10.1109/HUMANOIDS47582.2021.9555803
  • Link: https://doi.org/10.1109/HUMANOIDS47582.2021.9555803
  • Department: Vision for Robotics and Autonomous Systems
  • Abstract:
    The accuracy of robot models critically impacts their performance. With the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drop, and CAD models provide a less accurate basis for the models. On the other hand, the machines often come with a rich set of powerful yet inexpensive sensors, which opens up the possibility of self-contained calibration approaches that can be performed autonomously and repeatedly by the robot. In this work, we first extend the theory dealing with robot kinematic calibration by incorporating new sensory modalities (e.g., cameras on the robot, whole-body tactile sensors), calibration types, and their combinations. We provide a unified formulation that makes it possible to combine traditional approaches (external laser tracker, constraints from contact with the external environment) with self-contained calibration available to humanoid robots (self-observation, self-contact) in a single framework and a single cost function. Second, we present an open-source toolbox for Matlab that provides this functionality, along with additional tools for preprocessing (e.g., dataset visualization) and evaluation (e.g., observability/identifiability). We illustrate some of the possibilities of this tool through the calibration of two humanoid robots (iCub, Nao) and one industrial manipulator (a dual-arm setup with a Yaskawa-Motoman MA1400).
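  • Illustrative sketch:
    The single cost function combining modalities can be pictured as a weighted stack of per-modality residual vectors over shared kinematic parameters, solved in one nonlinear least-squares problem. The sketch below is Python rather than the Matlab of the toolbox, and all names are illustrative, not the toolbox API.

      import numpy as np
      from scipy.optimize import least_squares

      def combined_residuals(theta, modalities):
          """Stack the weighted residuals of every calibration modality so a
          single solver estimates the shared kinematic parameters theta.
          modalities: {name: (predict, X, y, weight)}, where predict(theta, X)
          returns the model-predicted measurements corresponding to y.
          """
          res = []
          for name, (predict, X, y, w) in modalities.items():
              res.append(np.sqrt(w) * (predict(theta, X) - y).ravel())
          return np.concatenate(res)

      # Usage sketch (predict_* are user-supplied kinematic models):
      # modalities = {"self_contact":     (predict_contact, X_c, y_c, 1.0),
      #               "self_observation": (predict_camera,  X_o, y_o, 0.5)}
      # theta_hat = least_squares(combined_residuals, theta_0,
      #                           args=(modalities,)).x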

Spatial calibration of whole-body artificial skin on a humanoid robot: comparing self-contact, 3D reconstruction, and CAD-based calibration

  • Authors: Ing. Lukáš Rustler, Potočná, B., Polic, M., Štěpánová, K., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Published in: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids). Piscataway: IEEE, 2021. p. 445-452. ISSN 2164-0580. ISBN 978-1-7281-9372-4.
  • Year: 2021
  • DOI: 10.1109/HUMANOIDS47582.2021.9555806
  • Link: https://doi.org/10.1109/HUMANOIDS47582.2021.9555806
  • Department: Vision for Robotics and Autonomous Systems
  • Abstract:
    For decades, robots largely lacked the sense of touch. As artificial sensitive skins covering large areas of robot bodies start to appear, the positions of the sensors on the robot body are needed for the skins to be useful to the machines. In this work, a Nao humanoid robot was retrofitted with pressure-sensitive skin on the head, torso, and arms. We experimentally compare the accuracy and effort associated with the following skin spatial calibration approaches and their combinations: (i) combining CAD models and the skin layout in 2D, (ii) 3D reconstruction from images, and (iii) using robot kinematics to calibrate the skin by self-contact. To acquire the 3D positions of taxels on individual skin parts, methods (i) and (ii) were similarly laborious, but 3D reconstruction was more accurate. To align these 3D point clouds with the robot kinematics, two variants of self-contact were employed: skin-on-skin and the use of a custom end effector (finger). In combination with the 3D reconstruction data, mean calibration errors below the radius of individual sensors were achieved (2 mm). A significant perturbation of more than 100 torso taxel positions could be corrected using self-contact calibration, reaching a mean error of approx. 3 mm. This work is not a proof of concept but a deployment of the approaches at scale: the outcome is the actual spatial calibration of all 970 taxels on the robot body. As the different calibration approaches are evaluated in isolation as well as in different combinations, this work provides a guideline applicable to the spatial calibration of different sensor arrays.
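  • Illustrative sketch:
    Aligning a reconstructed taxel point cloud with the robot's kinematic frame, as the self-contact variants above require, reduces at its core to rigid point-set registration. A standard least-squares solution (the Kabsch algorithm) is sketched below; the actual calibration additionally estimates kinematic parameters and establishes correspondences, which this snippet assumes are given.

      import numpy as np

      def kabsch(P, Q):
          """Least-squares rotation R and translation t with R @ P[i] + t ~ Q[i],
          e.g. mapping reconstructed taxel positions P onto the corresponding
          self-contact points Q in the kinematic frame. P, Q: (N, 3).
          """
          cp, cq = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, cq - R @ cp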
