All publications

End-to-end Differentiable Model of Robot-terrain Interactions

  • Department: Vision for Robots and Autonomous Systems
  • Abstract:
    We propose a differentiable model of robot-terrain interactions that delivers the expected robot trajectory given an onboard camera image and the robot control. The model is trained on a real dataset that covers various terrains ranging from vegetation to man-made obstacles. Since robot-endangering interactions are naturally absent in real-world training data, learning the model suffers from a training/testing distribution mismatch, and the quality of the result depends strongly on the model's generalization. We therefore propose a grey-box, explainable, physics-aware, and end-to-end differentiable model that achieves better generalization through strong geometrical and physical priors. Our model, which functions as an image-conditioned differentiable simulation, can generate millions of trajectories per second and provides interpretable intermediate outputs that enable efficient self-supervision. Our experimental evaluation demonstrates that the model outperforms state-of-the-art methods.
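
The "image-conditioned differentiable simulation" can be illustrated with a minimal sketch (all names and the 1-D dynamics below are hypothetical, not the paper's actual model): a rollout in which predicted terrain heights determine the trajectory, so the trajectory error is differentiable with respect to the terrain prediction and can supervise it.

```python
# Minimal sketch of a differentiable terrain-conditioned rollout (illustrative
# only; names and dynamics are hypothetical, not the paper's actual model).

def rollout(heights, v=1.0, dt=0.1):
    """Roll a point robot over a 1-D height profile; returns (x, z) waypoints."""
    traj, x = [], 0.0
    for h in heights:          # predicted terrain height under each footprint cell
        traj.append((x, h))    # the robot's z simply follows the terrain here
        x += v * dt
    return traj

def trajectory_loss(heights, gt_traj):
    """Mean squared z-error against a ground-truth trajectory."""
    pred = rollout(heights)
    return sum((pz - gz) ** 2 for (_, pz), (_, gz) in zip(pred, gt_traj)) / len(gt_traj)

def grad_wrt_heights(heights, gt_traj, eps=1e-6):
    """Finite-difference gradient: the self-supervision signal that an autodiff
    framework would back-propagate into the terrain-prediction network."""
    g = []
    for i in range(len(heights)):
        hp = list(heights); hp[i] += eps
        hm = list(heights); hm[i] -= eps
        g.append((trajectory_loss(hp, gt_traj) - trajectory_loss(hm, gt_traj)) / (2 * eps))
    return g
```

In the actual model, automatic differentiation would replace the finite-difference loop, with the gradient flowing into the network that predicts the terrain from the camera image.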

MonoForce: Self-supervised Learning of Physics-informed Model for Predicting Robot-terrain Interaction

  • DOI: 10.1109/IROS58592.2024.10801353
  • Link: https://doi.org/10.1109/IROS58592.2024.10801353
  • Department: Vision for Robots and Autonomous Systems
  • Abstract:
    While autonomous navigation of mobile robots on rigid terrain is a well-explored problem, navigating on deformable terrain such as tall grass or bushes remains a challenge. To address it, we introduce an explainable, physics-aware and end-to-end differentiable model which predicts the outcome of robot-terrain interaction from camera images, both on rigid and non-rigid terrain. The proposed MonoForce model consists of a black-box module which predicts robot-terrain interaction forces from onboard cameras, followed by a white-box module, which transforms these forces and control signals into predicted trajectories, using only the laws of classical mechanics. The differentiable white-box module allows backpropagating the predicted trajectory errors into the black-box module, serving as a self-supervised loss that measures consistency between the predicted forces and ground-truth trajectories of the robot. Experimental evaluation on a public dataset and our data has shown that while the prediction capabilities are comparable to state-of-the-art algorithms on rigid terrain, MonoForce shows superior accuracy on non-rigid terrain such as tall grass or bushes. To facilitate the reproducibility of our results, we release both the code and datasets.
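
The black-box/white-box split can be sketched as follows (a hypothetical 1-D simplification, not MonoForce itself): the white-box integrates predicted forces with Newtonian mechanics into a trajectory, so a trajectory error back-propagated through it supervises the force prediction.

```python
# Sketch of the white-box idea (hypothetical 1-D simplification): forces from a
# black-box predictor are integrated with classical mechanics into a trajectory,
# so the trajectory error can supervise the force prediction.

def white_box(forces, mass=10.0, dt=0.1, x0=0.0, v0=0.0):
    """Integrate 1-D Newtonian dynamics (a = F/m) with explicit Euler steps."""
    xs, x, v = [], x0, v0
    for f in forces:
        v += (f / mass) * dt   # acceleration from the predicted force
        x += v * dt
        xs.append(x)
    return xs

def self_supervised_loss(forces, gt_xs):
    """MSE between the mechanics rollout and a ground-truth trajectory; in an
    autodiff framework its gradient would flow into the force predictor."""
    pred = white_box(forces)
    return sum((p - g) ** 2 for p, g in zip(pred, gt_xs)) / len(gt_xs)
```

The loss vanishes exactly when the predicted forces reproduce the recorded trajectory, which is the consistency signal described above.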

Self-Supervised Depth Correction of Lidar Measurements From Map Consistency Loss

  • DOI: 10.1109/LRA.2023.3287791
  • Link: https://doi.org/10.1109/LRA.2023.3287791
  • Department: Vision for Robots and Autonomous Systems
  • Abstract:
    Depth perception is considered an invaluable source of information in the context of 3D mapping and various robotics applications. However, point cloud maps acquired using consumer-level light detection and ranging sensors (lidars) still suffer from bias related to local surface properties such as the beam-to-surface incidence angle. This fact has recently motivated researchers to exploit traditional filters, as well as the deep learning paradigm, in order to suppress the aforementioned depth sensor error while preserving geometric and map consistency details. Despite the effort, depth correction of lidar measurements is still an open challenge, mainly due to the lack of clean 3D data that could be used as ground truth. In this letter, we introduce two novel point cloud map consistency losses, which facilitate self-supervised learning of lidar depth correction models on real data. Specifically, the models exploit multiple point cloud measurements of the same scene from different viewpoints in order to learn to reduce the bias based on the constructed map consistency signal. Complementary to the removal of the bias from the measurements, we demonstrate that the depth correction models help to reduce localization drift.
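
A minimal illustration of a map consistency signal (an assumed form, not the letter's exact losses): two scans of the same flat surface are merged in a common frame, and the loss is the height spread of the merged map, which vanishes once a per-scan depth bias is corrected.

```python
# Sketch of a map-consistency loss (illustrative; the per-scan bias model and
# flat-surface assumption are simplifications, not the letter's exact losses).

def merged_map(scan_a, scan_b, bias_correction=0.0):
    """Merge two scans of the same scene, subtracting a learned depth-bias
    correction from the second scan's z coordinates."""
    return scan_a + [(x, y, z - bias_correction) for (x, y, z) in scan_b]

def consistency_loss(points):
    """Variance of z over the merged map: zero iff the scans agree on the
    (flat) surface, so minimizing it self-supervises the bias correction."""
    zs = [z for (_, _, z) in points]
    mean = sum(zs) / len(zs)
    return sum((z - mean) ** 2 for z in zs) / len(zs)
```

No ground-truth geometry is needed: the signal comes entirely from the disagreement between overlapping measurements, which is the key idea of the self-supervised setup.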

Trajectory Optimization using Learned Robot-Terrain Interaction Model in Exploration of Large Subterranean Environments

  • DOI: 10.1109/LRA.2022.3147332
  • Link: https://doi.org/10.1109/LRA.2022.3147332
  • Department: Vision for Robots and Autonomous Systems
  • Abstract:
    We consider the task of active exploration of large subterranean environments with a ground mobile robot. Our goal is to autonomously explore a large unknown area and to obtain accurate coverage and localization of objects of interest (artifacts). The exploration is constrained by the restricted operation time in rescue scenarios, as well as by hard, rough terrain. To this end, we introduce a novel optimization strategy that respects these constraints by maximizing the environment coverage by onboard sensors while producing feasible trajectories with the help of a learned robot-terrain interaction model. The approach is evaluated in diverse simulated subterranean environments, showing the viability of active exploration in challenging scenarios. In addition, we demonstrate that the local trajectory optimization improves global coverage of the environment as well as the overall object detection results.
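
The optimization objective can be sketched as follows (hypothetical scoring on a grid, not the paper's implementation): candidate trajectories are ranked by the number of newly covered cells minus a penalty derived from a learned per-cell traversal cost.

```python
# Sketch of a coverage-vs-feasibility trajectory score (hypothetical scoring
# on an integer grid, not the paper's implementation).

def score(traj, covered, terrain_cost, sensor_range=1, w=10.0):
    """Newly covered cells minus a weighted traversal-cost penalty."""
    seen = set(covered)
    for (x, y) in traj:
        # cells observable by onboard sensors from this waypoint
        for dx in range(-sensor_range, sensor_range + 1):
            for dy in range(-sensor_range, sensor_range + 1):
                seen.add((x + dx, y + dy))
    # penalty from the (learned) robot-terrain interaction cost per waypoint
    cost = sum(terrain_cost.get(p, 0.0) for p in traj)
    return len(seen) - len(covered) - w * cost

def best_trajectory(candidates, covered, terrain_cost):
    """Pick the candidate maximizing coverage gain under the feasibility penalty."""
    return max(candidates, key=lambda t: score(t, covered, terrain_cost))
```

In the paper the feasibility term comes from a learned robot-terrain interaction model and the trajectories are optimized continuously; the grid score above only mirrors the trade-off between coverage gain and traversal risk.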

Page maintained by: Ing. Mgr. Radovan Suk