People

doc. Mgr. Matěj Hoffmann, Ph.D.

All publications

Best Practices for Differentiable Soft Robot Modeling and Optimization with the Material Point Method

  • Authors: Bielewski, K., Rozlivek, J., doc. Mgr. Matěj Hoffmann, Ph.D., Bongard, J.
  • Publication: ALIFE 2024: Proceedings of the 2024 Artificial Life Conference. Cambridge: The MIT Press, 2024. p. 87-94.
  • Year: 2024
  • DOI: 10.1162/isal_a_00724
  • Link: https://doi.org/10.1162/isal_a_00724
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Using soft materials to build robots is becoming increasingly popular as the importance of morphological complexity in robot design becomes apparent. Additionally, differentiable physics simulators are increasingly used to optimize robot morphologies and controllers in place of, e.g., evolutionary algorithms, owing to the computational efficiency of gradient descent. One of the most commonly used methods to simulate soft materials is the Material Point Method (MPM), and soft roboticists have implemented the MPM in differentiable robotics simulations and successfully transferred their optimized designs to the real world, validating this approach for real-world soft robot design. However, choosing parameters that render the MPM stable in a differentiable simulator is non-obvious. For this reason, here we introduce for the first time a set of best practices for employing the MPM to design and optimize soft robots using differentiable physics engines. We perform grid searches over many of the parameters involved in MPM to determine simulation stability ranges and performant parameter choices for a displacement task. This will allow newcomers to MPM simulation to rapidly iterate to find parameters for their application.
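  • Illustrative sketch:
    A minimal Python sketch of the grid-search protocol described above. The simulate() function is a made-up stand-in for a differentiable MPM rollout (its divergence condition is invented); only the search-and-filter logic mirrors the abstract.

      import itertools
      import math

      def simulate(params):
          # Stand-in for a differentiable MPM rollout; replace with a real
          # engine. The CFL-like divergence condition is purely illustrative.
          if params["dt"] * params["youngs_modulus"] > 15:
              return float("nan")
          return params["grid_resolution"] * params["dt"] * 1e3

      grid = {
          "dt": [1e-4, 2e-4, 4e-4],           # time step
          "youngs_modulus": [1e4, 5e4, 1e5],  # material stiffness
          "grid_resolution": [32, 64, 128],   # background grid cells per axis
      }

      results = []
      for values in itertools.product(*grid.values()):
          params = dict(zip(grid.keys(), values))
          displacement = simulate(params)
          results.append((params, displacement, math.isfinite(displacement)))

      stable = [(p, d) for p, d, ok in results if ok]
      best = max(stable, key=lambda pd: pd[1])
      print(f"{len(stable)}/{len(results)} stable parameter sets; best: {best}")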

Interactive learning of physical object properties through robot manipulation and database of object measurements

  • Workplace: Department of Cybernetics, Vision for Robots and Autonomous Systems
  • Abstract:
    This work presents a framework for automatically extracting physical object properties, such as material composition, mass, volume, and stiffness, through robot manipulation and a database of object measurements. The framework involves exploratory action selection to maximize learning about objects on a table. A Bayesian network models conditional dependencies between object properties, incorporating prior probability distributions and uncertainty associated with measurement actions. The algorithm selects optimal exploratory actions based on expected information gain and updates object properties through Bayesian inference. Experimental evaluation demonstrates effective action selection compared to a baseline and correct termination of the experiments if there is nothing more to be learned. The algorithm proved to behave intelligently when presented with trick objects with material properties in conflict with their appearance. The robot pipeline integrates with a logging module and an online database of objects, containing over 24,000 measurements of 63 objects with different grippers. All code and data are publicly available, facilitating automatic digitization of objects and their physical properties through exploratory manipulations.
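  • Illustrative sketch:
    A toy version of action selection by expected information gain, assuming a single discrete property (a made-up 3-way material class) and invented per-action confusion matrices; the paper's Bayesian network over several properties is richer than this.

      import numpy as np

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      prior = np.array([1/3, 1/3, 1/3])  # belief over {metal, plastic, foam}

      # Per-action measurement models: rows are true class, entries P(obs|class).
      actions = {
          "squeeze": np.array([[0.7, 0.2, 0.1],
                               [0.2, 0.6, 0.2],
                               [0.1, 0.2, 0.7]]),
          "weigh":   np.array([[0.9, 0.1, 0.0],
                               [0.1, 0.8, 0.1],
                               [0.0, 0.1, 0.9]]),
      }

      def expected_information_gain(belief, likelihood):
          gain = 0.0
          for obs in range(likelihood.shape[1]):
              p_obs = likelihood[:, obs] @ belief            # predictive prob.
              if p_obs > 0:
                  posterior = likelihood[:, obs] * belief / p_obs  # Bayes rule
                  gain += p_obs * (entropy(belief) - entropy(posterior))
          return gain

      best = max(actions, key=lambda a: expected_information_gain(prior, actions[a]))
      print("most informative action:", best)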

Online elasticity estimation and material sorting using standard robot grippers

  • DOI: 10.1007/s00170-024-13678-6
  • Link: https://doi.org/10.1007/s00170-024-13678-6
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Stiffness or elasticity estimation of everyday objects using robot grippers is highly desired for object recognition or classification in application areas like food handling and single-stream object sorting. However, standard robot grippers are not designed for material recognition. We experimentally evaluated the accuracy with which material properties can be estimated through object compression by two standard parallel jaw grippers and a force/torque sensor mounted at the robot wrist, with a professional biaxial compression device used as reference. Gripper effort versus position curves were obtained and transformed into stress/strain curves. The modulus of elasticity was estimated at different strain points and the effect of multiple compression cycles (precycling), compression speed, and the gripper surface area on estimation was studied. Viscoelasticity was estimated using the energy absorbed in a compression/decompression cycle, the Kelvin-Voigt, and Hunt-Crossley models. We found that (1) slower compression speeds improved elasticity estimation, while precycling or surface area did not; (2) the robot grippers, even after calibration, were found to have a limited capability of delivering accurate estimates of absolute values of Young’s modulus and viscoelasticity; (3) relative ordering of material characteristics was largely consistent across different grippers; (4) despite the nonlinear characteristics of deformable objects, fitting linear stress/strain approximations led to more stable results than local estimates of Young’s modulus; and (5) the Hunt-Crossley model worked best to estimate viscoelasticity, from a single object compression. A two-dimensional space formed by elasticity and viscoelasticity estimates obtained from a single grasp is advantageous for the discrimination of the object material properties. We demonstrated the applicability of our findings in a mock single-stream recycling scenario, where plastic, paper, and metal objects were correctly separated from a single grasp, even when compressed at different locations on the object. The data and code are publicly available.
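  • Illustrative sketch:
    A schematic of the core estimation step: turning a gripper effort-position curve into a stress/strain curve and fitting a linear approximation whose slope is the modulus estimate. All numbers and the assumed contact geometry are invented.

      import numpy as np

      position_mm = np.linspace(0.0, 6.0, 30)      # jaw travel while squeezing
      force_n = 12.0 * position_mm + np.random.normal(0, 0.5, 30)  # fake readings

      object_height_mm = 60.0    # undeformed size along the squeeze axis
      contact_area_mm2 = 400.0   # assumed jaw-object contact patch

      strain = position_mm / object_height_mm      # dimensionless
      stress_mpa = force_n / contact_area_mm2      # N/mm^2 == MPa

      # Linear stress/strain fit (reported as more stable than local slope
      # estimates); the slope is the Young's modulus estimate.
      E_mpa, _ = np.polyfit(strain, stress_mpa, 1)
      print(f"estimated Young's modulus: {E_mpa:.2f} MPa")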

PreCNet: Next-Frame Video Prediction Based on Predictive Coding

  • DOI: 10.1109/TNNLS.2023.3240857
  • Link: https://doi.org/10.1109/TNNLS.2023.3240857
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Predictive coding, currently a highly influential theory in neuroscience, has not been widely adopted in machine learning yet. In this work, we transform the seminal model of Rao and Ballard (1999) into a modern deep learning framework while remaining maximally faithful to the original schema. The resulting network we propose (PreCNet) is tested on a widely used next-frame video prediction benchmark, which consists of images from an urban environment recorded from a car-mounted camera, and achieves state-of-the-art performance. Performance on all measures (MSE, PSNR, SSIM) was further improved when a larger training set (2M images from BDD100k) was used, pointing to the limitations of the KITTI training set. This work demonstrates that an architecture carefully based on a neuroscience model, without being explicitly tailored to the task at hand, can exhibit exceptional performance.
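  • Illustrative sketch:
    Not PreCNet itself, but a minimal single-level sketch of the Rao and Ballard (1999) scheme it builds on: a layer predicts its input through generative weights and the latent representation is updated to reduce the prediction error. Sizes and learning rate are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.normal(size=16)        # input (e.g., an image patch)
      W = rng.normal(size=(16, 4))   # generative weights: x_hat = W @ r
      r = np.zeros(4)                # latent representation

      lr = 0.01
      for _ in range(1000):
          x_hat = W @ r              # top-down prediction
          error = x - x_hat          # bottom-up prediction error
          r += lr * (W.T @ error)    # gradient step on squared error w.r.t. r

      print("residual error norm:", np.linalg.norm(x - W @ r))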

Efficient Visuo-Haptic Object Shape Completion for Robot Manipulation

  • DOI: 10.1109/IROS55552.2023.10342200
  • Link: https://doi.org/10.1109/IROS55552.2023.10342200
  • Workplace: Visual Recognition Group, Vision for Robots and Autonomous Systems
  • Abstract:
    For robot manipulation, a complete and accurate object shape is desirable. Here, we present a method that combines visual and haptic reconstruction in a closed-loop pipeline. From an initial viewpoint, the object shape is reconstructed using an implicit surface deep neural network. The location with highest uncertainty is selected for haptic exploration, the object is touched, the new information from touch and a new point cloud from the camera are added, the object position is re-estimated, and the cycle is repeated. We extend Rustler et al. (2022) by using a new theoretically grounded method to determine the points with highest uncertainty, and we increase the yield of every haptic exploration by adding not only the contact points to the point cloud but also the empty space established through the robot's movement toward the object. Additionally, the solution is compact in that the jaws of a closed two-finger gripper are directly used for exploration. The object position is re-estimated after every robot action and multiple objects can be present simultaneously on the table. We achieve a steady improvement with every touch using three different metrics and demonstrate the utility of the better shape reconstruction in grasping experiments on the real robot. On average, the grasp success rate increases from 63.3% to 70.4% after a single exploratory touch and to 82.7% after five touches. The collected data and code are publicly available (https://osf.io/j6rkd/, https://github.com/ctu-vras/vishac).
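  • Illustrative sketch:
    The selection logic of the loop described above, with made-up stand-ins for the reconstruction network and the robot: touch where shape uncertainty is highest, then discount uncertainty around the explored region.

      import numpy as np

      rng = np.random.default_rng(1)
      candidates = rng.uniform(-0.1, 0.1, size=(500, 3))  # points on the mesh
      uncertainty = rng.uniform(0, 1, size=500)           # per-point sigma

      for touch in range(5):
          target = candidates[np.argmax(uncertainty)]
          # robot.touch(target) would go here; we fake the measured contact:
          contact = target + rng.normal(0, 1e-3, 3)
          # After adding the contact (and swept free space) and re-running the
          # reconstruction, uncertainty near the touched region drops:
          dist = np.linalg.norm(candidates - contact, axis=1)
          uncertainty *= 1 - np.exp(-dist / 0.02)
          print(f"touch {touch + 1}: explored point {np.round(target, 3)}")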

Examining Tactile Feature Extraction for Shape Reconstruction in Robotic Grippers

  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Different robotic setups provide tactile feedback about the objects they interact with in different manners. This makes it difficult to transfer the information gained from haptic exploration to different setups and to humans as well. We introduce "touch primitives", a set of object features for haptic shape representation that aim to reconstruct the shape of objects independently of robot morphology. We investigate how precisely the primitives can be extracted from household objects by a commonly used gripper, on a set of objects that vary in size, shape, and stiffness.

Goal-directed tactile exploration for body model learning through self-touch on a humanoid robot

  • DOI: 10.1109/TCDS.2021.3104881
  • Link: https://doi.org/10.1109/TCDS.2021.3104881
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    An early integration of tactile sensing into motor coordination is the norm in animals, but still a challenge for robots. Tactile exploration through touches on the body gives rise to first body models and bootstraps further development such as reaching competence. Reaching to one’s own body requires connections of the tactile and motor space only. Still, the problems of high dimensionality and motor redundancy persist. Through an embodied computational model for the learning of self-touch on a simulated humanoid robot with artificial sensitive skin, we demonstrate that this task can be achieved (i) effectively and (ii) efficiently at scale by employing the computational frameworks for the learning of internal models for reaching: intrinsic motivation and goal babbling. We relate our results to infant studies on spontaneous body exploration as well as reaching to vibrotactile targets on the body. We analyze the reaching configurations of one infant followed weekly between 4 and 18 months of age and derive further requirements for the computational model: accounting for (iii) continuous rather than sporadic touch and (iv) consistent redundancy resolution. Results show the general success of the learning models in the touch domain, but also point out limitations in achieving fully continuous touch.
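  • Illustrative sketch:
    A toy goal-babbling loop on a planar 2-link arm: goals are sampled in task space, the current inverse estimate is perturbed, and the inverse model is refined from observed outcomes. The arm, noise, and nearest-neighbour memory are illustrative, not the paper's implementation.

      import numpy as np

      rng = np.random.default_rng(2)

      def forward(q):  # planar 2-link arm with unit link lengths
          return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                           np.sin(q[0]) + np.sin(q[0] + q[1])])

      memory_q, memory_x = [np.zeros(2)], [forward(np.zeros(2))]

      def inverse_estimate(goal):  # nearest-neighbour inverse model
          i = np.argmin([np.linalg.norm(x - goal) for x in memory_x])
          return memory_q[i]

      for step in range(300):
          goal = rng.uniform(-1.5, 1.5, 2)      # sampled task-space goal
          q = inverse_estimate(goal) + rng.normal(0, 0.2, 2)  # exploration
          memory_q.append(q)                    # learn from the outcome
          memory_x.append(forward(q))

      errors = [np.linalg.norm(forward(inverse_estimate(g)) - g)
                for g in rng.uniform(-1.5, 1.5, (50, 2))]
      print("mean reaching error:", np.mean(errors))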

Perirobot Space Representation for HRI: Measuring and Designing Collaborative Workspace Coverage by Diverse Sensors

  • Authors: Rozlivek, J., Švarný, P., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway: IEEE, 2023. p. 5958-5965. ISSN 2153-0866. ISBN 978-1-6654-9190-7.
  • Year: 2023
  • DOI: 10.1109/IROS55552.2023.10341829
  • Link: https://doi.org/10.1109/IROS55552.2023.10341829
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Two regimes permitting safe physical human-robot interaction, speed and separation monitoring and safety-rated monitored stop, depend on reliable perception of the space surrounding the robot. This can be accomplished by visual sensors (like cameras, RGB-D cameras, LIDARs), proximity sensors, or dedicated devices used in industrial settings like pads that are activated by the presence of the operator. The deployment of a particular solution is often ad hoc and no unified representation of the interaction space or its coverage by the different sensors exists. In this work, we take first steps in this direction by defining the spaces to be monitored, representing all sensor data as information about occupancy, and using occupancy-based metrics to calculate how a particular sensor covers the workspace. We demonstrate our approach in two sensor-placement experiments in three static scenes and one experiment in a dynamic scene. The occupancy representation allows the comparison of the effectiveness of various sensor setups. Therefore, this approach can serve as a prototyping tool to establish the sensor setup that provides the most efficient coverage for the given metrics and sensor representations.
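  • Illustrative sketch:
    A minimal example of the occupancy-based coverage idea: voxelize the monitored workspace and score a sensor setup by the fraction of voxels it observes. The viewing-cone model (no occlusions) and all geometry are invented.

      import numpy as np

      xs = np.linspace(0, 1, 20)              # 1 m cube at 5 cm resolution
      X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
      voxels = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

      def covered(points, origin, direction, half_angle_deg):
          """Voxels inside a simple viewing cone (no occlusion handling)."""
          v = points - origin
          cosang = (v @ direction) / (np.linalg.norm(v, axis=1) + 1e-9)
          return cosang > np.cos(np.radians(half_angle_deg))

      d1 = np.array([1, 1, -1]) / np.sqrt(3)
      d2 = np.array([-1, -1, -1]) / np.sqrt(3)
      cam1 = covered(voxels, np.array([0.0, 0.0, 1.0]), d1, 30)
      cam2 = covered(voxels, np.array([1.0, 1.0, 1.0]), d2, 30)

      for name, mask in [("cam1", cam1), ("cam2", cam2), ("both", cam1 | cam2)]:
          print(f"{name}: {mask.mean():.1%} of workspace voxels observed")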

Shape Reconstruction Task for Transfer of Haptic Information between Robotic Setups

  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Robot morphology, which includes the physical dimensions and shape but also the placement and type of actuators and sensors, is highly variable. This also applies to different robot hands and grippers equipped with force or tactile sensors. Unlike in computer vision, where information from cameras is robot-independent and largely camera-independent, haptic information is morphology-dependent, which makes it difficult to transfer object recognition and other pipelines between setups. In this work, we introduce a shape reconstruction and grasping task to evaluate the success of haptic information transfer between robotic setups, and propose feature descriptors that can help in standardizing the haptic representation of shapes across different robotic setups.

Tactile training facilitates infants' ability to reach to targets on the body

  • Authors: Somogyi, E., Hamilton, M., Chinn, L.K., Jacquey, L., Heed, T., doc. Mgr. Matěj Hoffmann, Ph.D., Lockman, J.J., Fagard, J., O'Regan, K.
  • Publication: Child Development. 2023, 94(3), e154-e165. ISSN 1467-8624.
  • Year: 2023
  • DOI: 10.1111/cdev.13891
  • Link: https://doi.org/10.1111/cdev.13891
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    This longitudinal study investigated the effect of experience with tactile stimulation on infants' ability to reach to targets on the body, an important adaptive skill. Infants were provided weekly tactile stimulation on eight body locations from 4 to 8 months of age (N = 11), comparing their ability to reach to the body to infants in a control group who did not receive stimulation (N = 10). Infants who received stimulation were more likely to successfully reach targets on the body than controls by 7 months of age. These findings indicate that tactile stimulation facilitates the development of reaching to the body by allowing infants to explore the sensorimotor correlations emerging from the stimulation.

Touch Primitives for Gripper-Independent Haptic Object Modeling

  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Due to the large variety of tactile and proprioceptive sensors available for integration with robotic grippers, the data structures for data collected on different robotic setups are different, which makes it difficult to compile and compare these datasets for robot learning. We propose “Touch Primitives”—a gripper-independent representation for the haptic exploration of object shapes which can be generalized across different gripper and sensor combinations. An exploration and grasping task is detailed to test the efficacy of the proposed touch primitive features.

A connectionist model of associating proprioceptive and tactile modalities in a humanoid robot

  • Authors: Malinovská, K., Farkaš, I., Harvanová, J., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: 2022 IEEE International Conference on Development and Learning (ICDL). Piscataway: IEEE, 2022. p. 336-342. ISBN 978-1-6654-1311-4.
  • Year: 2022
  • DOI: 10.1109/ICDL53763.2022.9962195
  • Link: https://doi.org/10.1109/ICDL53763.2022.9962195
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Postnatal development in infants involves building the body schema based on integrating information from different modalities. An early phase of this complex process involves coupling proprioceptive inputs with tactile information during self-touch enabled by motor babbling. Such functionality is also desirable in humanoid robots that can serve as an embodied instantiation of cognitive learning. We describe a simple connectionist model composed of neural networks that learns the proprioceptive-tactile representations on a simulated iCub humanoid robot. Input signals from both modalities – joint angles and touch stimuli on both upper limbs – are first self-organized in neural maps and then connected using a universal bidirectional associative network (UBAL). The model demonstrates the ability to predict touch and its location from proprioceptive information with relatively high accuracy. We also discuss limitations of the model and ideas for future work.

A normative model of peripersonal space encoding as performing impact prediction

  • DOI: 10.1371/journal.pcbi.1010464
  • Link: https://doi.org/10.1371/journal.pcbi.1010464
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Accurately predicting contact between our bodies and environmental objects is paramount to our evolutionary survival. It has been hypothesized that multisensory neurons responding both to touch on the body, and to auditory or visual stimuli occurring near them—thus delineating our peripersonal space (PPS)—may be a critical player in this computation. However, we lack a normative account (i.e., a model specifying how we ought to compute) linking impact prediction and PPS encoding. Here, we leverage Bayesian Decision Theory to develop such a model and show that it recapitulates many of the characteristics of PPS. Namely, a normative model of impact prediction (i) delineates a graded boundary between near and far space, (ii) demonstrates an enlargement of PPS as the speed of incoming stimuli increases, (iii) shows stronger contact prediction for looming than receding stimuli—but critically is still present for receding stimuli when observation uncertainty is non-zero—, (iv) scales with the value we attribute to environmental objects, and finally (v) can account for the differing sizes of PPS for different body parts. Together, these modeling results support the conjecture that PPS reflects the computation of impact prediction, and make a number of testable predictions for future empirical studies.
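  • Illustrative formulation:
    The paper's specific model is not reproduced here, but the general Bayesian Decision Theory recipe it instantiates can be written compactly (notation ours): infer the probability of impending contact h from a noisy observation o of the stimulus, and respond whenever the expected cost of ignoring it exceeds the cost of a protective action.

      p(h \mid o) \propto p(o \mid h)\, p(h),
      \qquad \text{respond} \iff C_{\mathrm{miss}}\, p(h \mid o) > C_{\mathrm{act}}

    The graded near/far boundary, the speed-dependent expansion, and the value scaling then follow from how the likelihood p(o \mid h) and the costs vary with stimulus speed and object value.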

Active Visuo-Haptic Object Shape Completion

  • DOI: 10.1109/LRA.2022.3152975
  • Link: https://doi.org/10.1109/LRA.2022.3152975
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Recent advancements in object shape completion have enabled impressive object reconstructions using only visual input. However, due to self-occlusion, the reconstructions have high uncertainty in the occluded object parts, which negatively impacts the performance of downstream robotic tasks such as grasping. In this letter, we propose an active visuo-haptic shape completion method called Act-VH that actively computes where to touch the objects based on the reconstruction uncertainty. Act-VH reconstructs objects from point clouds and calculates the reconstruction uncertainty using IGR, a recent state-of-the-art implicit surface deep neural network. We experimentally evaluate the reconstruction accuracy of Act-VH against five baselines in simulation and in the real world. We also propose a new simulation environment for this purpose. The results show that Act-VH outperforms all baselines and that an uncertainty-driven haptic exploration policy leads to higher reconstruction accuracy than a random policy and a policy driven by Gaussian Process Implicit Surfaces. As a final experiment, we evaluate Act-VH and the best reconstruction baseline on grasping 10 novel objects. The results show that Act-VH reaches a significantly higher grasp success rate than the baseline on all objects. Together, this letter opens up the door for using active visuo-haptic shape completion in more complex cluttered scenes.

Automatic self-contained calibration of an industrial dual-arm robot with cameras using self-contact, planar constraints, and self-observation

  • Authors: Štěpánová, K., Rozlivek, J., Puciow, F., Krsek, P., Pajdla, T., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: Robotics and Computer-Integrated Manufacturing. 2022, 73. ISSN 0736-5845.
  • Year: 2022
  • DOI: 10.1016/j.rcim.2021.102250
  • Link: https://doi.org/10.1016/j.rcim.2021.102250
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    We present a robot kinematic calibration method that combines complementary calibration approaches: self-contact, planar constraints, and self-observation. We analyze the estimation of the end effector parameters, joint offsets of the manipulators, and calibration of the complete kinematic chain (DH parameters). The results are compared with ground truth measurements provided by a laser tracker. Our main findings are: (1) When applying the complementary calibration approaches in isolation, the self-contact approach yields the best and most stable results. (2) All combinations of more than one approach were always superior to using any single approach in terms of calibration errors and the observability of the estimated parameters. Combining more approaches delivers robot parameters that better generalize to the workspace parts not used for the calibration. (3) Sequential calibration, i.e., calibrating cameras first and then robot kinematics, is more effective than simultaneous calibration of all parameters. In real experiments, we employ two industrial manipulators mounted on a common base. The manipulators are equipped with force/torque sensors at their wrists, with two cameras attached to the robot base, and with special end effectors with fiducial markers. We collect a new comprehensive dataset for robot kinematic calibration and make it publicly available. The dataset and its analysis provide quantitative and qualitative insights that go beyond the specific manipulators used in this work and apply to self-contained robot kinematic calibration in general.
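  • Illustrative sketch:
    A schematic of combining complementary constraints in one optimization: stack residuals from several constraint types (self-contact, plane, camera reprojection) and solve a single nonlinear least-squares problem over the kinematic parameters. The residual functions below are trivial placeholders so the sketch runs; real ones come from the kinematic model.

      import numpy as np
      from scipy.optimize import least_squares

      def contact_residual(theta):       # end-effector pairs should coincide
          return theta[:2] - 0.01
      def plane_residual(theta):         # touch points should lie on the plane
          return theta[2:4] * 0.5
      def reprojection_residual(theta):  # markers should project correctly
          return theta[4:] - 0.02

      def residuals(theta):
          return np.concatenate([contact_residual(theta),
                                 plane_residual(theta),
                                 reprojection_residual(theta)])

      theta0 = np.zeros(6)               # offsets to the nominal DH model
      sol = least_squares(residuals, theta0)
      print("calibrated parameter offsets:", np.round(sol.x, 4))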

Biologically inspired robot body models and self-calibration

  • DOI: 10.1007/978-3-642-41610-1_201-1
  • Link: https://doi.org/10.1007/978-3-642-41610-1_201-1
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Typically, mechanical design specifications provide the basis for a robot model and kinematic and dynamic mappings are constructed and remain fixed during operation. However, there are many sources of inaccuracies (e.g., assembly process, mechanical elasticity, friction). Furthermore, with the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drop, and computer-aided design (CAD) models provide a less accurate basis for the models. Humans, on the other hand, seamlessly control their complex bodies, adapt to growth or failures, and use tools. Exploiting multimodal sensory information plays a key part in these processes. In this chapter, differences between body representations in the brain and robot body models are established and the possibilities for learning robot models in biologically inspired ways are assessed.

Body models in humans and robots

  • Authors: doc. Mgr. Matěj Hoffmann, Ph.D., Longo, M.R.
  • Publication: The Routledge Handbook of Bodily Awareness. Oxon: ROUTLEDGE JOURNALS, TAYLOR & FRANCIS LTD, 2022. p. 185-197. ISBN 978-0-367-33731-5.
  • Year: 2022
  • DOI: 10.4324/9780429321542-18
  • Link: https://doi.org/10.4324/9780429321542-18
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Humans excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth or failures, and using tools. These capabilities are also highly desirable in robots. They are displayed by machines to some extent – yet, as is so often the case, the artificial creatures are lagging behind. The key foundation is an internal representation of the body that the agent – human or robot – has developed. In the biological realm, evidence has been accumulated by diverse disciplines giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. In this chapter, we compare the character of body representations in biology with their robotic counterparts and relate that to the differences in performance that we observe. In some sense, robots have a lot in common with Ian Waterman – "the man who lost his body" – in that they rely on an explicit, veridical body model (body image taken to the extreme) and lack any implicit, multimodal representation (like the body schema) of their bodies. The core of this work is a detailed look at the somatoperceptual processing "pipeline" from inputs (tactile and proprioceptive afference, efferent commands), over "body representations" (superficial schema, postural schema, model of body size and shape), to perceptual processes like spatial localization of touch. A direct comparison with solutions to the same task in robots allows us to make important steps in converting this conceptual schematic into a computational model. As an additional aspect, we briefly look at the question of why robots do not experience body illusions. Finally, we discuss how robots can inform the biological sciences dealing with body representations and which of the features of the "body in the brain" should be transferred to robots, giving rise to more adaptive and resilient, self-calibrating machines.

Effect of Active and Passive Protective Soft Skins on Collision Forces in Human-robot Collaboration

  • DOI: 10.1016/j.rcim.2022.102363
  • Link: https://doi.org/10.1016/j.rcim.2022.102363
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Soft electronic skins are one of the means to turn a classical industrial manipulator into a collaborative robot. For manipulators that are already fit for physical human–robot collaboration, soft skins can make them even safer. In this work, we study the after-impact behavior of two collaborative manipulators (UR10e and KUKA LBR iiwa) and one classical industrial manipulator (KUKA Cybertech), in the presence or absence of an industrial protective skin (AIRSKIN). In addition, we isolate the effects of the passive padding and the active contribution of the sensor to robot reaction. We present a total of 2250 collision measurements and study the impact force, contact duration, clamping force, and impulse. This collected dataset is publicly available. We summarize our results as follows. For transient collisions, the passive skin properties lowered the impact forces by about 40%. During quasi-static contact, the effect of skin covers – active or passive – cannot be isolated from the collision detection and reaction by the collaborative robots. Important effects of the stop categories triggered by the active protective skin were found. We systematically compare the different settings and compare the empirically established safe velocities with prescriptions by the ISO/TS 15066. In some cases, up to four times the velocity prescribed by ISO/TS 15066 can comply with the impact force limits and thus be considered safe. We propose an extension of the formulas relating impact force and permissible velocity that take into account the stiffness and compressible thickness of the protective cover, leading to better predictions of the collision forces. At the same time, this work emphasizes the need for in situ measurements as all the factors we studied – presence of active/passive skin, safety stop settings, robot collision reaction, impact direction, and, of course, velocity – have effects on the force evolution after impact.
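  • Illustrative sketch:
    The power-and-force-limiting relation from ISO/TS 15066 that the paper extends, in code form; the hand/finger constants are in the spirit of the standard's annex, and the series-spring cover correction at the end is a paraphrase of the idea, not the paper's exact formula.

      import math

      def v_max(F_max, k, m_robot, m_body):
          """Max permissible relative speed for force limit F_max [N],
          contact stiffness k [N/m], and the two effective masses [kg]."""
          mu = 1.0 / (1.0 / m_body + 1.0 / m_robot)  # reduced mass of the pair
          return F_max / math.sqrt(mu * k)

      print(f"bare robot: {v_max(140, 75_000, m_robot=20, m_body=0.6):.2f} m/s")

      def k_effective(k_body, k_cover):
          # A compliant cover acts like a spring in series with the body part,
          # lowering the effective stiffness and raising the permissible speed.
          return 1.0 / (1.0 / k_body + 1.0 / k_cover)

      print(f"with cover: {v_max(140, k_effective(75_000, 10_000), 20, 0.6):.2f} m/s")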

Functional Mode Switching for Safe and Efficient Human-Robot Interaction

  • Authors: Švarný, P., Hamad, M., Kurdas, A., doc. Mgr. Matěj Hoffmann, Ph.D., Abdolshah, S., Haddadin, S.
  • Publication: 2022 IEEE-RAS International Conference on Humanoid Robots (Humanoids). Piscataway, NJ: IEEE, 2022. p. 888-894. ISSN 2164-0580. ISBN 979-8-3503-0979-9.
  • Year: 2022
  • DOI: 10.1109/Humanoids53995.2022.10000070
  • Link: https://doi.org/10.1109/Humanoids53995.2022.10000070
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Various approaches can ascertain the safety and efficiency of physical human-robot collaboration. This paper presents the concept of robot functional mode switching to efficiently ensure human safety during collaborative tasks based on biomechanical pain and injury data and task information. Besides the robot's reflected inertial properties summarizing its impact dynamics, our concept also integrates safe and smooth velocity shaping that respects human partner motion, interaction type, and task knowledge. We further discuss different approaches to safely shape the robot velocity without sacrificing the overall task execution time and motion smoothness. The experimental results showed that our proposed approaches could decrease the jerk level during functional mode switching and limit the impact of safety measures on productivity, especially when guided with additional task knowledge.

Gaze Cueing and the Role of Presence in Human-Robot Interaction

  • Authors: Friebe, K., Samporová, S., Malinovská, K., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: Social Robotics. Springer, Cham, 2022. p. 402-414. Lecture Notes in Computer Science. vol. 13817. ISSN 0302-9743. ISBN 978-3-031-24666-1.
  • Year: 2022
  • DOI: 10.1007/978-3-031-24667-8_36
  • Link: https://doi.org/10.1007/978-3-031-24667-8_36
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Gaze cueing is a fundamental part of social interactions, broadly studied using Posner-task-based gaze cueing paradigms. While studies using human stimuli consistently yield a gaze cueing effect, results from studies using robotic stimuli are inconsistent. Typically, these studies use virtual agents or pictures of robots. As previous research has pointed to the significance of physical presence in human-robot interaction, it is of fundamental importance to understand its yet unexplored role in interactions with gaze cues. This paper investigates whether the physical presence of the iCub humanoid robot affects the strength of the gaze cueing effect in human-robot interaction. We exposed 42 participants to a gaze cueing task. We asked participants to react as quickly and accurately as possible to the appearance of a target stimulus that was either congruently or incongruently cued by the gaze of a copresent iCub robot or a virtual version of the same robot. Analysis of the reaction time measurements showed that participants were consistently affected by their robot interaction partner’s gaze, independently of the way the robot was presented. Additional analyses of participants’ ratings of the robot’s anthropomorphism, animacy, and likeability further add to the impression that presence does not play a significant role in simple gaze-based interactions. Together, our findings open up interesting discussions about the possibility of generalizing results from studies using virtual agents to real-life interactions with copresent robots.

Hey, Robot! An Investigation of Getting Robot’s Attention Through Touch

  • Authors: Lehmann, H., Rojík, A., Friebe, K., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: Social Robotics. Springer, Cham, 2022. p. 388-401. Lecture Notes in Computer Science. vol. 13817. ISSN 0302-9743. ISBN 978-3-031-24666-1.
  • Year: 2022
  • DOI: 10.1007/978-3-031-24667-8_35
  • Link: https://doi.org/10.1007/978-3-031-24667-8_35
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Touch is a key part of interaction and communication between humans, but it has been little explored in human-robot interaction. In this work, participants were asked to approach and touch a humanoid robot on the hand (Nao – 26 participants; Pepper – 28 participants) to get its attention. We designed reaction behaviors for the robot that consisted of four different combinations of arm movements, with the touched hand moving forward or back and the other hand moving forward or staying in place, with simultaneous leaning back, followed by looking at the participant. We studied which reaction of the robot people found the most appropriate and what was the reason for their choice. For both robots, the preferred reaction of the robot hand being touched was moving back. For the other hand, no movement at all was rated most natural for the Pepper, while it was movement forward for the Nao. A correlation between the anxiety subscale of the participants’ personality traits and the passive to active/aggressive nature of the robot reactions was found. Most participants noticed the leaning back and rated it positively. Looking at the participant was commented on positively by some participants in unstructured comments. We also analyzed where and how participants spontaneously touched the robot on the hand. In summary, the touch reaction behaviors designed here are good candidates to be deployed more generally in social robots, possibly including incidental touch in crowded environments. The robot size constitutes one important factor shaping how the robot reaction is perceived.

Human keypoint detection for close proximity human-robot interaction

  • DOI: 10.1109/Humanoids53995.2022.10000133
  • Link: https://doi.org/10.1109/Humanoids53995.2022.10000133
  • Workplace: Visual Recognition Group, Vision for Robots and Autonomous Systems
  • Abstract:
    We study the performance of state-of-the-art human keypoint detectors in the context of close proximity human-robot interaction. The detection in this scenario is specific in that only a subset of body parts such as hands and torso are in the field of view. In particular, (i) we survey existing datasets with human pose annotation from the perspective of close proximity images and prepare and make publicly available a new Human in Close Proximity (HiCP) dataset; (ii) we quantitatively and qualitatively compare state-of-the-art human whole-body 2D keypoint detection methods (OpenPose, MMPose, AlphaPose, Detectron2) on this dataset; (iii) since accurate detection of hands and fingers is critical in applications with handovers, we evaluate the performance of the MediaPipe hand detector; (iv) we deploy the algorithms on a humanoid robot with an RGB-D camera on its head and evaluate the performance in 3D human keypoint detection. A motion capture system is used as reference. The best performing whole-body keypoint detectors in close proximity were MMPose and AlphaPose, but both had difficulty with finger detection. Thus, we propose a combination of MMPose or AlphaPose for the body and MediaPipe for the hands in a single framework providing the most accurate and robust detection. We also analyse the failure modes of individual detectors, for example, to what extent the absence of the head of the person in the image degrades performance. Finally, we demonstrate the framework in a scenario where a humanoid robot interacting with a person uses the detected 3D keypoints for whole-body avoidance maneuvers.

Learning to reach to own body from spontaneous self-touch using a generative model

  • DOI: 10.1109/ICDL53763.2022.9962186
  • Link: https://doi.org/10.1109/ICDL53763.2022.9962186
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    When leaving the aquatic constrained environment of the womb, newborns are thrown into the world with essentially new laws and regularities that govern their interactions with the environment. Here, we study how spontaneous self-contacts can provide material for learning implicit models of the body and its action possibilities in the environment. Specifically, we investigate the space of only somatosensory (tactile and proprioceptive) activations during self-touch configurations in a simple model agent. Using biologically motivated overlapping receptive fields in these modalities, a variational autoencoder (VAE) in a denoising framework is trained on these inputs. The denoising properties of the VAE can be exploited to fill in the missing information. In particular, if tactile stimulation is provided on a single body part, the model provides a configuration that is closer to a previously experienced self-contact configuration. Iterative passes through the VAE reconstructions create a control loop that brings about reaching for stimuli on the body. Furthermore, due to the generative properties of the model, previously unsampled proprioceptive-tactile configurations can also be achieved. In the future, we will seek a closer comparison with empirical data on the kinematics of spontaneous self-touch in infants and the results of reaching for stimuli on the body.
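  • Illustrative sketch:
    A schematic of the control loop described above: a trained denoising VAE repeatedly reconstructs a partially specified somatosensory state, pulling it toward a previously experienced self-touch configuration. The "VAE" here is a trivial attractor stand-in so the snippet is self-contained.

      import numpy as np

      class ToyVAE:
          """Stand-in for the trained denoising VAE (attracts to a memory)."""
          def __init__(self, stored):
              self.stored = stored          # one memorized self-touch state
          def reconstruct(self, state):
              return 0.5 * state + 0.5 * self.stored

      tactile = np.array([1.0, 0.0, 0.0, 0.0])        # stimulus on body part 1
      stored = np.concatenate([np.ones(4), tactile])  # proprio + tactile memory
      vae = ToyVAE(stored)

      # Start: tactile stimulus given, proprioception unspecified (zeros).
      state = np.concatenate([np.zeros(4), tactile])
      for _ in range(10):
          state = vae.reconstruct(state)    # iterative pass fills in posture
          state[4:] = tactile               # clamp the observed tactile input

      print("inferred proprioceptive part:", np.round(state[:4], 3))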

Recognizing object surface material from impact sounds for robot manipulation

  • Authors: Dimiccoli, M., Shubhan Parag Patni, MSc., doc. Mgr. Matěj Hoffmann, Ph.D., Moreno-Noguer, F.
  • Publication: Intelligent Robots and Systems (IROS), 2022 IEEE/RSJ International Conference on. Piscataway: IEEE, 2022. p. 9280-9287. ISSN 2153-0866. ISBN 978-1-6654-7927-1.
  • Year: 2022
  • DOI: 10.1109/IROS47612.2022.9981578
  • Link: https://doi.org/10.1109/IROS47612.2022.9981578
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    We investigated the use of impact sounds generated during exploratory behaviors in a robotic manipulation setup as cues for predicting object surface material and for recognizing individual objects. We collected and make available the YCB-impact sounds dataset which includes over 3,500 impact sounds for the YCB set of everyday objects lying on a table. Impact sounds were generated in three modes: (i) human holding a gripper and hitting, scratching, or dropping the object; (ii) gripper attached to a teleoperated robot hitting the object from the top; (iii) autonomously operated robot hitting the objects from the side with two different speeds. A convolutional neural network (ResNet34) is trained from scratch to recognize the object material (steel, aluminium, hard plastic, soft plastic, other plastic, ceramic, wood, paper/cardboard, foam, glass, rubber) from a single impact sound. On the manually collected dataset with more variability in the action, nearly 60% accuracy for the test set (unseen objects) was achieved. On the robot setup with a stereotypical poking action from the top, an accuracy of 85% was achieved. This performance drops to 79% if multiple exploratory actions are combined. Individual objects from the set of 75 objects can be recognized with a 79% accuracy. This work demonstrates promising results regarding the possibility of using sound for recognition in tasks like single-stream recycling where objects have to be sorted based on their material composition.
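  • Illustrative sketch:
    A minimal PyTorch version of the described setup: an impact sound becomes a log-mel spectrogram "image" classified by a ResNet34 trained from scratch. The class list follows the abstract; the sample rate and mel settings are assumptions, and the waveform below is random noise standing in for a recorded impact.

      import torch
      import torchaudio
      import torchvision

      MATERIALS = ["steel", "aluminium", "hard plastic", "soft plastic",
                   "other plastic", "ceramic", "wood", "paper/cardboard",
                   "foam", "glass", "rubber"]

      melspec = torchaudio.transforms.MelSpectrogram(sample_rate=44_100,
                                                     n_mels=64)

      model = torchvision.models.resnet34(weights=None,
                                          num_classes=len(MATERIALS))
      # ResNet expects 3-channel images; accept 1-channel spectrograms instead.
      model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                    padding=3, bias=False)
      model.eval()

      waveform = torch.randn(1, 44_100)           # 1 s dummy impact sound
      features = torch.log1p(melspec(waveform))   # (1, 64, time) log-mel
      logits = model(features.unsqueeze(0))       # batch dim -> (1, 11)
      print("predicted material:", MATERIALS[logits.argmax().item()])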

Self-touch and other spontaneous behavior patterns in early infancy

  • DOI: 10.1109/ICDL53763.2022.9962203
  • Link: https://doi.org/10.1109/ICDL53763.2022.9962203
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Children are not born tabula rasa. However, interacting with the environment through their body movements in the first months after birth is critical to building the models or representations that are the foundation for everything that follows. We present longitudinal data on spontaneous behavior of three infants observed between about 8 and 25 weeks of age in supine position. We combined manual scoring of video recordings with an automatic extraction of motion data in order to study infants’ behavioral patterns and developmental progression such as: (i) spatial distribution of self-touches on the body, (ii) spatial patterns and regularities of hand movements, (iii) midline crossing, (iv) preferential use of one arm, and (v) dynamic patterns of movements indicative of goal-directedness. From the patterns observed in this pilot data set, we can speculate on the development of first body and peripersonal space representations. Several methods of extracting 3D kinematics from videos have recently been made available by the computer vision community. We applied one of these methods on infant videos and provide guidelines on its possibilities and limitations—a methodological contribution to automating the analysis of infant videos. In the future, we plan to use the patterns we extracted from the recordings as inputs to embodied computational models of learning of body representations in infancy.

3D Collision-Force-Map for Safe Human-Robot Collaboration

  • DOI: 10.1109/ICRA48506.2021.9561845
  • Link: https://doi.org/10.1109/ICRA48506.2021.9561845
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    The need to guarantee safety of collaborative robots limits their performance, in particular, their speed and hence cycle time. The standard ISO/TS 15066 defines the Power and Force Limiting operation mode and prescribes force thresholds that a moving robot is allowed to exert on human body parts during impact, along with a simple formula to obtain maximum allowed speed of the robot in the whole workspace. In this work, we measure the forces exerted by two collaborative manipulators (UR10e and KUKA LBR iiwa) moving downward against an impact measuring device. First, we empirically show that the impact forces can vary by more than 100 percent within the robot workspace. The forces are negatively correlated with the distance from the robot base and the height in the workspace. Second, we present a data-driven model, 3D Collision-Force-Map, predicting impact forces from distance, height, and velocity and demonstrate that it can be trained on a limited number of data points. Third, we analyze the force evolution upon impact and find that clamping never occurs for the UR10e. We show that formulas relating robot mass, velocity, and impact forces from ISO/TS 15066 are insufficient—leading both to significant underestimation and overestimation and thus to unnecessarily long cycle times or even dangerous applications. We propose an empirical method that can be deployed to quickly determine the optimal speed and position where a task can be safely performed with maximum efficiency.
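  • Illustrative sketch:
    The data-driven idea in miniature: fit impact force as a function of (distance from base, height, velocity) and query the model for the largest velocity whose predicted force stays under the limit. The training data below is synthetic, merely echoing the reported trends.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)
      X = rng.uniform([0.3, 0.1, 0.1], [1.0, 0.8, 2.0], size=(40, 3))  # d, h, v
      # Fake measurements: force grows with velocity, decreases with distance
      # from the base and with height (as reported in the abstract).
      y = 400 * X[:, 2] - 150 * X[:, 0] - 100 * X[:, 1] + rng.normal(0, 10, 40)

      model = LinearRegression().fit(X, y)

      d, h = 0.6, 0.4                  # pose where the task takes place
      for v in np.arange(2.0, 0.0, -0.05):
          if model.predict([[d, h, v]])[0] <= 140:   # 140 N force limit
              print(f"max safe velocity at (d={d}, h={h}): {v:.2f} m/s")
              break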

Body models in humans, animals, and robots: mechanisms and plasticity

  • Authors: doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: Body Schema and Body Image: New Directions. Oxford: Oxford University Press, 2021. p. 152-180. ISBN 9780198851721.
  • Year: 2021
  • DOI: 10.1093/oso/9780198851721.003.0010
  • Link: https://doi.org/10.1093/oso/9780198851721.003.0010
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth or failures, or using tools. The key foundation is an internal representation of the body that the agent—human, animal, or robot—has developed. In the biological realm, evidence has been accumulating in diverse disciplines, giving rise to the concepts of body image, body schema, and others. In robotics, a model of the robot is an indispensable component that enables control of the machine. This chapter compares the character of body representations in biology with their robotic counterparts and relates that to the differences in performance observed. Conclusions are drawn about how robots can inform the biological sciences dealing with body representations and which of the features of the ‘body in the brain’ should be transferred to robots, giving rise to more adaptive and resilient self-calibrating machines.

Embodied Reasoning for Discovering Object Properties via Manipulation

  • Authors: Behrens, J., Nazarczuk, M., Štěpánová, K., doc. Mgr. Matěj Hoffmann, Ph.D., Demiris, Y., Mikolajczyk, K.
  • Publication: IEEE International Conference on Robotics and Automation (ICRA). IEEE Xplore, 2021. p. 10139-10145. ISSN 2577-087X. ISBN 978-1-7281-9077-8.
  • Year: 2021
  • DOI: 10.1109/ICRA48506.2021.9561212
  • Link: https://doi.org/10.1109/ICRA48506.2021.9561212
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    In this paper, we present an integrated system that includes reasoning from visual and natural language inputs, action and motion planning, executing tasks by a robotic arm, manipulating objects, and discovering their properties. A vision-to-action module recognises the scene with objects and their attributes and analyses enquiries formulated in natural language. It performs multi-modal reasoning and generates a sequence of simple actions that can be executed by a robot. The scene model and action sequence are sent to a planning and execution module that generates a motion plan with collision avoidance, simulates the actions, and executes them. We use synthetic data to train various components of the system and test on a real robot to show the generalization capabilities. We focus on a tabletop scenario with objects that can be grasped by our embodied agent, i.e., a 7-DoF manipulator with a two-finger gripper. We evaluate the agent on 60 representative queries repeated 3 times (e.g., ’Check what is on the other side of the soda can’) concerning different objects and tasks in the scene. We perform experiments in a simulated and real environment and report the success rate for various components of the system. Our system achieves up to 80.6% success rate on challenging scenes and queries. We also analyse and discuss the challenges that such an intelligent embodied system faces.

Multisensorial robot calibration framework and toolbox

  • Authors: Rozlivek, J., Ing. Lukáš Rustler, Štěpánová, K., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids). Piscataway: IEEE, 2021. p. 459-466. ISSN 2164-0580. ISBN 978-1-7281-9372-4.
  • Year: 2021
  • DOI: 10.1109/HUMANOIDS47582.2021.9555803
  • Link: https://doi.org/10.1109/HUMANOIDS47582.2021.9555803
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    The accuracy of robot models critically impacts their performance. With the advent of collaborative, social, or soft robots, the stiffness of the materials and the precision of the manufactured parts drop, and CAD models provide a less accurate basis for the models. On the other hand, the machines often come with a rich set of powerful yet inexpensive sensors, which opens up the possibility for self-contained calibration approaches that can be performed autonomously and repeatedly by the robot. In this work, we extend the theory dealing with robot kinematic calibration by incorporating new sensory modalities (e.g., cameras on the robot, whole-body tactile sensors), calibration types, and their combinations. First, we provide a unified formulation that makes it possible to combine traditional approaches (external laser tracker, constraints from contact with the external environment) with self-contained calibration available to humanoid robots (self-observation, self-contact) in a single framework and single cost function. Second, we present an open-source toolbox for Matlab that provides this functionality, along with additional tools for preprocessing (e.g., dataset visualization) and evaluation (e.g., observability/identifiability). We illustrate some of the possibilities of this tool through calibration of two humanoid robots (iCub, Nao) and one industrial manipulator (dual-arm setup with Yaskawa-Motoman MA1400).

Robot in the Mirror: Toward an Embodied Computational Model of Mirror Self-Recognition

  • Authors: doc. Mgr. Matěj Hoffmann, Ph.D., Wang, S., Outrata, V., Alzueta, E., Lanillos, P.
  • Publication: KI - Künstliche Intelligenz, German Journal on Artificial Intelligence. 2021, 35(1), 37-51. ISSN 0933-1875.
  • Year: 2021
  • DOI: 10.1007/s13218-020-00701-7
  • Link: https://doi.org/10.1007/s13218-020-00701-7
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Self-recognition or self-awareness is a capacity attributed typically only to humans and few other species. The definitions of these concepts vary and little is known about the mechanisms behind them. However, there is a Turing test-like benchmark: the mirror self-recognition, which consists of covertly putting a mark on the face of the tested subject, placing her in front of a mirror, and observing the reactions. In this work, first, we provide a mechanistic decomposition, or process model, of what components are required to pass this test. Based on these, we provide suggestions for empirical research. In particular, in our view, the way the infants or animals reach for the mark should be studied in detail. Second, we develop a model to enable the humanoid robot Nao to pass the test. The core of our technical contribution is learning the appearance representation and visual novelty detection by means of learning the generative model of the face with deep auto-encoders and exploiting the prediction error. The mark is identified as a salient region on the face and a reaching action is triggered, relying on a previously learned mapping to arm joint angles. The architecture is tested on two robots with completely different faces.
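  • Illustrative sketch:
    A toy version of the novelty-detection step: an autoencoder trained on the robot's own face reconstructs a marked face poorly, so the per-pixel prediction error highlights the mark. The "autoencoder" is a stored template here, keeping the example self-contained.

      import numpy as np

      rng = np.random.default_rng(4)
      face = rng.uniform(0.4, 0.6, size=(32, 32))   # learned facial appearance
      template = face.copy()                        # what the AE can generate

      observed = face.copy()
      observed[10:14, 18:22] += 0.4                 # covertly applied mark

      error = np.abs(observed - template)           # prediction error map
      ys, xs = np.where(error > 0.2)                # salient (novel) region
      target = (int(ys.mean()), int(xs.mean()))
      print("mark detected around pixel:", target)  # -> trigger reaching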

Spatial calibration of whole-body artificial skin on a humanoid robot: comparing self-contact, 3D reconstruction, and CAD-based calibration

  • Authors: Ing. Lukáš Rustler, Potočná, B., Polic, M., Štěpánová, K., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: 2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids). Piscataway: IEEE, 2021. p. 445-452. ISSN 2164-0580. ISBN 978-1-7281-9372-4.
  • Year: 2021
  • DOI: 10.1109/HUMANOIDS47582.2021.9555806
  • Link: https://doi.org/10.1109/HUMANOIDS47582.2021.9555806
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    Robots were largely missing the sense of touch for decades. As artificial sensitive skins covering large areas of robot bodies are starting to appear, the positions of the sensors on the robot body need to be known for the skins to be useful to the machines. In this work, a Nao humanoid robot was retrofitted with pressure-sensitive skin on the head, torso, and arms. We experimentally compare the accuracy and effort associated with the following skin spatial calibration approaches and their combinations: (i) combining CAD models and skin layout in 2D, (ii) 3D reconstruction from images, (iii) using robot kinematics to calibrate the skin by self-contact. To acquire 3D positions of taxels on individual skin parts, methods (i) and (ii) were similarly laborious, but 3D reconstruction was more accurate. To align these 3D point clouds with the robot kinematics, two variants of self-contact were employed: skin-on-skin and utilization of a custom end effector (finger). In combination with the 3D reconstruction data, mean calibration errors below the radius of individual sensors were achieved (2 mm). Significant perturbation of more than 100 torso taxel positions could be corrected using self-contact calibration, reaching approx. 3 mm mean error. This work is not a proof of concept but a deployment of the approaches at scale: the outcome is an actual spatial calibration of all 970 taxels on the robot body. As the different calibration approaches are evaluated in isolation as well as in different combinations, this work provides a guideline applicable to the spatial calibration of different sensor arrays.

Active exploration for body model learning through self-touch on a humanoid robot with artificial skin

  • Authors: Ing. Filipe Gama, Shcherban, M., Rolf, M., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: Development and Learning and Epigenetic Robotics (ICDL-EpiRob), 2020 Joint IEEE 10th International Conference on. Piscataway: IEEE, 2020. ISSN 2161-9484. ISBN 978-1-7281-7306-1.
  • Year: 2020
  • DOI: 10.1109/ICDL-EpiRob48136.2020.9278035
  • Link: https://doi.org/10.1109/ICDL-EpiRob48136.2020.9278035
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    The mechanisms of infant development are far from understood. Learning about one's own body is likely a foundation for subsequent development. Here we look specifically at the problem of how spontaneous touches to the body in early infancy may give rise to first body models and bootstrap further development such as reaching competence. Unlike visually elicited reaching, reaching to one's own body requires connections of the tactile and motor space only, bypassing vision. Still, the problems of high dimensionality and redundancy of the motor system persist. In this work, we present an embodied computational model on a simulated humanoid robot with artificial sensitive skin on large areas of its body. The robot should autonomously develop the capacity to reach for every tactile sensor on its body. To do this efficiently, we employ the computational framework of intrinsic motivations and variants of goal babbling (as opposed to motor babbling) that prove to make the exploration process faster and alleviate the ill-posedness of learning inverse kinematics. Based on our results, we discuss the next steps in relation to infant studies: what information will be necessary to further ground this computational model in behavioral data.

Touching a Human or a Robot? Investigating Human-likeness of a Soft Warm Artificial Hand

  • Authors: Ueno, A., Hlaváč, V., Mizuuchi, I., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publication: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). Piscataway: IEEE, 2020. p. 14-20. ISSN 1944-9437. ISBN 978-1-7281-6075-7.
  • Year: 2020
  • DOI: 10.1109/RO-MAN47096.2020.9223523
  • Link: https://doi.org/10.1109/RO-MAN47096.2020.9223523
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    With the advent of different electronic skins sensitive to touch and robots composed of soft materials, tactile or haptic human-robot interaction is gaining importance. We designed a highly realistic artificial hand aiming to reproduce human-to-human physical contact through a special morphology imitating flesh and bones and a heating system imitating human body temperature. The mechanical response properties of different finger designs were analyzed, and the most mimetic one came very close to a human finger. We designed three experiments with participants using haptic exploration to evaluate the human-likeness of: (1) finger morphologies; (2) complete hands: a real human hand vs. the soft and warm artificial hand vs. a rubber hand; (3) the hand mounted on a manipulator with a fixed vs. passive compliant wrist in a handshake scenario. First, participants find the mimetic finger morphology most humanlike. Second, people can reliably distinguish the real human hand, the artificial one, and a rubber hand. In terms of human-likeness (Anthropomorphism, Animacy, and Likeability), the human hand scores better than the artificial hand, which in turn clearly outperforms the rubber hand. The temperature, or "warmth", was rated as the most human-like feature of the artificial hand.

Collision Preventing Phase-Progress Control for Velocity Adaptation in Human-Robot Collaboration

  • Authors: Zardykhan, D., Švarný, P., doc. Mgr. Matěj Hoffmann, Ph.D., Shahriari, E., Haddadin, S.
  • Publication: 2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids). Piscataway, NJ: IEEE, 2019. p. 266-273. ISSN 2164-0580. ISBN 978-1-5386-7630-1.
  • Year: 2019
  • DOI: 10.1109/Humanoids43949.2019.9035065
  • Link: https://doi.org/10.1109/Humanoids43949.2019.9035065
  • Workplace: Vision for Robots and Autonomous Systems
  • Abstract:
    As robots are leaving dedicated areas on the factory floor and start to share workspaces with humans, safety of such collaboration becomes a major challenge. In this work, we propose new approaches to robot velocity modulation: while the robot is on a path prescribed by the task, it predicts possible collisions with the human and gradually slows down, proportionally to the danger of collision. Two principal approaches are developed—Impulse Orb and Prognosis Window—that dynamically determine the possible robot-induced collisions and apply a novel velocity modulating approach, in which the phase progress of the robot trajectory is modulated while the desired robot path remains intact. The methods guarantee that the robot will halt before contacting the human, but they are less conservative and more flexible than solutions using reduced speed and complete stop only, thereby increasing the effectiveness of human-robot collaboration. This approach is especially useful in constrained setups where the robot path is prescribed. Speed modulation is smooth and does not lead to abrupt motions, making the behavior of the robot also better understandable for the human counterpart. The two principal methods under different parameter settings are experimentally validated in a human-robot interaction scenario with the Franka Emika Panda robot, an external RGB-D camera, and human keypoint detection using OpenPose.
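  • Illustrative sketch:
    The phase-progress idea in miniature: the robot keeps its geometric path but scales the speed of progress along it by a collision-danger term, halting before contact. The distance-based danger model is purely illustrative.

      import numpy as np

      def danger(human_dist, d_stop=0.3, d_free=1.2):
          """0 when the human is far, 1 below the stopping margin."""
          return float(np.clip((d_free - human_dist) / (d_free - d_stop), 0, 1))

      phase, v_nominal, dt = 0.0, 0.2, 0.05   # phase in [0, 1] along the path
      human_dist = 1.5                        # from perception in reality

      while phase < 1.0:
          human_dist = max(0.25, human_dist - 0.01)  # human slowly approaches
          phase_rate = v_nominal * (1 - danger(human_dist))
          phase = min(1.0, phase + phase_rate * dt)
          if phase_rate == 0:                 # guaranteed halt before contact
              print(f"halted at phase {phase:.2f}, human at {human_dist:.2f} m")
              break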

Development of Infant Reaching Strategies to Tactile Targets on the Face

  • DOI: 10.3389/fpsyg.2019.00009
  • Odkaz: https://doi.org/10.3389/fpsyg.2019.00009
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Infant development of reaching to tactile targets on the skin has been studied little, despite its daily use during adaptive behaviors such as removing foreign stimuli or scratching an itch. We longitudinally examined the development of infant reaching strategies (from just under 2 to 11 months) approximately every other week with a vibrotactile stimulus applied to eight different locations on the face (left/right/center temple, left/right ear, left/right mouth corners, and chin). Successful reaching for the stimulus uses tactile input and proprioception to localize the target and move the hand to it. We studied the developmental progression of reaching and grasping strategies. As infants became older the likelihood of using the hand to reach to the target - versus touching the target with another body part or surface such as the upper arm or chair - increased. For trials where infants reached to the target with the hand, infants also refined their hand postures with age. As infants became older, they made fewer contacts with a closed fist or the dorsal part of the hand and more touches/grasps with the fingers or palm. Results suggest that during the first year infants become able to act more precisely on tactile targets on the face.

Jak číst standard(y) a něco si z toho vzít [How to read standard(s) and take something away from them]

  • Autoři: Švarný, P., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: Kognícia a umelý život 2019. Univerzita Komenského v Bratislave, 2019. p. 116-118. ISBN 978-80-223-4720-4.
  • Rok: 2019
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Industrial sectors are typically accompanied by a number of norms or standards that make it possible to regulate the given sector. Although a standard is supposed to serve as a guide for practitioners in the field, it is itself often written in the language of lawyers rather than that of its users. For this reason, and also because of their high price, standards may resemble closely guarded secrets rather than an attempt to communicate content clearly. We ourselves work with the standard for collaborative robotics, ISO/TS 15066, with which we had to become thoroughly acquainted for our work. In this contribution, we present basic facts about standards and advice on how to put them into practice. We work with the standard for collaborative robots, but the advice applies generally, not only to standards issued by ISO or concerning robots (e.g., for psychologists, ISO 10667-1).

Learning a peripersonal space representation using Conditional Restricted Boltzmann Machine

  • Autoři: Straka, Z., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: Kognícia a umelý život 2019. Univerzita Komenského v Bratislave, 2019. p. 104-105. ISBN 978-80-223-4720-4.
  • Rok: 2019
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    We present a neural network learning architecture composed of a Restricted Boltzmann Machine (RBM) and a Conditional RBM (CRBM) that performs multisensory integration and prediction, motivated by the problem of learning a representation of defensive peripersonal space. This work follows up on our previous work (Straka and Hoffmann 2017) where we proposed a network composed of an RBM and a feedforward neural network (FFNN). In this work, with a similar 2D simulated scenario, we sought to replace the FFNN with an RBM-like module and opted for the CRBM, which is responsible for making a temporal prediction. We demonstrate that the new architecture is capable of learning to map from visual and tactile inputs at a previous time step (without tactile activation) to future activations with the visual stimulus at the “skin” and corresponding tactile activation, including the confidence of the predictions.
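
    A sketch of a binary CRBM trained with one-step contrastive divergence on a toy 1-D version of the task (a stimulus that moves by one taxel per frame); the layer sizes, learning rate, and mean-field prediction loop are illustrative choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(0)
      sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

      n_vis, n_hid, n_cond = 20, 16, 20        # current frame, hidden, history frame
      W = rng.normal(0, 0.01, (n_vis, n_hid))  # visible-hidden weights
      A = rng.normal(0, 0.01, (n_cond, n_vis)) # history -> visible dynamic bias
      B = rng.normal(0, 0.01, (n_cond, n_hid)) # history -> hidden dynamic bias
      b, c = np.zeros(n_vis), np.zeros(n_hid)

      def cd1_step(v, u, lr=0.05):
          """One contrastive-divergence update of the CRBM:
          v = current frame, u = previous frame (the condition)."""
          global W, A, B, b, c
          bv, bh = b + u @ A, c + u @ B              # condition-dependent biases
          h0 = sigmoid(v @ W + bh)
          h_s = (rng.random(h0.shape) < h0).astype(float)
          v1 = sigmoid(h_s @ W.T + bv)               # reconstruction
          h1 = sigmoid(v1 @ W + bh)
          W += lr * (np.outer(v, h0) - np.outer(v1, h1))
          A += lr * np.outer(u, v - v1)
          B += lr * np.outer(u, h0 - h1)
          b += lr * (v - v1)
          c += lr * (h0 - h1)

      def predict(u):
          """Temporal prediction: infer the next frame from the history alone
          by a few mean-field updates from an uninformative start."""
          v = np.full(n_vis, 0.5)
          for _ in range(10):
              h = sigmoid(v @ W + c + u @ B)
              v = sigmoid(h @ W.T + b + u @ A)
          return v

      # Toy data: a one-hot "stimulus" moving one taxel per frame.
      for _ in range(3000):
          pos = rng.integers(0, n_vis - 1)
          u = np.zeros(n_vis); u[pos] = 1.0          # previous frame
          v = np.zeros(n_vis); v[pos + 1] = 1.0      # next frame: moved by one
          cd1_step(v, u)

      u = np.zeros(n_vis); u[5] = 1.0
      print(predict(u).argmax())                     # should print 6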

Reaching development through visuo-proprioceptive-tactile integration on a humanoid robot - A deep learning approach

  • Autoři: Nguyen, P.D.H., doc. Mgr. Matěj Hoffmann, Ph.D., Pattacini, U., Metta, G.
  • Publikace: Proceedings of the 2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). Anchorage, Alaska: IEEE, 2019. p. 163-170. ISSN 2161-9484. ISBN 978-1-5386-8128-2.
  • Rok: 2019
  • DOI: 10.1109/DEVLRN.2019.8850681
  • Odkaz: https://doi.org/10.1109/DEVLRN.2019.8850681
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    The development of reaching in infants has been studied for nearly nine decades. Originally, it was thought that early reaching is visually guided, but more recent evidence is suggestive of 'visually elicited' reaching, i.e., the infant gazes at the object rather than at its hand during the reaching movement. The importance of haptic feedback has also been emphasized. Inspired by these findings, in this work we use the simulated iCub humanoid robot to construct a model of reaching development. The robot is presented with different objects, gazes at them, and performs motor babbling with one of its arms. Successful contacts with the object are detected through tactile sensors on the hand and forearm. Such events serve as the training set, constituted by images from the robot's two eyes, head joints, tactile activation, and arm joints. A deep neural network is trained with images and head joints as inputs and arm configuration and touch as output. After learning, the network can successfully infer arm configurations that would result in a successful reach, together with prediction of tactile activation (i.e., which body part would make contact). Our main contribution is twofold: (i) our pipeline is end-to-end from stereo images and head joints (6 DoF) to arm-torso configurations (10 DoF) and tactile activations, without any preprocessing, explicit coordinate transformations, etc.; (ii) unique to this approach, reaches with multiple effectors corresponding to different regions of the sensitive skin are possible.
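
    A sketch of the kind of end-to-end network described above, written in PyTorch with made-up layer sizes and an assumed number of skin regions; it illustrates the input/output structure (stereo images plus head joints in, arm-torso configuration plus tactile activation out), not the paper's actual architecture or training setup.

      import torch
      import torch.nn as nn

      class ReachNet(nn.Module):
          """Stereo images and head joints -> arm-torso configuration plus
          a per-region tactile activation (which body part makes contact)."""
          def __init__(self, n_head=6, n_arm=10, n_skin=8):
              super().__init__()
              self.encoder = nn.Sequential(                  # shared conv encoder;
                  nn.Conv2d(6, 16, 5, stride=2), nn.ReLU(),  # 6 ch = stereo RGB pair
                  nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4), nn.Flatten())     # -> 32*4*4 = 512
              self.head = nn.Sequential(
                  nn.Linear(512 + n_head, 256), nn.ReLU(),
                  nn.Linear(256, n_arm + n_skin))
              self.n_arm = n_arm

          def forward(self, stereo, head_joints):
              z = self.encoder(stereo)
              out = self.head(torch.cat([z, head_joints], dim=1))
              arm = out[:, :self.n_arm]                   # joint configuration
              touch = torch.sigmoid(out[:, self.n_arm:])  # contact probabilities
              return arm, touch

      net = ReachNet()
      arm, touch = net(torch.randn(2, 6, 120, 160), torch.randn(2, 6))
      print(arm.shape, touch.shape)                       # (2, 10), (2, 8)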

Reaching with one arm to the other: Coordinating touch, proprioception, and action during infancy

  • Autoři: Chinn, L.K., doc. Mgr. Matěj Hoffmann, Ph.D., Leed, J.E., Lockman, J.J.
  • Publikace: Journal of Experimental Child Psychology. 2019, 183 19-32. ISSN 0022-0965.
  • Rok: 2019
  • DOI: 10.1016/j.jecp.2019.01.014
  • Odkaz: https://doi.org/10.1016/j.jecp.2019.01.014
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Reaching to target locations on the body has been studied little despite its importance for adaptive behaviors such as feeding, grooming, and indicating a source of discomfort. This behavior requires multisensory integration given that it involves coordination of touch, proprioception, and sometimes vision as well as action. Here we examined the origins of this skill by investigating how infants begin to localize targets on the body and the motor strategies by which they do so. Infants (7-21 months of age) were prompted to reach to a vibrating target placed at five arm/hand locations (elbow, crook of elbow, forearm, palm, and top of hand) one by one. To manually localize the target, infants needed to reach with one arm to the other. Results suggest that coordination increases with age in the strategies that infants used to localize body targets. Most infants showed bimanual coordination and usually moved the target arm toward the reaching arm to assist reaching. Furthermore, intersensory coordination increased with age. Simultaneous movements of the two arms increased with age, as did coordination between vision and reaching. The results provide new information about the development of multisensory integration during tactile localization and how such integration is linked to action.

Robot Self-Calibration Using Multiple Kinematic Chains-A Simulation Study on the iCub Humanoid Robot

  • Autoři: Štěpánová, K., Pajdla, T., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: IEEE Robotics and Automation Letters. 2019, 4(2), 1900-1907. ISSN 2377-3766.
  • Rok: 2019
  • DOI: 10.1109/LRA.2019.2898320
  • Odkaz: https://doi.org/10.1109/LRA.2019.2898320
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Mechanism calibration is an important and nontrivial task in robotics. Advances in sensor technology make affordable but increasingly accurate devices such as cameras and tactile sensors available, making it possible to perform automated self-contained calibration relying on redundant information in these sensory streams. In this letter, we use a simulated iCub humanoid robot with a stereo camera system and end-effector contact emulation to quantitatively compare the performance of kinematic calibration by employing different combinations of intersecting kinematic chains, either through self-observation or self-touch. The parameters varied were as follows: (1) type and number of intersecting kinematic chains used for calibration; (2) parameters and chains subject to optimization; (3) amount of initial perturbation of kinematic parameters; (4) number of poses/configurations used for optimization; and (5) amount of measurement noise in end-effector positions/cameras. The main findings are as follows: 1) calibrating parameters of a single chain (e.g., one arm) by employing multiple kinematic chains ("self-observation" and "self-touch") is superior in terms of optimization results as well as observability; 2) when using multichain calibration, fewer poses suffice to get similar performance compared to when, for example, only observation from a single camera is used; 3) parameters of all chains (here 86 DH parameters) can be subject to calibration simultaneously and with 50 (100) poses, end-effector error of around 2 (1) mm can be achieved; and 4) adding noise to a sensory modality degrades performance of all calibrations employing the chains relying on this information.
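
    The loop-closure idea behind multi-chain calibration can be sketched with a toy planar self-touch setup (two 2-link arms standing in for full DH chains, noiseless contacts): the residual is the mismatch between the two chains' forward kinematics at each recorded self-touch pose, minimized with scipy. All numbers are invented.

      import numpy as np
      from scipy.optimize import least_squares

      def fk(q, l, base):
          """Planar 2-link forward kinematics (stand-in for a DH chain)."""
          a1, a2 = q[0], q[0] + q[1]
          return base + np.array([l[0]*np.cos(a1) + l[1]*np.cos(a2),
                                  l[0]*np.sin(a1) + l[1]*np.sin(a2)])

      def ik(p, l, base):
          """Closed-form planar 2-link inverse kinematics (elbow-down)."""
          d = p - base
          c2 = (d @ d - l[0]**2 - l[1]**2) / (2 * l[0] * l[1])
          q2 = np.arccos(np.clip(c2, -1.0, 1.0))
          q1 = np.arctan2(d[1], d[0]) - np.arctan2(l[1]*np.sin(q2),
                                                   l[0] + l[1]*np.cos(q2))
          return np.array([q1, q2])

      rng = np.random.default_rng(1)
      true = np.array([0.30, 0.25, 0.28, 0.27])   # left and right link lengths
      bL, bR = np.array([-0.1, 0.0]), np.array([0.1, 0.0])

      poses = []                                  # recorded self-touch poses
      while len(poses) < 40:
          qL = rng.uniform(0.3, 2.5, 2)
          p = fk(qL, true[:2], bL)                # contact point (never measured)
          if 0.05 < np.linalg.norm(p - bR) < 0.52:    # right arm can reach it
              poses.append((qL, ik(p, true[2:], bR)))

      def closure(theta):
          # Self-touch loop closure: both chains' FK must coincide at contact.
          return np.concatenate([fk(qL, theta[:2], bL) - fk(qR, theta[2:], bR)
                                 for qL, qR in poses])

      sol = least_squares(closure, true + rng.normal(0, 0.03, 4))
      print("estimated link lengths:", np.round(sol.x, 3))  # ~ true values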

Safe physical HRI: Toward a unified treatment of speed and separation monitoring together with power and force limiting

  • Autoři: Švarný, P., Tesař, M., Behrens, J., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Piscataway, NJ: IEEE, 2019. p. 7580-7587. ISSN 2153-0866. ISBN 978-1-7281-4004-9.
  • Rok: 2019
  • DOI: 10.1109/IROS40897.2019.8968463
  • Odkaz: https://doi.org/10.1109/IROS40897.2019.8968463
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    So-called collaborative robots are a current trend in industrial robotics. However, they still face many problems in practical application, such as the reduced speed required to render them collaborative. The standards prescribe two regimes: (i) speed and separation monitoring and (ii) power and force limiting, where the former requires reliable estimation of distances between the robot and human body parts and the latter imposes constraints on the energy absorbed during collisions prior to robot stopping. Following the standards, we deploy the two collaborative regimes in a single application and study the performance in a mock collaborative task under the individual regimes, including transitions between them. Additionally, we compare the performance under "safety zone monitoring" with keypoint pair-wise separation distance assessment relying on an RGB-D sensor and skeleton extraction algorithm to track human body parts in the workspace. The best performance was achieved in the following setting: the robot operates at full speed until a distance threshold between any robot and human body part is crossed; then, reduced robot speed compliant with power and force limiting is triggered. The robot is halted only when the operator's head crosses a predefined distance from selected robot parts. We demonstrate our methodology on a setup combining a KUKA LBR iiwa robot, an Intel RealSense RGB-D sensor, and OpenPose for human pose estimation.
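
    A hedged sketch of the regime-switching logic summarized above; the thresholds and speeds are invented placeholders, whereas a real deployment would derive the protective separation distance from the standard's formula (robot and human speeds, stopping time, etc.).

      import numpy as np

      D_SLOW, D_STOP_HEAD = 1.0, 0.4     # hypothetical thresholds (metres)

      def min_pairwise(robot_kp, human_kp):
          """Minimum distance over all robot/human keypoint pairs."""
          diff = robot_kp[:, None, :] - human_kp[None, :, :]
          return np.linalg.norm(diff, axis=-1).min()

      def speed_command(robot_kp, human_kp, head, v_full=1.0, v_pfl=0.2):
          """Full speed far away; PFL-compliant reduced speed once any
          pair crosses D_SLOW; halt only if the head gets too close."""
          if np.linalg.norm(robot_kp - head, axis=-1).min() < D_STOP_HEAD:
              return 0.0                 # protective stop
          if min_pairwise(robot_kp, human_kp) < D_SLOW:
              return v_pfl               # power-and-force-limiting regime
          return v_full                  # far zone: full speed

      robot = np.array([[0.5, 0.0, 1.0], [0.8, 0.1, 1.2]])
      human = np.array([[2.0, 0.0, 1.0], [2.1, 0.0, 1.7]])   # hand, head
      print(speed_command(robot, human, head=human[1]))      # -> 1.0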

Symbol Emergence in Cognitive Developmental Systems: A Survey

  • Autoři: Taniguchi, T., Ugur, E., doc. Mgr. Matěj Hoffmann, Ph.D., Piater, J., Wörgötter, F.
  • Publikace: IEEE Transactions on Cognitive and Developmental Systems. 2019, 11(4), 494-516. ISSN 2379-8920.
  • Rok: 2019
  • DOI: 10.1109/TCDS.2018.2867772
  • Odkaz: https://doi.org/10.1109/TCDS.2018.2867772
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Humans use signs, e.g., sentences in a spoken language, for communication and thought. Hence, symbol systems like language are crucial for our communication with other agents and adaptation to our real-world environment. The symbol systems we use in our human society adaptively and dynamically change over time. In the context of artificial intelligence (AI) and cognitive systems, the symbol grounding problem has been regarded as one of the central problems related to symbols. However, the symbol grounding problem was originally posed to connect symbolic AI and sensorimotor information and did not consider many interdisciplinary phenomena in human communication and dynamic symbol systems in our society, which semiotics considered. In this paper, we focus on the symbol emergence problem, addressing not only cognitive dynamics but also the dynamics of symbol systems in society, rather than the symbol grounding problem. We first introduce the notion of a symbol in semiotics from the humanities, to leave behind the very narrow idea of symbols in symbolic AI. Furthermore, over the years, it became more and more clear that symbol emergence has to be regarded as a multifaceted problem. Therefore, second, we review the history of the symbol emergence problem in different fields, including both biological and artificial systems, showing their mutual relations. We summarize the discussion and provide an integrative viewpoint and comprehensive overview of symbol emergence in cognitive systems. Additionally, we describe the challenges facing the creation of cognitive systems that can be part of symbol emergence systems.

Compact Real-time Avoidance on a Humanoid Robot for Human-robot Interaction

  • Autoři: Nguyen, D.H.P., doc. Mgr. Matěj Hoffmann, Ph.D., Roncone, A., Pattacini, U., Metta, G.
  • Publikace: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. USA: IEEE Computer Society, 2018. p. 416-424. ISSN 2167-2148. ISBN 978-1-4503-4953-6.
  • Rok: 2018
  • DOI: 10.1145/3171221.3171245
  • Odkaz: https://doi.org/10.1145/3171221.3171245
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    With robots leaving factories and entering less controlled domains, possibly sharing the space with humans, safety is paramount and multimodal awareness of the body surface and the surrounding environment is fundamental. Taking inspiration from peripersonal space representations in humans, we present a framework on a humanoid robot that dynamically maintains such a protective safety zone, composed of the following main components: (i) a human 2D keypoints estimation pipeline employing a deep learning based algorithm, extended here into 3D using disparity; (ii) a distributed peripersonal space representation around the robot's body parts; (iii) a reaching controller that incorporates all obstacles entering the robot's safety zone on the fly into the task. Pilot experiments demonstrate that an effective safety margin between the robot's and the human's body parts is kept. The proposed solution is flexible and versatile, since the safety zone around individual robot and human body parts can be selectively modulated; here we demonstrate stronger avoidance of the human head compared to the rest of the body. Our system works in real time and is self-contained, with no external sensory equipment and use of onboard cameras only.
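
    Component (i) lifts 2D keypoints into 3D using disparity; a minimal pinhole-stereo back-projection sketch with made-up intrinsics and baseline (not the iCub's calibration):

      def lift_keypoint(u, v, disparity, fx=600.0, cx=320.0, cy=240.0, baseline=0.068):
          """Back-project a 2D keypoint to 3D from stereo disparity."""
          Z = fx * baseline / disparity      # depth from disparity
          X = (u - cx) * Z / fx
          Y = (v - cy) * Z / fx              # assumes fx == fy for brevity
          return X, Y, Z

      # A keypoint detected at pixel (400, 250) with 20 px disparity:
      print(lift_keypoint(400, 250, 20.0))   # -> (~0.27, ~0.03, 2.04)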

DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

  • Autoři: Moulin-Frier, C., Fischer, T., Petit, M., Pointeau, G., doc. Mgr. Matěj Hoffmann, Ph.D.,
  • Publikace: IEEE Transactions on Cognitive and Developmental Systems. 2018, 10(4), 1005-1022. ISSN 2379-8920.
  • Rok: 2018
  • DOI: 10.1109/TCDS.2017.2754143
  • Odkaz: https://doi.org/10.1109/TCDS.2017.2754143
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    This paper introduces a cognitive architecture for a humanoid robot to engage in a proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both human and robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.

Merging physical and social interaction for effective human-robot collaboration

  • Autoři: Nguyen, P.D.H., Bottarel, F., Pattacini, U., doc. Mgr. Matěj Hoffmann, Ph.D., Natale, L., Metta, G.
  • Publikace: 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids). Piscataway, NJ: IEEE, 2018. p. 710-717. ISSN 2164-0580. ISBN 978-1-5386-7283-9.
  • Rok: 2018
  • DOI: 10.1109/HUMANOIDS.2018.8625030
  • Odkaz: https://doi.org/10.1109/HUMANOIDS.2018.8625030
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    For robots to share the environment and cooperate with humans without barriers, we need to guarantee safety for the operator and, simultaneously, maximize the robot's usability. Safety is typically guaranteed by controlling the robot movements while, possibly, taking into account physical contacts with the operator, objects or tools. Where possible, the safety of the robot must be guaranteed as well. No less importantly, as the complexity of robots and their skills increases, usability becomes a concern. Social interaction technologies can save the day by enabling natural human-robot collaboration. In this paper we show a possible integration of physical and social Human-Robot Interaction methods (pHRI and sHRI, respectively). Our reference task is object hand-over. We test both the case of the robot initiating the action and, vice versa, of the robot receiving an object from the operator. Finally, we discuss possible extensions with higher-level planning systems for added flexibility and reasoning skills.

Robotic homunculus: Learning of artificial skin representation in a humanoid robot motivated by primary somatosensory cortex

  • Autoři: doc. Mgr. Matěj Hoffmann, Ph.D., Straka, Z., Farkas, I., Vavrečka, M., Metta, G.
  • Publikace: IEEE Transactions on Cognitive and Developmental Systems. 2018, 10(2), 163-176. ISSN 2379-8920.
  • Rok: 2018
  • DOI: 10.1109/TCDS.2017.2649225
  • Odkaz: https://doi.org/10.1109/TCDS.2017.2649225
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Using the iCub humanoid robot with an artificial pressure-sensitive skin, we investigate how representations of the whole skin surface resembling those found in primate primary somatosensory cortex can be formed from local tactile stimulations traversing the body of the physical robot. We employ the well-known self-organizing map algorithm and introduce its modification that makes it possible to restrict the maximum receptive field (MRF) size of neuron groups at the output layer. This is motivated by findings from biology where basic somatotopy of the cortical sheet seems to be prescribed genetically and connections are localized to particular regions. We explore different settings of the MRF and the effect of activity-independent (input-output connections constraints implemented by MRF) and activity-dependent (learning from skin stimulations) mechanisms on the formation of the tactile map. The framework conveniently allows one to specify prior knowledge regarding the skin topology and thus to effectively seed a particular representation that training shapes further. Furthermore, we show that the MRF modification facilitates learning in situations when concurrent stimulation at nonadjacent places occurs (“multitouch”). The procedure proved sufficiently robust, is not demanding in terms of data collection, and can be applied to any robot where a representation of its “skin” is desirable.
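
    A compact sketch of a self-organizing map whose best-matching-unit search is restricted to a maximum receptive field per skin part, in the spirit of the MRF modification described above; the grid size, the two regions, and the toy skin layout are all assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      GRID = (10, 10)                              # output sheet of neurons
      W = rng.uniform(size=GRID + (2,))            # weights = 2-D taxel positions

      # Maximum receptive field: each skin part may only win inside its own
      # region of the sheet (a crude stand-in for prescribed somatotopy).
      MRF = {"hand": (slice(0, 10), slice(0, 5)),
             "forearm": (slice(0, 10), slice(5, 10))}

      def train_step(x, part, t, T, lr0=0.5, sigma0=3.0):
          """One SOM update with the BMU and the update confined to the MRF."""
          rs, cs = MRF[part]
          d = np.linalg.norm(W[rs, cs] - x, axis=-1)   # distances in the region
          r, c = np.unravel_index(d.argmin(), d.shape)
          bmu = (r + rs.start, c + cs.start)           # BMU in sheet coordinates
          lr = lr0 * np.exp(-t / T)                    # decaying learning rate
          sigma = sigma0 * np.exp(-t / T)              # shrinking neighbourhood
          ii, jj = np.meshgrid(range(GRID[0]), range(GRID[1]), indexing="ij")
          g = np.exp(-((ii - bmu[0])**2 + (jj - bmu[1])**2) / (2 * sigma**2))
          W[rs, cs] += lr * g[rs, cs][..., None] * (x - W[rs, cs])

      # Toy skin: hand taxels live at x < 0.5, forearm taxels at x > 0.5.
      T = 2000
      for t in range(T):
          part = "hand" if rng.random() < 0.5 else "forearm"
          lo, hi = (0.0, 0.5) if part == "hand" else (0.5, 1.0)
          train_step(np.array([rng.uniform(lo, hi), rng.uniform()]), part, t, T)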

Robots as powerful allies for the study of embodied cognition from the bottom up

  • Autoři: doc. Mgr. Matěj Hoffmann, Ph.D., Pfeifer, R.
  • Publikace: The Oxford Handbook of 4E Cognition. Oxford: Oxford University Press, 2018. ISBN 978-0-19-873541-0.
  • Rok: 2018
  • DOI: 10.1093/oxfordhb/9780198735410.013.45
  • Odkaz: https://doi.org/10.1093/oxfordhb/9780198735410.013.45
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    A large body of compelling evidence has been accumulated demonstrating that embodiment—the agent’s physical setup, including its shape, materials, sensors, and actuators—is constitutive for any form of cognition, and, as a consequence, models of cognition need to be embodied. In contrast to methods from empirical sciences to study cognition, robots can be freely manipulated and virtually all key variables of their embodiment and control programs can be systematically varied. As such, they provide an extremely powerful tool of investigation. We present a robotic, bottom-up, or developmental approach, focusing on three stages: (1) low-level behaviors like walking and reflexes, (2) learning regularities in sensorimotor spaces, and (3) human-like cognition. We also show that robotic-based research is not only a productive path to deepening our understanding of cognition, but that robots can strongly benefit from human-like cognition in order to become more autonomous, robust, resilient, and safe.

Safety of human-robot interaction through tactile sensors and peripersonal space representations

  • Autoři: Švarný, P., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: Kognice a umělý život 2018. Brno: FLOW, 2018. p. 73-75. ISBN 978-80-88123-24-8.
  • Rok: 2018
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Human-robot collaboration including close physical human-robot interaction (pHRI) is a current trend in both industry and science. The safety guidelines prescribe two modes of safety: (i) power and force limitation and (ii) speed and separation monitoring. We examine the potential of robots equipped with artificial sensitive skin and a protective safety zone around it (peripersonal space) for safe pHRI.

Toward safe separation distance monitoring from RGB-D sensors in human-robot interaction

  • Autoři: Švarný, P., Straka, Z., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: Proceedings of the international PhD conference on safe and social robots. Strasbourg: Commission of the European Communities, 2018. p. 11-14.
  • Rok: 2018
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    The interaction of humans and robots in less constrained environments has been gaining a lot of attention lately, and the safety of such interaction is of utmost importance. Two ways of risk assessment are prescribed by recent safety standards: (i) power and force limiting and (ii) speed and separation monitoring. Unlike typical solutions in industry that are restricted to mere safety zone monitoring, we present a framework that realizes separation distance monitoring between a robot and a human operator in a detailed, yet versatile, transparent, and tunable fashion. The separation distance is assessed pair-wise for all keypoints on the robot and the human body and as such can be selectively modified to account for specific conditions. The operation of this framework is illustrated on a Nao humanoid robot interacting with a human partner perceived by a RealSense RGB-D sensor and employing the OpenPose human skeleton estimation algorithm.
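
    A sketch of pair-wise separation assessment with selectively tunable per-pair thresholds (here the operator's head is protected more strictly than the rest of the body); keypoint names, coordinates, and thresholds are illustrative only.

      import numpy as np

      ROBOT_KP = ["base", "elbow", "end_effector"]
      HUMAN_KP = ["torso", "hand", "head"]

      # Per-pair protective distances (metres), selectively modifiable.
      THRESH = np.full((len(ROBOT_KP), len(HUMAN_KP)), 0.5)
      THRESH[:, HUMAN_KP.index("head")] = 0.8        # stricter for the head

      def violations(robot_xyz, human_xyz):
          """Return every keypoint pair closer than its own threshold."""
          d = np.linalg.norm(robot_xyz[:, None] - human_xyz[None, :], axis=-1)
          return [(ROBOT_KP[i], HUMAN_KP[j], d[i, j])
                  for i, j in zip(*np.where(d < THRESH))]

      robot = np.array([[0.0, 0, 0.3], [0.3, 0, 0.6], [0.6, 0, 0.8]])
      human = np.array([[1.2, 0, 1.0], [0.9, 0, 0.9], [1.3, 0, 1.7]])
      for r, h, dist in violations(robot, human):
          print(f"{r} - {h}: {dist:.2f} m below threshold")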

Versatile distance measurement between robot and human key points using RGB-D sensors for safe HRI

  • Autoři: Švarný, P., Straka, Z., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: 1st Workshop on Proximity Perception in Robotics at IROS 2018. KIT Scientific Publishing, 2018.
  • Rok: 2018
  • DOI: 10.5445/IR/1000086870
  • Odkaz: https://doi.org/10.5445/IR/1000086870
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    The safety of interaction between collaborative robots and humans can be guaranteed in two main ways: (i) power and force limiting and (ii) speed and separation monitoring. We present a framework that realises separation distance monitoring between a robot and a human operator based on key point pair-wise evaluation. We show preliminary results using a setup with a Nao humanoid robot and a RealSense RGB-D sensor and employing the OpenPose human skeleton estimation algorithm, and work in progress on a KUKA LBR iiwa platform.

Which limb is it? Responses to vibrotactile stimulation in early infancy

  • Autoři: Somogyi, E., Jacquey, L., Heed, T., doc. Mgr. Matěj Hoffmann, Ph.D., Lockman, J., Granjon, L., Fagard, J., O'Regan, J.K.
  • Publikace: British Journal of Developmental Psychology. 2018, 36(3), 384-401. ISSN 0261-510X.
  • Rok: 2018
  • DOI: 10.1111/bjdp.12224
  • Odkaz: https://doi.org/10.1111/bjdp.12224
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    This study focuses on how the body schema develops during the first months of life, by investigating infants' motor responses to localized vibrotactile stimulation on their limbs. Vibrotactile stimulation was provided by small buzzers that were attached to the infants' four limbs one at a time. Four age groups were compared cross-sectionally (3-, 4-, 5-, and 6-month-olds). We show that before they actually reach for the buzzer, which, according to previous studies, occurs around 7-8 months of age, infants demonstrate emerging knowledge about their body's configuration by producing specific movement patterns associated with the stimulated body area. At 3 months, infants responded with an increase in general activity when the buzzer was placed on the body, independently of the vibrator's location. Differentiated topographical awareness of the body seemed to appear around 5 months, with specific responses resulting from stimulation of the hands emerging first, followed by the differentiation of movement patterns associated with the stimulation of the feet. Qualitative analyses revealed specific movement types reliably associated with each stimulated location by 6 months of age, possibly preparing infants' ability to actually reach for the vibrating target. We discuss this result in relation to newborns' ability to learn specific movement patterns through intersensory contingency.

Development of reaching to the body in early infancy: From experiments to robotic models

  • Autoři: doc. Mgr. Matěj Hoffmann, Ph.D., Chinn, L.K., Somogyi, E., Heed, T., Fagard, J., Lockman, J.J., O'Regan, J.K.
  • Publikace: 2017 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). Piscataway, NJ: IEEE, 2017. p. 112-119. ISSN 2161-9484. ISBN 978-1-5386-3715-9.
  • Rok: 2017
  • DOI: 10.1109/DEVLRN.2017.8329795
  • Odkaz: https://doi.org/10.1109/DEVLRN.2017.8329795
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    We have been observing how infants between 3 and 21 months react when vibrotactile stimulation (a buzzer) is applied to different parts of their bodies. Responses included, in particular, movement of the stimulated body part and successful reaching for and removal of the buzzer. Overall, there is a pronounced developmental progression from general to specific movement patterns, especially in the first year. In this article we review the series of studies we conducted and then focus on possible mechanisms that might explain what we observed. One possible mechanism might rely on the brain extracting "sensorimotor contingencies" linking motor actions and resulting sensory consequences. This account posits that infants are driven by intrinsic motivation that guides exploratory motor activity, at first generating random motor babbling with self-touch occurring spontaneously. Later goal-oriented motor behavior occurs, with self-touch as a possible effective tool to induce informative contingencies. We connect this sensorimotor view with a second possible account that appeals to the neuroscientific concepts of cortical maps and coordinate transformations. In this second account, the improvement of reaching precision is mediated by refinement of neuronal maps in primary sensory and motor cortices - the homunculi - as well as in frontal and parietal cortical regions dedicated to sensorimotor processing. We complement this theoretical account with modeling on a humanoid robot with artificial skin where we implemented reaching for tactile stimuli as well as learning the "somatosensory homunculi". We suggest that this account can be extended to reflect the driving role of sensorimotor contingencies in human development. In our conclusion we consider possible extensions of our current experiments which take account of predictions derived from both these kinds of models.

Learning a Peripersonal Space Representation as a Visuo-Tactile Prediction Task

  • Autoři: Straka, Z., doc. Mgr. Matěj Hoffmann, Ph.D.
  • Publikace: Artificial Neural Networks and Machine Learning – ICANN 2017, Part I. Springer, Cham, 2017. p. 101-109. Lecture Notes in Computer Science. vol. 10613. ISSN 0302-9743. ISBN 978-3-319-68599-1.
  • Rok: 2017
  • DOI: 10.1007/978-3-319-68600-4_13
  • Odkaz: https://doi.org/10.1007/978-3-319-68600-4_13
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    The space immediately surrounding our body, or peripersonal space, is crucial for interaction with the environment. In primate brains, specific neural circuitry is responsible for its encoding. An important component is a safety margin around the body that draws on visuo-tactile interactions: approaching stimuli are registered by vision and processed, producing anticipation or prediction of contact in the tactile modality. The mechanisms of this representation and its development are not understood. We propose a computational model that addresses this: a neural network composed of a Restricted Boltzmann Machine and a feedforward neural network. The former learns in an unsupervised manner to represent position and velocity features of the stimulus. The latter is trained in a supervised way to predict the position of touch (contact). Unique to this model, it considers: (i) stimulus position and velocity, (ii) uncertainty of all variables, and (iii) not only multisensory integration but also prediction.
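
    A minimal stand-in for the prediction component, with the RBM feature layer replaced by a plain MLP for brevity: on a toy 1-D skin, a network maps stimulus position and velocity to a distribution over the taxel that will be contacted one step ahead, and the softmax output doubles as the prediction's confidence. All sizes are assumptions.

      import torch
      import torch.nn as nn

      # Toy 1-D skin: a stimulus at position p moving with velocity v hits
      # the skin at p + v (one time step ahead).
      N_TAXELS = 10
      net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, N_TAXELS))
      opt = torch.optim.Adam(net.parameters(), lr=1e-2)
      loss_fn = nn.CrossEntropyLoss()

      for _ in range(2000):
          p = torch.rand(64) * N_TAXELS        # current stimulus position
          v = torch.randn(64)                  # velocity
          target = (p + v).clamp(0, N_TAXELS - 1e-3).long()  # taxel to be hit
          loss = loss_fn(net(torch.stack([p, v], dim=1)), target)
          opt.zero_grad()
          loss.backward()
          opt.step()

      probs = torch.softmax(net(torch.tensor([[4.0, 1.5]])), dim=1)
      print(probs.argmax().item())             # -> 5 (contact around taxel 5)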

Simple or Complex Bodies? Trade-offs in Exploiting Body Morphology for Control

  • Autoři: doc. Mgr. Matěj Hoffmann, Ph.D., Müller, V.C.
  • Publikace: Representation and Reality in Humans, Other Living Organisms and Intelligent Machines. Springer, Cham, 2017. p. 335-345. Studies in Applied Philosophy, Epistemology and Rational Ethics. vol. 28. ISSN 2192-6255. ISBN 978-3-319-43782-8.
  • Rok: 2017
  • DOI: 10.1007/978-3-319-43784-2_17
  • Odkaz: https://doi.org/10.1007/978-3-319-43784-2_17
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Engineers fine-tune the design of robot bodies for control purposes; however, a methodology or set of tools is largely absent, and optimization of morphology (shape, material properties of robot bodies, etc.) is lagging behind the development of controllers. This has become even more prominent with the advent of compliant, deformable or ‘soft’ bodies. These carry substantial potential regarding their exploitation for control—sometimes referred to as ‘morphological computation’. In this article, we briefly review different notions of computation by physical systems and propose the dynamical systems framework as the most useful in the context of describing and eventually designing the interactions of controllers and bodies. Then, we look at the pros and cons of simple versus complex bodies, critically reviewing the attractive notion of ‘soft’ bodies automatically taking over control tasks. We address another key dimension of the design space—whether model-based control should be used and to what extent it is feasible to develop faithful models for different morphologies.

What is morphological computation? On how the body contributes to cognition and control

  • DOI: 10.1162/ARTL_a_00219
  • Odkaz: https://doi.org/10.1162/ARTL_a_00219
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “offloading computation from the brain to the body,” where the body is said to perform “morphological computation.” Our investigation of four characteristic cases of morphological computation in animals and robots shows that the “offloading” perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as reservoir computing, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: The question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control—how it contributes to the overall orchestration of intelligent behavior.
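
    Reservoir computing, the paper's example of morphological computation proper, can be illustrated in software with a standard echo state network: the recurrent "reservoir" (in the morphological case, the physical body) is fixed and random, and only a linear readout is trained, here on a delayed-recall task. Sizes and the spectral radius are conventional choices, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      N, steps, delay, wash = 200, 3000, 5, 50
      W = rng.normal(size=(N, N)) / np.sqrt(N)
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
      W_in = rng.uniform(-0.5, 0.5, N)

      u = rng.uniform(-1, 1, steps)            # random input signal
      X = np.zeros((steps, N))
      x = np.zeros(N)
      for t in range(steps):                   # drive the fixed reservoir
          x = np.tanh(W @ x + W_in * u[t])
          X[t] = x

      # Only the linear readout is trained: recall the input from 5 steps ago.
      W_out = np.linalg.lstsq(X[wash + delay:], u[wash:-delay], rcond=None)[0]
      pred = X[wash + delay:] @ W_out
      print("delayed-recall RMSE:", np.sqrt(np.mean((pred - u[wash:-delay])**2)))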

Where is my forearm? Clustering of body parts from simultaneous tactile and linguistic input using sequential mapping

  • Autoři: Štěpánová, K., doc. Mgr. Matěj Hoffmann, Ph.D., Straka, Z., Klein, F.B., Cangelosi, A., Vavrečka, M.
  • Publikace: Kognice a umělý život XVII [Cognition and Artificial Life XVII]. Bratislava: Comenius University Bratislava, 2017. p. 155-162. ISBN 978-80-223-4346-6.
  • Rok: 2017
  • Pracoviště: Vidění pro roboty a autonomní systémy
  • Anotace:
    Humans and animals are constantly exposed to a continuous stream of sensory information from different modalities. At the same time, they form more compressed representations of concepts or symbols. In species that use language, this process is further structured by this interaction, where a mapping between the sensorimotor concepts and linguistic elements needs to be established. There is evidence that children might be learning language by simply disambiguating potential meanings based on multiple exposures to utterances in different contexts (cross-situational learning). In existing models, the mapping between modalities is usually found in a single step by directly using frequencies of referent and meaning co-occurrences. In this paper, we present an extension of this one-step mapping and introduce a newly proposed sequential mapping algorithm together with a publicly available Matlab implementation. For demonstration, we have chosen a less typical scenario: instead of learning to associate objects with their names, we focus on body representations. A humanoid robot is receiving tactile stimulations on its body, while at the same time listening to utterances of the body part names (e.g., hand, forearm, and torso). With the goal of arriving at the correct “body categories”, we demonstrate how a sequential mapping algorithm outperforms one-step mapping. In addition, the effects of data set size and of noise in the linguistic input are studied.
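
    The difference between one-step and sequential mapping can be sketched on synthetic co-occurrence counts; the greedy loop below ("fix the most confident word-part pair, then exclude it") is one illustrative reading in Python (the paper ships a Matlab implementation), not necessarily the paper's exact algorithm.

      import numpy as np

      PARTS = ["hand", "forearm", "torso"]
      WORDS = ["hand", "forearm", "torso"]

      rng = np.random.default_rng(0)
      C = np.zeros((len(WORDS), len(PARTS)))   # word x touched-part counts
      for _ in range(300):
          part = rng.integers(len(PARTS))      # stimulated body part
          word = part if rng.random() < 0.7 else rng.integers(len(WORDS))
          C[word, part] += 1                   # heard word, possibly misleading

      # One-step mapping: each word -> its most frequent co-occurring part.
      one_step = {WORDS[w]: PARTS[C[w].argmax()] for w in range(len(WORDS))}

      def sequential_mapping(C):
          """Greedily fix the most confident (word, part) pair, remove both
          from further consideration, and repeat."""
          C = C.copy()
          mapping = {}
          while len(mapping) < min(C.shape):
              w, p = np.unravel_index(C.argmax(), C.shape)
              mapping[WORDS[w]] = PARTS[p]
              C[w, :] = -1
              C[:, p] = -1
          return mapping

      print(one_step)
      print(sequential_mapping(C))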
