Work presented at the IEEE International Conference on Robotics and Automation (ICRA), held as a virtual conference from May 31 to August 31, 2020 ; In this paper we present a Deep Reinforcement Learning approach to solve dynamic cloth manipulation tasks. In contrast to the case of rigid objects, we stress that the trajectory followed (including speed and acceleration) has a decisive influence on the final state of the cloth, which can vary greatly even if the positions reached by the grasped points are the same. We explore how goal positions for non-grasped points can be attained by learning adequate trajectories for the grasped points. Our approach uses a few demonstrations to improve control policy learning, and a sparse reward approach to avoid engineering complex reward functions. Since perception of textiles is challenging, we also study different state representations to assess the minimum observation space required for learning to succeed. Finally, we compare different combinations of control policy encodings, demonstrations, and sparse reward learning techniques, and show that our proposed approach can learn dynamic cloth manipulation in an efficient way, i.e., using a reduced observation space, a few demonstrations, and a sparse reward. ; This work has been supported by the ERC project Clothilde (ERC-2016-ADG-741930), the HuMoUR project TIN2017-90086-R (AEI/FEDER, UE) and by the AEI through the María de Maeztu Seal of Excellence to IRI (MDM-2016-0656) ; Peer reviewed
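The abstract mentions a sparse reward that avoids hand-engineered shaping: success is judged only by whether the non-grasped cloth points end up at their goal positions. The snippet below is a minimal sketch of what such a sparse goal-reaching reward could look like; the function name, the 5 cm tolerance, and the 0/1 convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sparse_reward(cloth_points, goal_points, tol=0.05):
    """Sparse goal-reaching reward (illustrative sketch, not the authors' code).

    cloth_points: (N, 3) array with the positions of the tracked non-grasped points.
    goal_points:  (N, 3) array with their desired positions.
    Returns 1.0 only when every point is within `tol` metres of its goal, else 0.0.
    """
    dists = np.linalg.norm(np.asarray(cloth_points) - np.asarray(goal_points), axis=-1)
    return 1.0 if np.all(dists < tol) else 0.0

# Example: two tracked cloth corners, both within 5 cm of their goals -> reward 1.0
achieved = np.array([[0.10, 0.00, 0.02], [0.30, 0.00, 0.02]])
goal = np.array([[0.12, 0.00, 0.02], [0.31, 0.00, 0.02]])
print(sparse_reward(achieved, goal))
```

Because the reward carries no gradient toward the goal, such a setup is typically combined with demonstrations or goal-relabelling techniques, which matches the paper's use of a few demonstrations to make learning with a sparse reward feasible.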
Dynamic Cloth Manipulation with Deep Reinforcement Learning
01.01.2020
Conference paper
Electronic resource
English
DDC: 629
Non-Prehensile Aerial Manipulation using Model-Based Deep Reinforcement Learning
ArXiv | 2024