Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance. While adversarial training appears to enhance the robustness and safety of a deep model deployed in open-world, decision-critical applications, counterintuitively, it induces undesired behaviors in robot learning settings. In this paper, we show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects, namely transient, systematic, and conditional errors. We first generalize adversarial training to a safety-domain optimization scheme that allows for more generic specifications. We then prove that such a learning process tends to cause certain error profiles. We support our theoretical results with a thorough experimental safety analysis in a robot-learning task. Our results suggest that adversarial training is not yet ready for robot learning.
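For context, adversarial training is usually posed as the following robust-optimization (min-max) problem. This is the standard norm-ball formulation, given here only as a sketch; the symbols (model f_theta, loss L, perturbation budget epsilon) are generic and not the paper's notation, and the paper's safety-domain scheme generalizes this setup to broader specification sets than the epsilon-norm ball.

  % standard adversarial-training objective; generic symbols, not taken from the paper
  \min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Big[ \max_{\|\delta\|_{p} \le \varepsilon}
  \mathcal{L}\big( f_{\theta}(x + \delta),\, y \big) \Big]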
Adversarial training is not ready for robot learning
2021-01-01
Lechner M, Hasani R, Grosu R, Rus D, Henzinger TA. Adversarial training is not ready for robot learning. In: 2021 IEEE International Conference on Robotics and Automation (ICRA); 2021:4140-4147. doi:10.1109/ICRA48506.2021.9561036
Conference paper
Electronic Resource
English