Various deep reinforcement learning (DRL) approaches have been applied to robot navigation with promising results. Besides avoiding pedestrians during navigation, a robot can also make sounds to alert nearby pedestrians, which normally results in a more effective and safer trajectory in a crowded environment. However, it is challenging to train such an interactive navigation policy, i.e., one that outputs both navigation and interactive actions, with DRL. In general, an interactive navigation policy needs to avoid collisions with pedestrians while reaching the target, make as few sounds as possible, and be robust to diverse environments in which different pedestrians may respond differently to the sounds. In this paper, we propose a DRL-based interactive navigation approach that meets the above requirements. Specifically, we first develop a simulation platform that can model different responses of pedestrians to the sounds. We then introduce a reward function that effectively induces proper interactive actions in diverse environments. Finally, we use a PPO-based method to train the interactive navigation policy. We evaluate our approach in various crowded pedestrian environments and compare its performance with multiple existing methods. The experimental results show that our approach meets the three requirements and outperforms the others. Moreover, we show that the reward function can also be used to extend an existing DRL-based navigation approach to learn an interactive navigation policy that satisfies the requirements. We also deploy the trained policy on a robot and demonstrate its performance in real-world crowded environments.
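The abstract names three requirements for the learned policy: reach the target without colliding with pedestrians, emit as few sounds as possible, and remain robust across environments. As a minimal sketch of how such requirements are often combined into a per-step reward, the Python snippet below shows one illustrative weighted formulation; the structure, field names, and coefficients are assumptions for illustration only and are not the reward function proposed in the paper.

```python
# Illustrative sketch of a per-step reward for interactive navigation,
# assuming a simple weighted combination of the three requirements named
# in the abstract: reach the goal, avoid pedestrians, and use sounds sparingly.
# All names and coefficients are hypothetical, not the paper's actual reward.

from dataclasses import dataclass


@dataclass
class StepInfo:
    dist_to_goal_prev: float   # distance to goal before the action (m)
    dist_to_goal: float        # distance to goal after the action (m)
    collided: bool             # robot hit a pedestrian this step
    reached_goal: bool         # robot is within the goal tolerance
    made_sound: bool           # policy emitted a sound this step


def interactive_nav_reward(info: StepInfo,
                           w_progress: float = 1.0,
                           r_goal: float = 10.0,
                           r_collision: float = -10.0,
                           r_sound: float = -0.1) -> float:
    """Return a scalar reward for one environment step (illustrative only)."""
    if info.collided:
        return r_collision
    if info.reached_goal:
        return r_goal
    # Dense shaping term: reward progress toward the goal.
    reward = w_progress * (info.dist_to_goal_prev - info.dist_to_goal)
    # Small penalty per sound so the policy interacts only when useful.
    if info.made_sound:
        reward += r_sound
    return reward
```

A reward of this shape could then be plugged into a standard PPO training loop; the penalty on sound emission is what discourages the policy from alerting pedestrians indiscriminately.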
Interactive Robot Navigation in Crowded Pedestrian Environments Based on Deep Reinforcement Learning
01.12.2022
1165909 bytes
Conference paper
Electronic resource
English
Similar documents:
The Emotionally Intelligent Robot: Improving Social Navigation in Crowded Environments (ArXiv, 2019)
Dynamic trajectory planning for mobile robot navigation in crowded environments (BASE, 2016)