Vehicular cooperative perception enhances the reliability and safety of autonomous driving systems by sharing perception information among vehicles. However, indiscriminate sharing often leads to information redundancy and wasted communication resources. To address the drop in communication efficiency caused by frequent messaging, this paper proposes a joint selection method for cooperative agents and content. Firstly, we model the problem of jointly selecting collaborators and cooperative content as a parameterized action Markov decision process, where the action space is represented as a multi-discrete action space. Secondly, to improve perception performance and reduce vehicles' communication resource consumption, a deep reinforcement learning-based late fusion method is proposed that decouples the problem into two parts: cooperative agent selection managed by the Road Side Unit (RSU) and content selection handled by the vehicles. Finally, experimental results demonstrate that the proposed joint selection method with Dueling Deep Q-Networks (Dueling DQN) for cooperative perception achieves superior performance in improving perception confidence scores and reducing communication resource consumption.
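The abstract gives no implementation details, so the following is a minimal sketch, assuming a PyTorch implementation, of how a Dueling DQN can expose one advantage head per branch of a multi-discrete action space (for example, one binary branch per candidate collaborator and one branch for the content granularity). All layer sizes, branch sizes, and names are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch, not the paper's implementation: a Dueling DQN head
# over a multi-discrete action space, as described in the abstract.
import torch
import torch.nn as nn


class MultiDiscreteDuelingDQN(nn.Module):
    def __init__(self, state_dim, branch_sizes, hidden_dim=128):
        super().__init__()
        # Shared feature extractor over the observed state.
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Single state-value stream V(s).
        self.value = nn.Linear(hidden_dim, 1)
        # One advantage stream per discrete action branch
        # (e.g. one branch per candidate collaborator / content choice).
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden_dim, n) for n in branch_sizes]
        )

    def forward(self, state):
        h = self.features(state)
        v = self.value(h)
        # Dueling aggregation per branch:
        # Q_b(s, a) = V(s) + A_b(s, a) - mean_a A_b(s, a)
        return [v + a(h) - a(h).mean(dim=-1, keepdim=True) for a in self.advantages]


# Illustrative assumption: 6 candidate collaborators (select / skip)
# plus one 4-way branch for the shared-content choice.
if __name__ == "__main__":
    net = MultiDiscreteDuelingDQN(state_dim=20, branch_sizes=[2] * 6 + [4])
    q_per_branch = net(torch.randn(1, 20))
    greedy_action = [q.argmax(dim=-1).item() for q in q_per_branch]
    print(greedy_action)
```

The per-branch aggregation keeps the output size linear in the number of candidates rather than exponential in the joint action space, which is the usual motivation for a multi-discrete (action-branching) formulation.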
Cooperative Perception with Deep Reinforcement Learning in Vehicular Networks
2024-12-20
940545 bytes
Conference paper
Electronic Resource
English
Cooperative perception in vehicular networks using multi-agent reinforcement learning
BASE | 2021
Vehicular cooperative perception through action branching and federated reinforcement learning
BASE | 2022
Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning
ArXiv | 2020
Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles
British Library Conference Proceedings | 2020