In recent years, deep learning-based end-to-end autonomous driving has become increasingly popular. However, deep neural networks behave like black boxes: their outputs are generally not explainable, which makes them unreliable for deployment in real-world environments. To address this problem, we propose an explainable deep neural network that jointly predicts driving actions and multimodal environment descriptions of traffic scenes, including bird's-eye-view (BEV) maps and natural-language environment descriptions. In this network, both the context information from BEV perception and the local information from semantic perception are considered before the driving actions and natural-language environment descriptions are produced. To evaluate our network, we build a new dataset with hand-labelled ground truth for driving actions and multimodal environment descriptions. Experimental results show that combining context information with local information enhances the prediction performance of both driving actions and environment descriptions, thereby improving the safety and explainability of our end-to-end autonomous driving network.
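The abstract describes a network that fuses global BEV context features with local semantic features and then branches into two prediction heads, one for the driving action and one for the environment description. The following is a minimal PyTorch sketch of that joint-prediction idea, assuming simple concatenation-based fusion; all module names, feature dimensions, and the fixed-length token description head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class JointDrivingHead(nn.Module):
    """Fuses BEV context features with local semantic features, then jointly
    predicts a driving action and a natural-language environment description
    (simplified here to a fixed-length sequence of token logits)."""

    def __init__(self, bev_dim=256, local_dim=256, fused_dim=256,
                 num_actions=4, vocab_size=1000, desc_len=16):
        super().__init__()
        # Concatenation-based fusion of the two feature streams (an assumption).
        self.fuse = nn.Sequential(
            nn.Linear(bev_dim + local_dim, fused_dim),
            nn.ReLU(),
        )
        # Driving-action head, e.g. go straight / turn left / turn right / stop.
        self.action_head = nn.Linear(fused_dim, num_actions)
        # Description head emitting logits over a small vocabulary per token slot.
        self.desc_head = nn.Linear(fused_dim, desc_len * vocab_size)
        self.desc_len, self.vocab_size = desc_len, vocab_size

    def forward(self, bev_feat, local_feat):
        fused = self.fuse(torch.cat([bev_feat, local_feat], dim=-1))
        action_logits = self.action_head(fused)                    # (B, num_actions)
        desc_logits = self.desc_head(fused).view(
            -1, self.desc_len, self.vocab_size)                    # (B, desc_len, vocab)
        return action_logits, desc_logits

if __name__ == "__main__":
    # Random tensors stand in for BEV-perception and semantic-perception features.
    head = JointDrivingHead()
    bev_feat = torch.randn(2, 256)     # context information from BEV perception
    local_feat = torch.randn(2, 256)   # local information from semantic perception
    actions, descs = head(bev_feat, local_feat)
    print(actions.shape, descs.shape)  # torch.Size([2, 4]) torch.Size([2, 16, 1000])
```

In the paper the description head would be a proper language decoder and the BEV branch would also output BEV maps; this sketch only illustrates how the two feature streams can be combined before the joint predictions.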
Multimodal-XAD: Explainable Autonomous Driving Based on Multimodal Environment Descriptions
IEEE Transactions on Intelligent Transportation Systems; Vol. 25, No. 12; pp. 19469-19481
01.12.2024
3191271 bytes
Journal article
Electronic resource
English
Multimodal End-to-End Autonomous Driving
IEEE | 2022
Multimodal Motion Planning Framework for Autonomous Driving Vehicles
Europäisches Patentamt | 2022