Owing to the limited onboard computing resources of autonomous vehicles and the redundant modeling among similar tasks, multi-task models have become a practical solution. A multi-task prediction model for autonomous vehicles realizes trajectory, behavior, and risk prediction through a single multi-task deep neural network. However, whether multi-task prediction networks can effectively share information among multiple inputs, and whether the shared representations are interpretable, remain open concerns. To address these concerns, this study proposes a multi-source multi-dimensional model interpretation (M3-interpretation) method for a multi-task prediction neural network (MPNN). The proposed MPNN follows the design principle of "task-specific pipelines as the backbone, high-level semantic information sharing as the supplement." Then, building on information entropy theory, this study extends the information bottleneck attribution method to the multi-source multi-dimensional setting and uses feature masks to present fine-grained interpretation results. Comparison and ablation experiments on naturalistic trajectory datasets indicate that the proposed model achieves better prediction performance than single-task models. In addition, fine-grained attribution analyses of specific behaviors in the temporal, spatial, and feature dimensions explore the factors that drive behavioral inference in the MPNN.
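The abstract does not give implementation details, but the core idea of information bottleneck attribution that it builds on can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the paper's actual method: the `encoder`/`head` split, the `beta` weight, and the use of `alpha.mean()` as a proxy for the information passed through the bottleneck are all assumptions; published information-bottleneck-attribution variants estimate the retained information with a KL-divergence term instead.

```python
import torch
import torch.nn as nn

class BottleneckMask(nn.Module):
    """Per-element mask over an intermediate representation z.

    alpha near 1 passes a feature through unchanged; alpha near 0
    replaces it with noise matched to z's statistics, destroying
    the information it carried.
    """
    def __init__(self, feature_shape):
        super().__init__()
        # Initialize near "keep everything" (sigmoid(5.0) ~ 0.99).
        self.logit = nn.Parameter(torch.full(feature_shape, 5.0))

    def forward(self, z):
        alpha = torch.sigmoid(self.logit)
        noise = z.mean() + z.std() * torch.randn_like(z)
        return alpha * z + (1 - alpha) * noise, alpha

def attribute(encoder, head, x, target, beta=10.0, steps=30, lr=1.0):
    """Optimize the mask to keep the prediction while minimizing the
    information passed (approximated here by mean(alpha))."""
    z = encoder(x).detach()                 # fixed intermediate features
    mask = BottleneckMask(z.shape[1:])
    opt = torch.optim.Adam(mask.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        z_masked, alpha = mask(z)
        loss = ce(head(z_masked), target) + beta * alpha.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask.logit).detach()  # fine-grained attribution map
```

In a multi-source multi-dimensional setting such as the one the abstract describes, one would presumably place such bottlenecks at the points of the multi-task network corresponding to the temporal, spatial, and feature dimensions and read each learned mask as an attribution map for that dimension.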
Interpretable Multi-Task Prediction Neural Network for Autonomous Vehicles
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 6, pp. 7554-7572
2025-06-01
Article (Journal)
Electronic Resource
English
INTERPRETABLE KALMAN FILTER COMPRISING NEURAL NETWORK COMPONENT(S) FOR AUTONOMOUS VEHICLES
European Patent Office | 2024

Interpretable Safety Validation for Autonomous Vehicles
IEEE | 2020

Neural Network Based Lane Change Trajectory Prediction in Autonomous Vehicles
Tema Archive | 2011