Automated driving demands better performance on tasks such as motion planning and interacting with pedestrians in mixed-traffic environments. Deep learning algorithms can achieve high performance on these tasks, with remarkable visual scene understanding and generalization abilities. However, when common scene-parsing methods are used to train end-to-end models, the limited explainability of such algorithms inhibits their deployment in fully automated driving. The main challenges include performance deficiencies and inconsistencies, insufficient AI transparency, degraded user trust, and undermined human-AI interaction. This research improves the decision-making performance and transparency of automated driving systems by providing multi-modal explanations, especially when interacting with pedestrians. The proposed algorithm combines global visual features with interrelation features by parsing scene images into self-constructed graphs and using an attention-based module to capture the interrelationships among the ego-vehicle and other traffic-related objects. The output modules make driving decisions while simultaneously generating semantic text explanations. The results show that fusing features from global frames and interrelational graphs improves decision-making and explanation prediction compared with two state-of-the-art benchmark algorithms. The interrelation module also enhances algorithm transparency by disclosing the visual attention used for decision-making. The importance of interrelation features for the two prediction tasks is further revealed, along with the underlying mechanism of multi-task learning on datasets with hierarchical labels. The proposed model improves driving decision-making during pedestrian interactions and provides intelligible reasoning cues that help human users build an appropriate mental model of automated driving performance.
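
The abstract describes fusing a global frame feature with graph-based interrelation features via attention, and jointly predicting a driving decision and a semantic text explanation. The following is a minimal PyTorch sketch of that idea under our own assumptions; the class name, feature dimensions, class counts, and the use of a single multi-head attention layer with the global/ego feature as the query are illustrative and do not reproduce the authors' published implementation.

```python
# Minimal sketch (assumed architecture, not the authors' released code):
# a global frame feature is fused with attention-pooled features of detected
# traffic objects (the "interrelation" branch), and two heads jointly predict
# a driving decision and an explanation class. All names and sizes are
# illustrative assumptions.
import torch
import torch.nn as nn


class InterrelationFusionModel(nn.Module):
    def __init__(self, global_dim=512, obj_dim=128, hidden_dim=256,
                 num_decisions=4, num_explanations=10, num_heads=4):
        super().__init__()
        # Project global-frame and per-object features into a shared space.
        self.global_proj = nn.Linear(global_dim, hidden_dim)
        self.obj_proj = nn.Linear(obj_dim, hidden_dim)
        # Attention: the global/ego token queries the object nodes, yielding
        # interrelation weights that can be visualised for transparency.
        self.interrelation = nn.MultiheadAttention(hidden_dim, num_heads,
                                                   batch_first=True)
        # Two output heads trained jointly (multi-task learning).
        self.decision_head = nn.Linear(2 * hidden_dim, num_decisions)
        self.explanation_head = nn.Linear(2 * hidden_dim, num_explanations)

    def forward(self, global_feat, obj_feats, obj_mask=None):
        # global_feat: (B, global_dim) frame-level feature
        # obj_feats:   (B, N, obj_dim) features of N detected objects
        # obj_mask:    (B, N) bool, True where an object slot is padding
        g = self.global_proj(global_feat).unsqueeze(1)   # (B, 1, H)
        objs = self.obj_proj(obj_feats)                  # (B, N, H)
        attended, attn_weights = self.interrelation(
            query=g, key=objs, value=objs, key_padding_mask=obj_mask)
        fused = torch.cat([g.squeeze(1), attended.squeeze(1)], dim=-1)
        return (self.decision_head(fused),      # driving-decision logits
                self.explanation_head(fused),   # explanation-class logits
                attn_weights)                   # per-object attention map


# Usage with random tensors standing in for real features.
model = InterrelationFusionModel()
decision, explanation, attn = model(torch.randn(2, 512), torch.randn(2, 6, 128))
print(decision.shape, explanation.shape, attn.shape)  # (2, 4) (2, 10) (2, 1, 6)
```

Returning the attention weights alongside the two task outputs mirrors the abstract's point that the interrelation module discloses which objects the decision attends to.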


    Title:
    Attention-Based Interrelation Modeling for Explainable Automated Driving

    Contributors:

    Published in:

    Publication date:
    2023-02-01

    Size:
    3384074 bytes

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English



    Similar titles:

    Towards Explainable Semantic Segmentation for Autonomous Driving Systems by Multi-Scale Variational Attention

    Abukmeil, Mohanad / Genovese, Angelo / Piuri, Vincenzo et al. | IEEE | 2021


    Integrating Explainable AI to Enhance Dynamic Risk Assessment in Automated Driving Systems

    Patel, Anil Ranjitbhai / Gohler, Tom / Liggesmeyer, Peter | IEEE | 2024


    Trusting Explainable Autonomous Driving: Simulated Studies

    Goldman, Claudia V. / Bustin, Ronit | IEEE | 2022


    Image transformer for explainable autonomous driving system

    Dong, Jiqian / Chen, Sikai / Zong, Shuya et al. | IEEE | 2021


    On investigating drivers’ attention allocation during partially-automated driving

    Eddine, Reem Jalal / Mulatti, Claudio / Biondi, Francesco N. | Springer Verlag | 2024
