Humans interacting with automated machines expect certain behaviors, either because they have experienced the behavior themselves (e.g., driving) or because they project such expectations onto the machine (e.g., a user would expect an AI-based personal assistant to recognize any sentence spoken in any accent). In reality, advanced AI systems may not behave perfectly, and their optimal decisions may differ from the subjectively optimal decisions a human user expects. This becomes a challenging problem for AI decision-making algorithms that control the complex behaviors of autonomous vehicles and are affected by uncertain environments and the vehicles' own sensing suites. This paper presents results from two large online user studies run in simulated autonomous driving scenarios. Our goal was to assess users' trust in the automated behaviors when presented with different explanations and HMI solutions. We found that specific explanations, which account for the risk of a driving scenario and what the vehicle is planning to do, can reduce discomfort and increase understanding of an automated driving maneuver. We also present a data-driven solution that automatically and probabilistically infers the explanation most suitable for a given driving context and user group, based on the data analysis and trust measures examined.
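
The record does not detail how the data-driven explanation selection works. A minimal sketch of one plausible formulation is shown below: study responses are used to estimate an empirical trust probability conditioned on the driving risk level, the user group, and the explanation style, and the highest-scoring explanation is chosen for a given context. All category names, counts, and the `best_explanation` helper are hypothetical placeholders for illustration, not the authors' actual model or data.

```python
from collections import defaultdict

# Hypothetical study records: (risk_level, user_group, explanation_style, trusted).
# These categories and counts are illustrative placeholders, not the paper's data.
responses = [
    ("high", "novice", "risk+intent", True),
    ("high", "novice", "intent_only", False),
    ("high", "expert", "risk+intent", True),
    ("low",  "novice", "intent_only", True),
    ("low",  "expert", "no_explanation", True),
    ("low",  "expert", "risk+intent", False),
]

# Estimate P(trusted | risk, group, explanation) from simple counts.
counts = defaultdict(lambda: [0, 0])  # key -> [trusted_count, total_count]
for risk, group, explanation, trusted in responses:
    key = (risk, group, explanation)
    counts[key][1] += 1
    if trusted:
        counts[key][0] += 1

def best_explanation(risk: str, group: str) -> str:
    """Pick the explanation style with the highest empirical trust probability
    for the given driving context (risk) and user group."""
    candidates = {
        exp: trusted / total
        for (r, g, exp), (trusted, total) in counts.items()
        if r == risk and g == group and total > 0
    }
    return max(candidates, key=candidates.get) if candidates else "risk+intent"

print(best_explanation("high", "novice"))  # -> "risk+intent" under this toy data
```

In practice such a model could be made richer (e.g., smoothing sparse counts or using a Bayesian classifier), but the core idea of conditioning explanation choice on context and user group stays the same.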


    Title: Trusting Explainable Autonomous Driving: Simulated Studies

    Contributors:

    Publication date: June 5, 2022

    Format / extent: 798,194 bytes

    Media type: Conference paper

    Format: Electronic resource

    Language: English



    Similar titles:

    Trusting Autonomous Machine Intelligence

    N. Alexandrov | NTIS | 2021



    Image transformer for explainable autonomous driving system

    Dong, Jiqian / Chen, Sikai / Zong, Shuya et al. | IEEE | 2021


    Trusting your senses

    Hardman, G. | Tema Archiv | 2008


    Grounded Relational Inference: Domain Knowledge Driven Explainable Autonomous Driving

    Tang, Chen / Srishankar, Nishan / Martin, Sujitha et al. | IEEE | 2024