The way people drive vehicles has a great impact on traffic safety, fuel consumption, and passenger experience. Many research and commercial efforts today have primarily leveraged the Inertial Measurement Unit (IMU) to characterize, profile, and understand how well people drive their vehicles. In this paper, we observe that such IMU data alone cannot always reveal a driver's context and therefore does not provide a comprehensive understanding of a driver's actions. We believe that an audio-visual infrastructure, with cameras and microphones, can be leveraged to augment IMU data, revealing driver context and improving analytics. For instance, such an audio-visual system can easily discern whether a hard-braking incident, as detected by an accelerometer, is the result of inattentive driving (e.g., a distracted driver) or evidence of alertness (e.g., a driver avoiding a deer). The focus of this work has been to design a relatively low-cost audio-visual infrastructure through which it is practical to gather such context information from various sensors and to develop a comprehensive understanding of why a particular driver may have taken different actions. In particular, we build a system called DrivAid that collects and analyzes visual and audio signals in real time, using computer vision techniques on a vehicle-based edge-computing platform, to complement the signals from traditional motion sensors. Driver privacy is preserved since the audio-visual data is mainly processed locally. We implement DrivAid on a low-cost embedded computer with a GPU and high-performance deep-learning inference support. In total, we have collected more than 1550 miles of driving data from multiple vehicles to build and test our system. The evaluation results show that DrivAid is able to process video streams from 4 cameras at a rate of 10 frames per second. DrivAid achieves an average event-detection accuracy of 90% and provides useful evaluation feedback to users in real time. Thanks to this efficient design, only around 36% of a trip's audio-visual data needs to be analyzed on average.
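The abstract's efficiency claim (only ~36% of audio-visual data analyzed per trip) suggests an event-gated pipeline: the IMU flags salient maneuvers, and only the footage surrounding each event is fed to the expensive vision stage. The Python sketch below illustrates that gating idea only; the threshold, padding window, and function names are assumptions for illustration, not DrivAid's published implementation.

```python
# Illustrative sketch (not DrivAid's published code): gate audio-visual
# analysis on IMU-detected events so only footage around each event is
# processed. Threshold, padding, and names are assumptions.

HARD_BRAKE_THRESHOLD_G = -0.4   # assumed deceleration threshold, in g

def detect_hard_brakes(accel_g, rate_hz, min_duration_s=0.2):
    """Return (start, end) sample indices of hard-braking episodes in a
    longitudinal-acceleration trace (values in g, sampled at rate_hz)."""
    below = [a < HARD_BRAKE_THRESHOLD_G for a in accel_g]
    events, start = [], None
    for i, flag in enumerate(below):
        if flag and start is None:
            start = i                      # episode begins
        elif not flag and start is not None:
            if (i - start) / rate_hz >= min_duration_s:
                events.append((start, i))  # long enough to count
            start = None
    if start is not None and (len(below) - start) / rate_hz >= min_duration_s:
        events.append((start, len(below)))
    return events

def clips_to_analyze(events, rate_hz, pad_s=3.0):
    """Map each IMU event to a padded time window (seconds). Only these
    windows of the synchronized camera/microphone streams would be sent
    to the vision pipeline; the rest of the trip is skipped."""
    return [(max(0.0, s / rate_hz - pad_s), e / rate_hz + pad_s)
            for s, e in events]

# Example: a 100 Hz trace with one 0.3 s hard brake starting at t = 1 s
trace = [0.0] * 100 + [-0.6] * 30 + [0.0] * 100
windows = clips_to_analyze(detect_hard_brakes(trace, rate_hz=100), rate_hz=100)
print(windows)   # -> [(0.0, 4.3)] with 3 s padding on each side
```

Under this kind of design, footage outside the returned windows never reaches the deep-learning stage, which is consistent with only a fraction of a trip's audio-visual data being analyzed.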


    Title: DrivAid: Augmenting Driving Analytics with Multi-Modal Information

    Contributors: Qi, Bozhao (author) / Liu, Peng (author) / Ji, Tao (author) / Zhao, Wei (author) / Banerjee, Suman (author)

    Publication date: 2018-12-01

    Size: 21967090 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    Augmenting autonomous driving with remote viewer recommendation
    VAUGHN ROBERT / BARON CASEY | European Patent Office | 2023

    Augmenting autonomous driving with remote viewer recommendation
    VAUGHN ROBERT / BARON CASEY | European Patent Office | 2024

    TopoTP: Augmenting Driving Topology Reasoning with Dynamic Traffic Participants

    Yao, Ziying / Xiong, Zhongxia / Liu, Xuan et al. | IEEE | 2024


    Augmenting ADS-B with Traffic Information Service

    Zeitlin, A. D. / Strain, R. C. / IEEE Aerospace and Electronics Systems Society | British Library Conference Proceedings | 2003