Accurate vision-based driver attention estimation is challenging because of the limitations of visual sensors, yet it is a critical and fundamental capability for building a human-centered intelligent driving system. Unlike previous investigations that treat it as a classification task, this study introduces scenario contextual information to improve accuracy and obtain a fine-grained estimate. To this end, a data-driven hybrid architecture for context-aware driver attention estimation is proposed that jointly models the driving scene and the driver's state. A visual saliency map is typically assumed to highlight the distinct areas that capture human attention. To exploit this characteristic, a multi-hierarchy fusion network is proposed to effectively extract saliency features from the scene image. A gaze-tracking network is employed to estimate the driver's potential focus zone, and this coarse estimate is subsequently refined using the extracted saliency information to obtain a fine-grained estimate. Three related, commonly used task-agnostic and task-driven datasets are adopted to evaluate the proposed saliency estimation model, and experimental results show that it achieves state-of-the-art performance. To verify the joint modeling methodology, two new driving attention datasets supplemented with driver information are collected on the basis of existing ones. Comparative experiments indicate that incorporating saliency features significantly improves gaze-fixation estimation performance, demonstrating the feasibility and efficiency of the proposed method.
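As an illustration of the joint modeling described in the abstract, the sketch below shows one plausible PyTorch instantiation: a multi-hierarchy scene encoder whose levels are fused into a saliency map, a lightweight gaze-zone classifier, and a refinement step that re-weights the coarse zone probabilities by the saliency mass falling inside each zone. The module names, layer sizes, the nine-zone grid, and the re-weighting rule are all illustrative assumptions; the paper's actual architecture is not reproduced here.

```python
# Minimal sketch of the described pipeline. All module names, layer sizes,
# the nine-zone grid, and the saliency-based re-weighting rule are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHierarchySaliencyNet(nn.Module):
    """Fuses scene features extracted at several depths into one saliency map."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())
        # 1x1 convs project each hierarchy level down to a single channel.
        self.proj = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in (32, 64, 128))

    def forward(self, scene):
        f1 = self.stage1(scene)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        size = f1.shape[-2:]
        # Upsample each projected level to a common resolution, then sum.
        fused = sum(
            F.interpolate(p(f), size=size, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, (f1, f2, f3))
        )
        return torch.sigmoid(fused)  # (B, 1, H, W) saliency map in [0, 1]


class GazeZoneNet(nn.Module):
    """Maps a driver-facing image to coarse gaze-zone logits."""

    def __init__(self, num_zones=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_zones)

    def forward(self, driver):
        return self.head(self.backbone(driver))


def refine_attention(zone_logits, saliency, zone_masks, eps=1e-6):
    """Re-weights coarse zone probabilities by the saliency mass inside each
    zone; one plausible reading of refining the coarse estimate with the
    extracted saliency information."""
    probs = F.softmax(zone_logits, dim=1)                          # (B, Z)
    mass = (saliency * zone_masks.unsqueeze(0)).sum(dim=(-2, -1))  # (B, Z)
    refined = probs * (mass + eps)
    return refined / refined.sum(dim=1, keepdim=True)


# Usage with random inputs; zone_masks partitions the scene into a 3x3 grid.
scene = torch.randn(2, 3, 128, 256)
driver = torch.randn(2, 3, 128, 128)
saliency = MultiHierarchySaliencyNet()(scene)  # (2, 1, 64, 128)
H, W = saliency.shape[-2:]
zone_masks = torch.zeros(9, H, W)
for z in range(9):
    r, c = divmod(z, 3)
    zone_masks[z, r * H // 3:(r + 1) * H // 3, c * W // 3:(c + 1) * W // 3] = 1.0
refined = refine_attention(GazeZoneNet()(driver), saliency, zone_masks)
print(refined.shape)  # torch.Size([2, 9])
```

The multiplicative re-weighting shown here is only one way to couple the two streams; the paper's architecture may instead fuse the saliency and gaze features earlier in the network.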
Context-Aware Driver Attention Estimation Using Multi-Hierarchy Saliency Fusion With Gaze Tracking
IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 8, pp. 8602-8614, Aug. 2024