This paper addresses the challenges roadside sensing infrastructure faces in providing reliable vehicle classification and positioning data and in sharing that data with connected and automated vehicles. For this purpose, a novel sensor fusion model is proposed that fuses data streams from a traffic classification radar and a video camera, leveraging the best characteristics of each sensor. The goal is to extend vehicles' sensing horizon while enhancing object classification performance and disseminating data to neighboring vehicles via the Collective Perception Message (CPM). Our model incorporates two artificial intelligence (AI) models for object identification and data fusion. Experimental results indicate a 23.73% improvement over the native radar classification under optimal conditions, showcasing the model's efficacy. Comprehensive data analysis reveals the model's resilience under diverse conditions, outperforming both radar-only and camera-only classifications. The fused model corrects camera errors, achieving superior accuracy, especially in nighttime scenarios. This research contributes to safer and more efficient cooperative driving experiences, demonstrating the effectiveness of the sensor fusion approach under varying environmental conditions.
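To make the fusion idea concrete, below is a minimal sketch of decision-level fusion of per-object class scores from a roadside radar and a camera detector. The abstract does not specify the paper's fusion architecture, so the label set, fixed weights, and score vectors here are illustrative assumptions only, not the authors' method.

```python
# Minimal sketch: weighted late fusion of radar and camera class scores.
# CLASSES, the weights, and the example score vectors are hypothetical.
import numpy as np

CLASSES = ["car", "truck", "bus", "motorcycle"]  # assumed label set

def fuse_classifications(radar_scores: np.ndarray,
                         camera_scores: np.ndarray,
                         radar_weight: float = 0.4,
                         camera_weight: float = 0.6) -> str:
    """Combine per-class confidence vectors with a fixed weighted sum
    and return the winning class label."""
    fused = radar_weight * radar_scores + camera_weight * camera_scores
    fused /= fused.sum()  # renormalize to a probability distribution
    return CLASSES[int(np.argmax(fused))]

# Example: the camera is confident it sees a truck, the radar leans
# toward a car; the fused decision follows the stronger weighted evidence.
radar = np.array([0.55, 0.30, 0.10, 0.05])
camera = np.array([0.20, 0.70, 0.05, 0.05])
print(fuse_classifications(radar, camera))  # -> "truck"
```

In a deployed system the weights would typically be learned or conditioned on context (e.g., down-weighting the camera at night, where the abstract reports the fused model correcting camera errors), rather than fixed as in this sketch.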
Sensor Fusion for Improved Cooperative Perception in CCAM
2024-06-24
Conference paper