Multi-modal sensor fusion plays a vital role in achieving high-quality roadside perception for intelligent traffic monitoring. Unlike on-board sensors in autonomous driving, roadside sensors present heightened calibration complexity, which challenges the spatial alignment required for data fusion. Existing spatial alignment methods typically focus on one-to-one alignment between cameras and radar sensors and require precise calibration; when applied to large-scale roadside monitoring networks, they can be difficult to implement and vulnerable to environmental influences. In this paper, we present a spatial alignment framework that uses geolocation cues to enable multi-view alignment across distributed multi-sensor systems. In this framework, a deep learning-based camera calibration model combined with angle and distance estimation performs monocular geolocation estimation. A camera parameter approaching method then searches for pseudo camera parameters that tolerate the calibration errors that are inevitable in practice. Finally, the geolocation information is used for data association between Light Detection and Ranging (LiDAR) sensors and cameras. The framework was deployed and tested at several intersections in Hangzhou. Experimental results show that it achieves geolocation estimation errors of less than 1.1 m for vehicles traversing the monitored zone, demonstrating that spatial alignment can be accomplished in a single execution and applied to large-scale roadside sensor fusion scenarios.
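The abstract provides no code, but the pipeline it describes can be pictured with a minimal sketch: assuming pseudo camera parameters (intrinsics K, world-to-camera rotation R and translation t) are available, a detected pixel is back-projected onto the ground plane to obtain a geolocation, which is then matched to the nearest LiDAR detection in the same local frame. The function names, the pinhole back-projection, the greedy nearest-neighbor matching, and the 1.5 m gate are illustrative assumptions, not the paper's implementation.

# Illustrative sketch only; all names and parameters are assumptions.
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    # K: 3x3 intrinsics; R, t: world-to-camera pose, x_cam = R @ x_world + t.
    # Returns the (x, y) intersection of the viewing ray with the plane z = 0.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam          # rotate the viewing ray into the world frame
    cam_center = -R.T @ t              # camera center in world coordinates
    s = -cam_center[2] / ray_world[2]  # scale that brings the ray down to z = 0
    ground = cam_center + s * ray_world
    return ground[:2]

def associate_by_geolocation(cam_points, lidar_points, max_dist=1.5):
    # Greedy nearest-neighbor matching of camera-derived geolocations (N x 2)
    # to LiDAR detections (M x 2); pairs farther than max_dist metres are dropped.
    pairs, used = [], set()
    for i, p in enumerate(cam_points):
        d = np.linalg.norm(lidar_points - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_dist and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

In such a setup, cam_points would come from the monocular geolocation step and lidar_points from LiDAR detections expressed in the same local map frame; the matched pairs then serve as the basis for multi-view data fusion.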
A Spatial Alignment Framework Using Geolocation Cues for Roadside Multi-View Multi-Sensor Fusion
24.09.2023
2634643 bytes
Conference paper
Electronic resource
English
Fusion of Multi-View Silhouette Cues Using a Space Occupancy Grid
British Library Conference Proceedings | 2005
Target trajectory fusion method based on multi-point roadside sensor
Europäisches Patentamt | 2024