Multi-modal sensor fusion plays a vital role in achieving high-quality roadside perception for intelligent traffic monitoring. Unlike on-board sensors in autonomous driving, roadside sensors present heightened calibration complexity, which makes spatial alignment for data fusion challenging. Existing spatial alignment methods typically focus on one-to-one alignment between cameras and radar sensors and require precise calibration; when applied to large-scale roadside monitoring networks, they can be difficult to implement and vulnerable to environmental influences. In this paper, we present a spatial alignment framework that uses geolocation cues to enable multi-view alignment across distributed multi-sensor systems. In this framework, a deep learning-based camera calibration model, combined with angle and distance estimation, performs monocular geolocation estimation. A camera parameter approaching method then searches for pseudo camera parameters that can tolerate the calibration errors that are inevitable in practice. Finally, the geolocation information is used for data association between Light Detection and Ranging (LiDAR) sensors and cameras. The framework was deployed and tested at several intersections in Hangzhou. Experimental results show geolocation estimation errors of less than 1.1 m for vehicles traversing the monitored zone, demonstrating that the framework can accomplish spatial alignment in a single execution and can be applied to large-scale roadside sensor fusion scenarios.
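To make the abstract's pipeline concrete, the sketch below illustrates two of the ideas it describes: converting a camera's angle and distance estimate for a vehicle into a latitude/longitude, and associating camera and LiDAR detections by geolocation. This is a minimal sketch under stated assumptions, not the paper's implementation: the flat-earth (equirectangular) approximation, the 2 m association gate, the example coordinates, and all function names are illustrative.

```python
# Hypothetical sketch: monocular geolocation from bearing/distance, then
# geolocation-based camera-LiDAR data association. All names and numeric
# choices here are assumptions for illustration, not from the paper.
import math
from scipy.optimize import linear_sum_assignment

EARTH_RADIUS_M = 6_371_000.0

def offset_geolocation(lat, lon, bearing_deg, distance_m):
    """Move (lat, lon) by distance_m along bearing_deg (clockwise from north).

    Uses a flat-earth approximation, adequate at the tens-of-metres
    ranges of a roadside sensor."""
    b = math.radians(bearing_deg)
    dlat = distance_m * math.cos(b) / EARTH_RADIUS_M
    dlon = distance_m * math.sin(b) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)

def geo_distance_m(p, q):
    """Approximate metres between two (lat, lon) pairs (same flat-earth model)."""
    dlat = math.radians(q[0] - p[0]) * EARTH_RADIUS_M
    dlon = math.radians(q[1] - p[1]) * EARTH_RADIUS_M * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

def associate(camera_geos, lidar_geos, gate_m=2.0):
    """One-to-one matching of camera and LiDAR detections on geolocation cost.

    Returns (camera_index, lidar_index) pairs whose distance is within gate_m."""
    cost = [[geo_distance_m(c, l) for l in lidar_geos] for c in camera_geos]
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r][c] <= gate_m]

# Example: a camera at a known geolocation estimates a vehicle 25 m away
# at bearing 40 degrees; the LiDAR reports two nearby detections.
cam_lat, cam_lon = 30.2741, 120.1551  # illustrative Hangzhou coordinates
vehicle = offset_geolocation(cam_lat, cam_lon, bearing_deg=40.0, distance_m=25.0)
lidar = [(30.27427, 120.15527), (30.27350, 120.15600)]
print(associate([vehicle], lidar))  # matches the first LiDAR detection
```

The one-to-one assignment with a distance gate mirrors the abstract's one-to-one alignment setting; a real deployment would add temporal synchronization and track-level smoothing, which this sketch omits.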





    Title:

    A Spatial Alignment Framework Using Geolocation Cues for Roadside Multi-View Multi-Sensor Fusion


    Contributors:
    Zhao, Zhiguo (author) / Li, Yong (author) / Chen, Yunli (author) / Zhang, Xiaoting (author) / Tian, Rui (author)


    Publication date:

    24 September 2023


    Format / Extent:

    2634643 bytes


    Media type:

    Conference paper


    Format:

    Electronic resource


    Language:

    English



    Similar titles:

    Fusion of Multi-View Silhouette Cues Using a Space Occupancy Grid

    Franco, J.-S. / Boyer, E. / IEEE | British Library Conference Proceedings | 2005


    Target trajectory fusion method based on multi-point roadside sensor

    HUANG ZILI | European Patent Office | 2024


    Roadside multi-sensor data fusion based on adaptive federal Kalman filtering

    Chai, Congcheng / Yang, Tao / Lyu, Nengchao | IEEE | 2023


    A Method of Lane Departure Identification Based on Roadside Multi-Sensor Fusion

    Liu, Pengfei / Yu, Guizhen / Zhou, Bin et al. | ASCE | 2020

