Vision-based future vehicle localization provides intuitive trajectory prediction, serving as a critical foundation for Advanced Driver Assistance Systems (ADAS) to formulate collision avoidance decisions. Among vision-based localization approaches, ego-view trajectory prediction has proven effective for driver monitoring and intervention. This method aligns closely with human perceptual processing, making it essential for the Driver-in-the-Loop (DIL) development stage of modern ADAS. However, most existing ego-view trajectory prediction approaches rely on two-dimensional image-based predictions, leaving a gap with respect to human three-dimensional perception. This disparity negatively impacts the accuracy and timeliness of driver decision-making and intervention. In this paper, we propose MFV3DL (Monocular Vision Method for Future Vehicle 3D Localization), a dual-stream framework that integrates 2D image trajectory prediction and depth prediction to achieve future vehicle 3D localization. To enhance accuracy, we leverage Multi-Object Tracking and Segmentation (MOTS) results and depth estimation as inputs to the dual-stream architecture. Additionally, we introduce a Related Information Fusion (RIF) unit to enable cross-modal interaction between the two streams. For the depth stream, we propose a ConvLSTM-based depth prediction method. Experimental results on the KITTI dataset demonstrate that MFV3DL outperforms state-of-the-art methods. In diverse driving scenarios, MFV3DL achieves superior 3D visualization results compared to 2D trajectory-based predictions. Baseline comparisons and ablation studies further confirm the benefit of the proposed ConvLSTM-based depth prediction, the dual-stream architecture, and the RIF unit for the 3D localization task.
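
The abstract describes the architecture only at a high level. Below is a minimal PyTorch sketch of how such a dual-stream predictor could be wired, assuming past MOTS instance masks and estimated depth maps as the two input sequences; the ConvLSTMCell, RIFUnit, and DualStreamSketch modules, the gated fusion rule, and all layer sizes are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Standard ConvLSTM cell: all four gates computed by one shared 2D convolution."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size,
                               padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c


class RIFUnit(nn.Module):
    """Hypothetical stand-in for the Related Information Fusion unit:
    a gated residual exchange between the trajectory and depth streams."""

    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, kernel_size=1), nn.Sigmoid())

    def forward(self, traj_feat, depth_feat):
        g = self.gate(torch.cat([traj_feat, depth_feat], dim=1))
        return traj_feat + g * depth_feat, depth_feat + g * traj_feat


class DualStreamSketch(nn.Module):
    """Toy dual-stream predictor: one ConvLSTM over past MOTS masks (2D trajectory
    stream), one over past depth maps (depth stream), coupled by the RIF unit."""

    def __init__(self, ch=16):
        super().__init__()
        self.traj_cell = ConvLSTMCell(in_ch=1, hid_ch=ch)
        self.depth_cell = ConvLSTMCell(in_ch=1, hid_ch=ch)
        self.rif = RIFUnit(ch)
        self.traj_head = nn.Conv2d(ch, 1, kernel_size=1)   # future 2D object mask
        self.depth_head = nn.Conv2d(ch, 1, kernel_size=1)  # future depth map

    def forward(self, masks, depths):
        # masks, depths: (batch, time, 1, H, W) observed input sequences
        b, t, _, height, width = masks.shape
        ch = self.traj_cell.hid_ch
        h_t = c_t = masks.new_zeros(b, ch, height, width)
        h_d = c_d = depths.new_zeros(b, ch, height, width)
        for k in range(t):
            h_t, c_t = self.traj_cell(masks[:, k], (h_t, c_t))
            h_d, c_d = self.depth_cell(depths[:, k], (h_d, c_d))
            h_t, h_d = self.rif(h_t, h_d)  # cross-modal interaction at every step
        return self.traj_head(h_t), self.depth_head(h_d)


if __name__ == "__main__":
    model = DualStreamSketch()
    masks = torch.rand(2, 5, 1, 64, 64)   # 5 past frames of instance masks
    depths = torch.rand(2, 5, 1, 64, 64)  # 5 past frames of estimated depth
    future_mask, future_depth = model(masks, depths)
    print(future_mask.shape, future_depth.shape)  # both torch.Size([2, 1, 64, 64])

One natural way to combine the two stream outputs, consistent with how the abstract frames the integration, is to back-project the predicted 2D vehicle location together with the predicted depth through the camera intrinsics to obtain the future 3D position.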


    Title:
    MFV3DL: Monocular Vision Method for Future Vehicle 3D Localization

    Contributors:
    Zhang, Wen (author) / Sun, Zhexuan (author) / Lv, Shengrong (author) / Mei, Konghao (author) / Chen, Guangkun (author) / Yang, Zhenya (author)

    Publication date:
    1 July 2025

    Format / extent:
    3,159,773 bytes

    Media type:
    Journal article

    Format:
    Electronic resource

    Language:
    English




    Similar titles:

    Research on the Localization of Unmanned Flight Vehicle Based on the Monocular Vision

    Liang, Zhuang / Shen, Haomin / Chen, Tianyu et al. | TIBKAT | 2022


    Research on the Localization of Unmanned Flight Vehicle Based on the Monocular Vision

    Liang, Zhuang / Shen, Haomin / Chen, Tianyu et al. | Springer Verlag | 2021


    Monocular Vision for Mobile Robot Localization and Autonomous Navigation

    Royer, E. / Lhuillier, M. / Dhome, M. et al. | British Library Online Contents | 2007


    Research on the Localization of Unmanned Flight Vehicle Based on the Monocular Vision

    Liang, Zhuang / Shen, Haomin / Chen, Tianyu et al. | British Library Conference Proceedings | 2022