Multispectral pedestrian detection based on RGB-thermal (RGB-T) cameras has been actively studied for autonomous driving in recent years because of its robustness in complex traffic scenes. However, fusing multispectral data poses several challenges. First, the fusion method must dynamically adjust fusion weights to account for environmental influences such as illumination and temperature. Second, effective feature fusion requires handling slight misalignment between visual sensors and enhancing the features of inconspicuous targets in traffic scenes. To address these problems, we propose a novel network with three effective modules. In contrast to previous methods that use a global fusion weight, the region-based illumination and temperature aware (RITA) module adopts a dual-pipeline structure to generate five regional fusion weights, which comprehensively capture global and regional environmental information. In addition, unlike previous one-stage fusion strategies, a two-stage refined modality fusion is realized by two modules. The spatial-aligned modal fusion (SAMF) module generates fused features with large-scale spatial attention masks, which enhance corresponding features and alleviate the slight misalignment between modalities. The object-correlated cross-modality enhancement (OCE) module complements the fused modality with effective features by establishing inter-pedestrian relationships and enhancing the features of inconspicuous pedestrians. Our method achieves average miss rates of 7.64% and 21.3% on the two challenging multispectral pedestrian datasets KAIST and CVC-14, respectively, and outperforms the competitive BAANet by 10.35% in miss rate for distant pedestrians on KAIST, demonstrating its advantages over state-of-the-art methods.
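
The abstract describes the RITA module only at a high level. As a rough illustration of region-based fusion weighting, the following minimal PyTorch sketch assumes the feature map is split into a 2x2 grid of regions plus one global descriptor, yielding the five fusion weights mentioned above; the class name `RegionFusionWeights`, the pooling scheme, and the small MLP are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: region layout, pooling, and MLP are assumptions
# inspired by the abstract, not the published RITA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionFusionWeights(nn.Module):
    """Predict one RGB-vs-thermal fusion weight per spatial region.

    Assumes a grid x grid split plus one global region; with grid=2 this
    yields the five regional fusion weights mentioned in the abstract.
    """

    def __init__(self, channels: int, grid: int = 2):
        super().__init__()
        self.grid = grid
        # Small MLP mapping pooled bimodal statistics to a weight in (0, 1).
        self.mlp = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, f_rgb: torch.Tensor, f_th: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f_rgb.shape
        x = torch.cat([f_rgb, f_th], dim=1)                      # (B, 2C, H, W)
        # One pooled descriptor per grid cell, plus one global descriptor.
        regional = F.adaptive_avg_pool2d(x, self.grid)           # (B, 2C, g, g)
        regional = regional.flatten(2).transpose(1, 2)           # (B, g*g, 2C)
        global_ = x.mean(dim=(2, 3)).unsqueeze(1)                # (B, 1, 2C)
        descriptors = torch.cat([regional, global_], dim=1)      # (B, g*g+1, 2C)
        w_region = self.mlp(descriptors)                         # (B, g*g+1, 1)
        # Broadcast the grid weights back to full resolution and average
        # them with the global weight before blending the two modalities.
        w_grid = w_region[:, : self.grid ** 2].transpose(1, 2)   # (B, 1, g*g)
        w_grid = w_grid.reshape(b, 1, self.grid, self.grid)
        w_map = F.interpolate(w_grid, size=(h, w), mode="nearest")
        w_global = w_region[:, -1].reshape(b, 1, 1, 1)
        alpha = 0.5 * (w_map + w_global)
        return alpha * f_rgb + (1.0 - alpha) * f_th


# Example usage with random feature maps:
# rita = RegionFusionWeights(channels=256)
# fused = rita(torch.randn(2, 256, 40, 32), torch.randn(2, 256, 40, 32))
```

The sigmoid weight keeps the fusion a convex combination of the two modalities, so poorly lit regions can lean on thermal features while well-lit regions favor RGB; per the abstract, the actual module additionally conditions on temperature cues through its second pipeline.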


    Title:

    Region-Based Illumination-Temperature Awareness and Cross-Modality Enhancement for Multispectral Pedestrian Detection


    Contributors:
    Liu, Yanhao (author) / Hu, Chuan (author) / Zhao, Baixuan (author) / Huang, Yonghui (author) / Zhang, Xi (author)

    Published in:

    Publication date:

    2024-10-01


    Format / Extent:

    3740835 bytes


    Media type:

    Journal article


    Format:

    Electronic resource


    Language:

    English



    Similar items:

    Multispectral pedestrian detection based on feature complementation and enhancement

    Nie, Linzhen / Lu, Meihe / He, Zhiwei et al. | DOAJ | 2024

    Free access

    Multispectral pedestrian detection based on feature complementation and enhancement

    Nie, Linzhen / Lu, Meihe / He, Zhiwei et al. | Wiley | 2024

    Free access


    Toward Generalizable Multispectral Pedestrian Detection

    Chu, Fuchen / Cao, Jiale / Song, Zhanjie et al. | IEEE | 2024


    Incremental Cross-Modality Deep Learning for Pedestrian Recognition

    Pop, Danut Ovidiu / Rogozan, Alexandria / Nashashibi, Fawzi et al. | British Library Conference Proceedings | 2017