Multispectral pedestrian detection based on RGB-thermal (RGB-T) cameras has been actively studied for autonomous driving in recent years because of its robustness in complex traffic scenes. However, the fusion of multispectral data poses several challenges. First, the fusion method requires dynamic adjustment of fusion weights to account for environmental influences such as illumination and temperature. Second, effective feature fusion requires handling slight misalignment between the visual sensors and enhancing the features of inconspicuous targets in traffic scenes. To solve these problems, we propose a novel network with three effective modules. In contrast to previous methods that rely on a single global fusion weight, the region-based illumination and temperature aware (RITA) module adopts a dual-pipeline structure to generate five regional fusion weights, which comprehensively capture both global and regional environmental information. In addition, unlike previous one-stage fusion strategies, a two-stage refined modality fusion is realized by two modules. The spatial-aligned modal fusion (SAMF) module generates fusion features with large-scale spatial attention masks, which enhance the corresponding features and alleviate slight misalignment between modalities. The object-correlated cross-modality enhancement (OCE) module complements the fused modality with effective features by establishing inter-pedestrian relationships and enhancing the features of inconspicuous pedestrians. On the two challenging multispectral pedestrian datasets KAIST and CVC-14, our method achieves average miss rates of 7.64% and 21.3%, respectively, and outperforms the competitive BAANet by 10.35% in the miss rate for distant pedestrians on KAIST, demonstrating the advantages of the proposed method over state-of-the-art approaches.
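For illustration only, below is a minimal PyTorch-style sketch of region-based fusion weighting of RGB and thermal feature maps, loosely in the spirit of the regional-weight idea described in the abstract; it is not the authors' implementation. All names and design choices here (RegionFusionWeights, n_regions, the strip-shaped region grid, the small prediction head) are assumptions made for this sketch rather than details taken from the paper.

# Minimal sketch (not the authors' code): region-wise fusion of RGB and thermal
# feature maps using learned per-region weights. Region layout, head design,
# and all names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RegionFusionWeights(nn.Module):
    """Predicts one RGB-vs-thermal fusion weight in [0, 1] per spatial region."""

    def __init__(self, in_channels: int, n_regions: int = 5):
        super().__init__()
        self.n_regions = n_regions
        # Small head that maps a pooled regional descriptor of both modalities
        # to a single scalar weight for that region.
        self.head = nn.Sequential(
            nn.Linear(2 * in_channels, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, 1),
        )

    def forward(self, f_rgb: torch.Tensor, f_th: torch.Tensor) -> torch.Tensor:
        # f_rgb, f_th: (B, C, H, W) feature maps from the RGB and thermal streams.
        b, c, h, w = f_rgb.shape
        # Coarse regional descriptors: a 1 x n_regions grid of vertical strips.
        pooled = F.adaptive_avg_pool2d(torch.cat([f_rgb, f_th], dim=1),
                                       (1, self.n_regions))       # (B, 2C, 1, R)
        pooled = pooled.flatten(2).transpose(1, 2)                 # (B, R, 2C)
        w_region = torch.sigmoid(self.head(pooled))                # (B, R, 1)
        # Broadcast the regional weights back to the full feature resolution.
        w_map = w_region.reshape(b, 1, 1, self.n_regions)
        return F.interpolate(w_map, size=(h, w), mode="nearest")   # (B, 1, H, W)


def fuse(f_rgb: torch.Tensor, f_th: torch.Tensor, weighter: RegionFusionWeights):
    """Convex, region-wise combination of the two modalities."""
    w = weighter(f_rgb, f_th)
    return w * f_rgb + (1.0 - w) * f_th


if __name__ == "__main__":
    rgb_feat = torch.randn(2, 128, 40, 32)
    th_feat = torch.randn(2, 128, 40, 32)
    fused = fuse(rgb_feat, th_feat, RegionFusionWeights(in_channels=128))
    print(fused.shape)  # torch.Size([2, 128, 40, 32])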


    Title:
    Region-Based Illumination-Temperature Awareness and Cross-Modality Enhancement for Multispectral Pedestrian Detection

    Contributors:
    Liu, Yanhao (author) / Hu, Chuan (author) / Zhao, Baixuan (author) / Huang, Yonghui (author) / Zhang, Xi (author)

    Published in:

    Publication date:
    2024-10-01

    Size:
    3740835 byte

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English



    Multispectral pedestrian detection based on feature complementation and enhancement

    Nie, Linzhen / Lu, Meihe / He, Zhiwei et al. | DOAJ | 2024


    Multispectral pedestrian detection based on feature complementation and enhancement

    Nie, Linzhen / Lu, Meihe / He, Zhiwei et al. | Wiley | 2024



    Toward Generalizable Multispectral Pedestrian Detection

    Chu, Fuchen / Cao, Jiale / Song, Zhanjie et al. | IEEE | 2024


    Incremental Cross-Modality deep learning for pedestrian recognition

    Pop, Danut Ovidiu / Rogozan, Alexandrina / Nashashibi, Fawzi et al. | IEEE | 2017