Freespace detection is an important part of autonomous driving technology. Compared with structured on-road scenes, unstructured off-road scenes pose more challenges. Multi-modal fusion is a viable solution to these challenges, but existing fusion methods do not fully utilize multi-modal features. In this paper, we propose an effective multi-modal network named M2F2-Net for freespace detection in unstructured off-road scenes. We propose a multi-modal feature fusion strategy named Multi-modal Cross Fusion (MCF). The MCF module is simple yet effective in fusing the features of RGB images and surface normal maps. Meanwhile, a multi-modal segmentation decoder module is designed to decouple the segmentation of the two modalities, which further helps fully exploit the features of both. To address the difficulty of extracting road edges in unstructured scenes, we also propose an edge segmentation decoder module. Extensive experiments show that our approach leads to significant improvements of 6.1% in F1-score and 10.8% in IoU. Our code will be available at https://github.com/yhl1010/M2F2-Net.
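The abstract does not describe how the MCF module actually combines the two streams. Purely as a rough illustration, and not the authors' implementation, the sketch below shows one generic way to cross-fuse RGB and surface-normal feature maps, where each modality is gated by an attention map computed from the other; all module and variable names here are hypothetical.

```python
import torch
import torch.nn as nn

class CrossFusionBlock(nn.Module):
    """Illustrative two-stream fusion block (assumption, not the paper's MCF):
    each modality's features are re-weighted by a sigmoid gate computed from
    the other modality, then the two streams are merged by a 1x1 convolution."""

    def __init__(self, channels: int):
        super().__init__()
        # Gates derived from the opposite modality (hypothetical design choice).
        self.rgb_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.normal_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Merge the two cross-weighted streams back into a single feature map.
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, rgb_feat: torch.Tensor, normal_feat: torch.Tensor) -> torch.Tensor:
        rgb_weighted = rgb_feat * self.normal_gate(normal_feat)    # RGB gated by normals
        normal_weighted = normal_feat * self.rgb_gate(rgb_feat)    # normals gated by RGB
        return self.merge(torch.cat([rgb_weighted, normal_weighted], dim=1))

# Example: fuse 64-channel feature maps from the two encoder streams.
rgb_feat = torch.randn(1, 64, 40, 80)
normal_feat = torch.randn(1, 64, 40, 80)
fused = CrossFusionBlock(64)(rgb_feat, normal_feat)
print(fused.shape)  # torch.Size([1, 64, 40, 80])
```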
M2F2-Net: Multi-Modal Feature Fusion for Unstructured Off-Road Freespace Detection
2023-06-04
3160808 bytes
Conference paper
Electronic Resource
English
Communications: Getting more freespace optical data; a boost for "Li-Fi"
British Library Online Contents | 2016
Unstructured road detection via combining the model-based and feature-based methods
IET | 2019