Autonomous driving, which integrates wireless communication, intelligent computing, and environmental perception, not only improves traffic safety and reduces vehicle accidents but also alleviates traffic congestion by optimizing traffic flow and provides passengers with a comfortable and convenient travel experience. However, current autonomous driving technology is generally at the L3-L4 levels and faces many challenges, one of which is semantic segmentation. Semantic segmentation enables vehicles to correctly distinguish elements of the surrounding environment, such as roads, vehicles, and pedestrians. It assists drivers in perceiving the environment and making correct decisions, improving driving safety. However, most current semantic segmentation work focuses on improving recognition accuracy while neglecting inference speed. Even slightly higher latency can prevent vehicles from making timely and correct decisions, leading to accidents such as collisions. Therefore, we propose a Multi-Level Real-time Fusion Semantic Segmentation Network (MLRFNet) that improves inference speed while maintaining high semantic segmentation accuracy for autonomous driving. MLRFNet utilizes two lightweight branches to effectively extract RGB and depth features at low computational cost. In addition, a Feature Fusion Module (FFM) aggregates complementary features from the two branches, while a Cross-Level Refine Module (CRM) merges high-level semantic features with low-level spatial information. Extensive experiments demonstrate that MLRFNet significantly improves inference speed while maintaining high accuracy. On the Cityscapes validation set, MLRFNet achieves 251.8 FPS and 71.4% mIoU for 512 × 1024 image inputs.
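To make the described two-branch design concrete, the following is a minimal PyTorch-style sketch of an RGB-depth fusion network with a feature fusion module and a cross-level refinement module. The module names FFM and CRM come from the abstract, but every other detail here (the LightweightBranch encoder, channel widths, gated fusion, and the MLRFNetSketch wrapper) is an illustrative assumption, not the authors' published implementation.

```python
# Hypothetical sketch only: FFM and CRM names follow the abstract, but all
# layer choices, channel widths, and fusion rules below are assumptions,
# not the published MLRFNet architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBNReLU(nn.Sequential):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )


class LightweightBranch(nn.Module):
    """Tiny strided encoder standing in for one of the two lightweight branches."""
    def __init__(self, in_ch, widths=(32, 64, 128)):
        super().__init__()
        stages, prev = [], in_ch
        for w in widths:
            stages.append(ConvBNReLU(prev, w, stride=2))
            prev = w
        self.stages = nn.ModuleList(stages)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # low-level -> high-level features


class FFM(nn.Module):
    """Feature Fusion Module (assumed): gated combination of RGB and depth features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * ch, ch, 1), nn.Sigmoid())

    def forward(self, rgb, depth):
        g = self.gate(torch.cat([rgb, depth], dim=1))
        return g * rgb + (1 - g) * depth


class CRM(nn.Module):
    """Cross-Level Refine Module (assumed): upsample high-level semantics and
    refine them with low-level spatial detail."""
    def __init__(self, high_ch, low_ch):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch, low_ch, 1)
        self.refine = ConvBNReLU(low_ch, low_ch)

    def forward(self, high, low):
        high = F.interpolate(self.reduce(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        return self.refine(high + low)


class MLRFNetSketch(nn.Module):
    def __init__(self, num_classes=19, widths=(32, 64, 128)):
        super().__init__()
        self.rgb_branch = LightweightBranch(3, widths)
        self.depth_branch = LightweightBranch(1, widths)
        self.ffms = nn.ModuleList(FFM(w) for w in widths)
        self.crms = nn.ModuleList(
            CRM(widths[i + 1], widths[i]) for i in range(len(widths) - 1)
        )
        self.head = nn.Conv2d(widths[0], num_classes, 1)

    def forward(self, rgb, depth):
        # Fuse RGB and depth features at each level, then refine top-down.
        fused = [ffm(r, d) for ffm, r, d in
                 zip(self.ffms, self.rgb_branch(rgb), self.depth_branch(depth))]
        x = fused[-1]
        for i in range(len(fused) - 2, -1, -1):
            x = self.crms[i](x, fused[i])
        logits = self.head(x)
        return F.interpolate(logits, scale_factor=2, mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    net = MLRFNetSketch()
    rgb = torch.randn(1, 3, 512, 1024)    # Cityscapes-sized RGB input
    depth = torch.randn(1, 1, 512, 1024)  # single-channel depth map
    print(net(rgb, depth).shape)          # -> torch.Size([1, 19, 512, 1024])
```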
MLRFNet: Multi-Level Real-Time Fusion Semantic Segmentation Network for Autonomous Driving
24.03.2025
7722418 bytes
Article (Conference)
Electronic resource
English