In autonomous driving, handling occlusion scenarios is crucial yet challenging. Robust surrounding perception is essential for coping with occlusions and aiding navigation. State-of-the-art models fuse LiDAR and camera data to produce impressive perception results, but detecting occluded objects remains difficult. In this paper, we emphasize the crucial role of temporal cues in reinforcing resilience against occlusions in the bird's eye view (BEV) semantic grid segmentation task. We propose a novel architecture that processes temporal multi-step inputs, where the input at each time step encodes the spatial information obtained by fusing LiDAR and camera sensor readings. We evaluate our approach on the real-world nuScenes dataset, where it outperforms baseline methods, with particularly large margins on occluded and partially occluded vehicles. Additionally, we apply the proposed model to downstream tasks such as multi-step BEV prediction and trajectory forecasting of the ego-vehicle. The qualitative results obtained from these tasks underscore the adaptability and effectiveness of our proposed approach.
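To make the described design more concrete, the minimal PyTorch sketch below illustrates one way temporal multi-step BEV inputs could be aggregated: each time step contributes a fused LiDAR+camera BEV feature map, a convolutional gated recurrence accumulates them, and a small head decodes BEV semantic grid logits. All class and parameter names here are hypothetical illustrations and are not taken from the paper's actual implementation.

```python
# Hypothetical sketch (not the authors' code): temporal aggregation of
# per-timestep fused LiDAR+camera BEV features for BEV semantic segmentation.
import torch
import torch.nn as nn


class TemporalBEVSegmenter(nn.Module):
    def __init__(self, in_channels=128, hidden=128, num_classes=2):
        super().__init__()
        # Convolutional GRU-style gates operating on BEV feature maps.
        self.gate = nn.Conv2d(in_channels + hidden, 2 * hidden, 3, padding=1)
        self.cand = nn.Conv2d(in_channels + hidden, hidden, 3, padding=1)
        # Segmentation head decoding the final temporal state into class logits.
        self.head = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_classes, 1),
        )

    def forward(self, bev_seq):
        # bev_seq: (B, T, C, H, W) fused LiDAR+camera BEV features per time step.
        b, t, c, h, w = bev_seq.shape
        state = bev_seq.new_zeros(b, self.cand.out_channels, h, w)
        for i in range(t):
            x = torch.cat([bev_seq[:, i], state], dim=1)
            z, r = torch.sigmoid(self.gate(x)).chunk(2, dim=1)
            cand = torch.tanh(self.cand(torch.cat([bev_seq[:, i], r * state], dim=1)))
            state = (1 - z) * state + z * cand
        return self.head(state)  # (B, num_classes, H, W) BEV segmentation logits


# Example usage with a 3-step input sequence on a 200x200 BEV grid:
# logits = TemporalBEVSegmenter()(torch.randn(1, 3, 128, 200, 200))
```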
TLCFuse: Temporal Multi-Modality Fusion Towards Occlusion-Aware Semantic Segmentation
2024 IEEE Intelligent Vehicles Symposium (IV), pp. 2110-2116
2024-06-02
Conference paper
Electronic Resource
English
Hyperbolic Uncertainty Aware Semantic Segmentation
IEEE | 2024
Occlusion Aware Sensor Fusion for Early Crossing Pedestrian Detection
British Library Conference Proceedings | 2019