In the landscape of autonomous driving, Bird's-Eye-View (BEV) representation has recently garnered substantial attention, serving as a transformative framework for the fusion of multi-modal sensor inputs. The BEV paradigm effectively shifts the sensor fusion challenge from a rule-based methodology to a data-centric approach, thereby facilitating more nuanced feature extraction from an array of heterogeneous sensors. Notwithstanding its evident merits, the computational overhead associated with BEV-based techniques often mandates high-capacity hardware infrastructure, thus posing challenges for practical, real-world deployment. To mitigate this limitation, we introduce a novel content-aware multi-modal joint input pruning technique. Our method leverages BEV as a shared anchor to algorithmically identify and eliminate non-essential sensor regions before they enter the perception model's backbone. We validate the efficacy of our approach through extensive experiments on the NuScenes dataset, demonstrating substantial computational savings without sacrificing perception accuracy. To the best of our knowledge, this work represents the first attempt to alleviate the computational burden from the input-pruning perspective.
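The abstract describes using the BEV representation as a shared anchor for deciding which input regions can be dropped before the backbone runs. The sketch below is illustrative only and not the paper's implementation: it assumes a precomputed per-cell BEV importance map and a hypothetical patch-to-BEV lookup table (`bev_importance`, `patch_to_bev`), and simply retains the camera patches whose corresponding BEV cells score highest.

```python
# Minimal sketch of BEV-anchored input pruning (illustrative, not the paper's code).
# Assumptions: bev_importance and patch_to_bev are precomputed; both names are hypothetical.
import torch


def prune_patches_by_bev(patches, bev_importance, patch_to_bev, keep_ratio=0.5):
    """Keep only the camera patches whose projected BEV cells score highest.

    patches:        (N, C, h, w) flattened image patches from one camera.
    bev_importance: (H*W,) content-aware importance score per BEV cell.
    patch_to_bev:   (N,) index of the BEV cell each patch projects onto.
    Returns the retained patches and their indices, so downstream code can
    skip the pruned regions entirely.
    """
    scores = bev_importance[patch_to_bev]          # (N,) importance per patch
    k = max(1, int(keep_ratio * patches.shape[0])) # number of patches to keep
    keep_idx = torch.topk(scores, k).indices       # indices of essential patches
    return patches[keep_idx], keep_idx


if __name__ == "__main__":
    # Toy example: 64 patches, a 32x32 BEV grid, keep the top 50%.
    patches = torch.randn(64, 3, 16, 16)
    bev_importance = torch.rand(32 * 32)
    patch_to_bev = torch.randint(0, 32 * 32, (64,))
    kept, idx = prune_patches_by_bev(patches, bev_importance, patch_to_bev)
    print(kept.shape, idx.shape)  # torch.Size([32, 3, 16, 16]) torch.Size([32])
```

In practice the same importance map could gate LiDAR regions as well, which is what makes the pruning "joint" across modalities; how the importance scores are learned is not specified here.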
Learning Content-Aware Multi-Modal Joint Input Pruning via Bird's-Eye-View Representation
24.09.2024
1475420 bytes
Conference paper
Electronic resource
English
Depth representation learning and fusion method based on multi-modal trajectory
Europäisches Patentamt | 2023
Birds-eye-view image generation device, and birds-eye-view image generation method
Europäisches Patentamt | 2017