Multi-camera 3D object detection is essential for autonomous driving systems. Popular multi-camera detectors are typically decoupled into multiple feature-extraction modules with different functions. Under this network paradigm, the pre-trained image backbone receives only implicit and sparse supervision signals, and image depth estimation suffers from inherent ambiguity, which together limit detector performance. In this paper, we present DenseBEV, a robust solution for bird’s-eye-view (BEV) 3D object detection with densely aggregated multi-view camera features. Specifically, we propose a novel multiview multiscale cross-attention (MMCA) module in the BEV decoder, which lets each BEV object query interact with multi-camera features in a standard attention paradigm. The aim of the MMCA is to make the pre-trained image backbone easier to optimize for 3D autonomous-driving scenes. Furthermore, we introduce contrastive denoising in the decoder head to assist in training the BEV detection model, alleviating the chaotic predictions caused by indistinct image depth estimates. Our method achieves significant improvements (i.e., +2.3 NDS and +2.8 mAP) over previous state-of-the-art BEV-based methods on the challenging nuScenes validation set. DenseBEV also achieves competitive results of 63.1 NDS and 55.4 mAP on the nuScenes test set.
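The abstract describes BEV object queries attending to multi-camera, multi-scale image features through a standard attention paradigm. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the module name, argument shapes, and the use of plain multi-head attention over flattened camera tokens are all assumptions made for clarity.

```python
# Hypothetical sketch of multiview multiscale cross-attention (MMCA):
# BEV object queries attend to features from all cameras and all
# feature-pyramid scales at once, so gradients flow densely back into
# the image backbone. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class MultiviewMultiscaleCrossAttention(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8, num_levels=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Learnable tags for each pyramid level so the attention can still
        # distinguish scales after views and positions are flattened.
        self.level_embed = nn.Parameter(torch.zeros(num_levels, embed_dim))

    def forward(self, bev_queries, multiview_feats):
        """
        bev_queries:     (B, Nq, C) object queries from the BEV decoder
        multiview_feats: list over pyramid levels of (B, Ncam, C, H_l, W_l)
        """
        tokens = []
        for lvl, feat in enumerate(multiview_feats):
            b, n_cam, c, h, w = feat.shape
            # Flatten cameras and spatial positions into one token sequence.
            t = feat.permute(0, 1, 3, 4, 2).reshape(b, n_cam * h * w, c)
            tokens.append(t + self.level_embed[lvl])
        kv = torch.cat(tokens, dim=1)  # (B, sum_l Ncam*H_l*W_l, C)
        # Standard cross-attention: every BEV query sees every camera token.
        out, _ = self.attn(bev_queries, kv, kv)
        return out


# Toy usage with 6 cameras and two pyramid levels.
if __name__ == "__main__":
    mmca = MultiviewMultiscaleCrossAttention(num_levels=2)
    queries = torch.randn(1, 900, 256)
    feats = [torch.randn(1, 6, 256, 16, 44), torch.randn(1, 6, 256, 8, 22)]
    print(mmca(queries, feats).shape)  # torch.Size([1, 900, 256])
```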
Denoising Transformer for BEV 3D Object Detection via Multiview Multiscale Cross-Attention
IEEE Transactions on Intelligent Transportation Systems, vol. 26, no. 7, pp. 9387-9396
July 1, 2025
2,082,209 bytes
Article (Journal)
Electronic Resource
English
Multiview feature distributions for object detection and continuous pose estimation
British Library Online Contents | 2014
Image Denoising using Multiscale Directional Cosine Bases
British Library Conference Proceedings | 2005