Reliable ego-motion estimation is a crucial technology for autonomous vehicles. While progress has been made in deep odometry systems based on cameras and LiDAR, 4-D radar odometry holds significant promise thanks to radar's robustness against adverse weather and lighting conditions. Nevertheless, radar-based odometry faces several challenges: 1) radar point clouds are sparser and noisier than LiDAR point clouds; 2) radar points belonging to moving objects interfere with deep odometry; 3) the dependence on massive labeled data limits the practical application of supervised learning-based radar odometry. To address these challenges, this work proposes a self-supervised 4-D radar odometry method. Specifically, we employ a multi-scale approach to extract robust features from sparse point clouds. In addition to adopting several loss functions from traditional LiDAR-based odometry, we design a novel velocity-aware loss based on radar characteristics to enable self-supervised training. Moreover, we develop a point confidence estimation module to reduce the interference from moving objects and noise. We conduct comprehensive experiments on a public dataset to demonstrate the advanced performance of our method.
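The exact form of the velocity-aware loss is not given in this record; the following is a minimal sketch of one plausible formulation, assuming each 4-D radar point carries a measured radial (Doppler) velocity and that, for a static point, this measurement equals the negative projection of the estimated ego velocity onto the line of sight. The function name velocity_aware_loss, the ego_velocity input (e.g., estimated translation divided by the frame interval), and the optional per-point confidence weights are illustrative assumptions, not the paper's API.

```python
import torch

def velocity_aware_loss(points, doppler, ego_velocity, confidence=None, eps=1e-8):
    """Sketch of a velocity-aware self-supervision loss (assumed form).

    points:       (N, 3) radar points in the sensor frame
    doppler:      (N,)   measured radial (Doppler) velocities
    ego_velocity: (3,)   estimated ego linear velocity (hypothetical input)
    confidence:   (N,)   optional per-point weights from a confidence module
    """
    # Unit line-of-sight direction from the sensor to each point.
    dirs = points / (points.norm(dim=1, keepdim=True) + eps)
    # For a static point, the measured Doppler velocity is the negative
    # projection of the ego velocity onto the line of sight.
    predicted = -(dirs @ ego_velocity)
    residual = (doppler - predicted).abs()
    if confidence is not None:
        # Down-weight residuals of likely-dynamic or noisy points.
        return (confidence * residual).sum() / (confidence.sum() + eps)
    return residual.mean()

# Toy usage: synthetic static scene with small Doppler noise.
pts = torch.randn(128, 3)
v_ego = torch.tensor([5.0, 0.0, 0.0])  # e.g., translation / dt
dop = -(pts / pts.norm(dim=1, keepdim=True)) @ v_ego + 0.05 * torch.randn(128)
conf = torch.sigmoid(torch.randn(128))  # stand-in for the confidence module
loss = velocity_aware_loss(pts, dop, v_ego, conf)
```

Weighting the residuals by the confidence module's output, as sketched here, is one way the two components described in the abstract could interact: points on moving objects violate the static-scene Doppler model, so giving them low confidence keeps them from biasing the ego-motion estimate.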
Self-supervised 4-D Radar Odometry for Autonomous Vehicles
24.09.2023
3797145 bytes
Conference paper
Electronic resource
English
Visual odometry and mapping for Underwater Autonomous Vehicles
Tema Archiv | 2009
Self-supervised 3D keypoint learning for monocular visual odometry
Europäisches Patentamt | 2024