This paper introduces a novel method that uses generative adversarial networks (GANs) for the enhancement and semantic segmentation of radar images in autonomous navigation applications. Radar sensors are known for their robustness in adverse weather compared with other perception sensors such as LiDAR and cameras; however, their application in autonomous vehicles (AVs) is often limited by the low-resolution data they produce. The primary aim of this study is to enhance radar images captured by AVs, enabling these vehicles to rely on radar sensors for object identification and semantic segmentation in all weather conditions. The GAN was trained on radar images collected in good weather, paired with ground truth images derived from high-resolution LiDAR point cloud maps. These ground truth images were generated through a customized LiDAR scan accumulation method, followed by two-dimensional (2D) projection and cropping. A customized data augmentation method was also employed during training to improve performance in adverse weather. During inference, the approach uses radar images alone to produce enhanced and semantically segmented versions of the input. The effectiveness of the proposed method is validated through both qualitative and quantitative results, demonstrating its capability to generate enhanced and semantically segmented images from radar input in all weather conditions. Supplementary materials, including the inference code, sample test data, and GAN models, are available in our GitHub repository: https://github.com/thaki94/riess-gan
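To make the ground-truth generation step concrete, the following is a minimal Python sketch of accumulating LiDAR scans in a common map frame and projecting the result to a cropped 2D image. All function names, the bird's-eye-view formulation, and the grid parameters (extent, resolution) are illustrative assumptions, not the authors' implementation; their actual code and models are in the linked repository.

import numpy as np

def accumulate_scans(scans, poses):
    """Merge consecutive LiDAR scans into one dense cloud in a common frame.

    scans: list of (N_i, 3) point arrays, each in its sensor frame.
    poses: list of (4, 4) homogeneous sensor-to-map transforms.
    """
    merged = []
    for pts, T in zip(scans, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 4) homogeneous points
        merged.append((homo @ T.T)[:, :3])                   # transform into the map frame
    return np.vstack(merged)

def project_to_bev(points, extent=50.0, resolution=0.25):
    """Project a 3D point cloud onto a cropped 2D bird's-eye-view occupancy image.

    extent: half-width of the square crop around the origin, in metres (assumed value).
    resolution: metres per pixel (assumed value).
    """
    size = int(2 * extent / resolution)
    image = np.zeros((size, size), dtype=np.uint8)
    # Crop to the region of interest around the vehicle.
    mask = (np.abs(points[:, 0]) < extent) & (np.abs(points[:, 1]) < extent)
    xy = points[mask, :2]
    # Discretise metric coordinates into pixel indices (y flipped so "up" is forward).
    cols = ((xy[:, 0] + extent) / resolution).astype(int)
    rows = ((extent - xy[:, 1]) / resolution).astype(int)
    image[np.clip(rows, 0, size - 1), np.clip(cols, 0, size - 1)] = 255
    return image

# Example with two dummy scans and identity poses.
scans = [np.random.uniform(-40, 40, size=(1000, 3)) for _ in range(2)]
poses = [np.eye(4) for _ in scans]
bev = project_to_bev(accumulate_scans(scans, poses))
print(bev.shape)  # (400, 400)

A bird's-eye-view occupancy grid is one plausible choice of 2D projection for pairing with radar imagery; the paper's actual projection, cropping, and labelling of the accumulated point cloud may differ.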
All Weather Radar Image Enhancement and Semantic Segmentation Method for Autonomous Vehicles
IEEE Transactions on Intelligent Transportation Systems; vol. 26, no. 8; pp. 12254-12266
2025-08-01
3,039,657 bytes
Journal article
Electronic resource
English