Artificial intelligence and deep learning are revolutionizing modern society. While deep-learning-based approaches have gradually come to dominate 2D image and 3D point cloud processing, traditional rule-based algorithms remain the mainstream solution in radar signal processing. To unify the processing of image, radar, and lidar data for autonomous driving and take advantage of neural networks, the automotive industry is actively seeking ways to integrate radar into deep-learning-based sensor fusion frameworks. In this paper, we propose deep neural networks that take raw radar data as input and generate occupancy grids within the radar's field of view. Furthermore, we integrate the developed system into a test vehicle and demonstrate it on public roads. The proposed networks are a first step towards solving the multiple-target detection problem with radar data using a deep-learning approach.
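The record above does not describe the network architecture itself, so the following is only a minimal illustrative sketch of the general idea: a fully convolutional model that maps a raw radar range-azimuth tensor to per-cell occupancy probabilities. The class name RadarOccupancyNet, the two-channel input format, and all layer sizes are assumptions for illustration, not the authors' design.

```python
# Minimal sketch (not the paper's architecture): a small fully convolutional
# network mapping a raw radar range-azimuth map to an occupancy grid of the
# same spatial size, with per-cell occupancy probabilities.
import torch
import torch.nn as nn


class RadarOccupancyNet(nn.Module):
    def __init__(self, in_channels: int = 2, hidden: int = 32):
        super().__init__()
        # in_channels = 2 assumes two planes of the radar spectrum
        # (e.g., real/imaginary or magnitude/Doppler); the actual input
        # representation used in the paper is not specified here.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution produces one occupancy logit per grid cell.
        self.head = nn.Conv2d(hidden, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_channels, range_bins, azimuth_bins)
        logits = self.head(self.encoder(x))
        # Sigmoid maps logits to occupancy probabilities in [0, 1].
        return torch.sigmoid(logits)


if __name__ == "__main__":
    # Hypothetical input: batch of 1, 2 channels, 256 range bins, 128 azimuth bins.
    net = RadarOccupancyNet()
    radar_frame = torch.randn(1, 2, 256, 128)
    grid = net(radar_frame)
    print(grid.shape)  # torch.Size([1, 1, 256, 128])
```

Such a per-cell sigmoid output can be trained with a binary cross-entropy loss against reference occupancy grids, one common formulation for grid-map prediction tasks.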
Occupancy Grids Generation Using Deep Radar Network for Autonomous Driving
01.10.2019
2858285 bytes
Article (Conference)
Electronic Resource
English
Obstacle Fusion and Scene Interpretation for Autonomous Driving with Occupancy Grids
TIBKAT | 2023
Dual Inverse Sensor Model for Radar Occupancy Grids
IEEE | 2019
Danger detection using occupancy grids for autonomous systems and applications
Europäisches Patentamt | 2023
HAZARD DETECTION USING OCCUPANCY GRIDS FOR AUTONOMOUS SYSTEMS AND APPLICATIONS
Europäisches Patentamt | 2023