Automotive surround view is typically realized with cameras that provide a complete view of the vehicle's surroundings for parking assistance or to eliminate blind spots while driving. Radar-based surround view further improves object detection capability. This paper explains the point cloud processing performed on multi-sensor radar data and the advantages it brings for further processing. The initial processing merges the point cloud data of four radar sensors positioned on the four sides of the vehicle. Once the data is merged by an appropriate transformation, based on each sensor's position and facing angle, a Bayesian approach groups object points that appear in the overlapping field of view, so that a single object is not detected as multiple objects. The merged object provides additional information that can be effectively utilized for pedestrian classification. The paper details the specific challenges of achieving surround view with a 77 GHz radar sensor and the advantages for pedestrian classification.
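The pipeline described in the abstract can be illustrated with a minimal sketch: each sensor's detections are rotated by the sensor's facing angle and translated by its mounting position into a common vehicle frame, and detections from different sensors that fall in the overlapping field of view are grouped when the posterior probability that they stem from the same object is high. The mounting parameters, noise standard deviation, prior, gating area, and function names below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical mounting parameters (x, y in metres, yaw in radians) for four
# radars on the four sides of the vehicle; values are illustrative only.
SENSOR_MOUNTS = {
    "front": {"x": 3.7,  "y": 0.0,  "yaw": 0.0},
    "rear":  {"x": -1.0, "y": 0.0,  "yaw": np.pi},
    "left":  {"x": 1.5,  "y": 0.9,  "yaw": np.pi / 2},
    "right": {"x": 1.5,  "y": -0.9, "yaw": -np.pi / 2},
}

def to_vehicle_frame(points_xy, mount):
    """Rotate detections by the sensor's facing angle and translate by its
    mounting position so all four point clouds share one vehicle frame."""
    c, s = np.cos(mount["yaw"]), np.sin(mount["yaw"])
    rotation = np.array([[c, -s], [s, c]])
    return points_xy @ rotation.T + np.array([mount["x"], mount["y"]])

def merge_point_clouds(clouds):
    """Concatenate the per-sensor point clouds after transformation."""
    return np.vstack([to_vehicle_frame(pts, SENSOR_MOUNTS[name])
                      for name, pts in clouds.items()])

def same_object_posterior(p_a, p_b, sigma=0.5, prior=0.3, gate_area=25.0):
    """Posterior probability that two detections from different sensors in the
    overlapping field of view originate from the same object, assuming
    isotropic Gaussian position noise (std. dev. sigma, metres) per sensor
    and a uniform clutter model over a gating area (square metres)."""
    d2 = float(np.sum((np.asarray(p_a) - np.asarray(p_b)) ** 2))
    var = 2.0 * sigma ** 2                      # variance of the difference
    like_same = np.exp(-d2 / (2.0 * var)) / (2.0 * np.pi * var)
    like_diff = 1.0 / gate_area
    return prior * like_same / (prior * like_same + (1.0 - prior) * like_diff)

if __name__ == "__main__":
    # One pedestrian near the front-left corner, seen by two sensors whose
    # fields of view overlap; coordinates are in each sensor's own frame.
    clouds = {
        "front": np.array([[1.3, 3.0]]),
        "left":  np.array([[2.2, -3.4]]),
    }
    merged = merge_point_clouds(clouds)
    # Group the two detections if they are probably the same object.
    print(merged, same_object_posterior(merged[0], merged[1]))
```

In this sketch, the two transformed detections land roughly 0.14 m apart in the vehicle frame and receive a same-object posterior above 0.5, so they would be merged into a single object rather than being reported as two separate targets.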
Bayesian Grouping of Multi Sensor Radar Fusion for effective Pedestrian Classification in Automotive Surround View
23.11.2020
911240 byte
Article (Conference paper)
Electronic resource
English
Sensor fusion based perceptually enhanced surround view | Europäisches Patentamt | 2021
Sensor fusion based perceptually enhanced surround view | Europäisches Patentamt | 2022