Autonomous vehicles and Advanced Driver Assistance Systems (ADAS) use a number of sensors to perceive their surroundings. One of the most frequently used sensors is Light Detection and Ranging (LiDAR), which provides accurate depth information about the objects around an autonomous vehicle. A vital step in LiDAR-based perception is real-time object detection: for safety-critical applications, it must deliver high accuracy at low latency. Another important consideration is detection at larger distances, where the LiDAR point cloud becomes sparser and detection consequently becomes more challenging. To mitigate this problem, a novel point density normalization method is adopted so that detection is less affected by distance. In this paper, an efficient LiDAR point cloud object detector based on the YoloV4 model is proposed, which achieves high detection accuracy while processing more than 27 frames per second. The proposed method is evaluated on the KITTI dataset.
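The abstract does not detail the BEV encoding or the density normalization itself, so the following is only a minimal sketch of how a LiDAR scan might be projected into a bird's-eye-view (BEV) image for a YOLO-style 2D detector. The grid extents, cell size, channel layout (height, intensity, density) and the squared-range scaling used as a stand-in for density normalization are all illustrative assumptions, not the authors' actual method.

```python
# Sketch only: LiDAR point cloud -> BEV pseudo-image for a 2D detector.
# Parameters (ranges, cell size, channels) are assumptions, not from the paper.
import numpy as np

def lidar_to_bev(points,                  # (N, 4) array: x, y, z, intensity
                 x_range=(0.0, 70.0),     # forward extent in metres (KITTI-like)
                 y_range=(-40.0, 40.0),   # lateral extent in metres
                 z_range=(-2.5, 1.5),     # height clip in metres
                 cell_size=0.1):          # BEV resolution: 10 cm per pixel
    """Return an (H, W, 3) BEV image: max height, mean intensity, point density."""
    x, y, z, r = points.T

    # Keep only points inside the region of interest.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z, r = x[mask], y[mask], z[mask], r[mask]

    W = int((x_range[1] - x_range[0]) / cell_size)
    H = int((y_range[1] - y_range[0]) / cell_size)
    col = ((x - x_range[0]) / cell_size).astype(np.int32)  # forward -> column
    row = ((y - y_range[0]) / cell_size).astype(np.int32)  # lateral -> row

    height = np.zeros((H, W), dtype=np.float32)
    inten = np.zeros((H, W), dtype=np.float32)
    count = np.zeros((H, W), dtype=np.float32)

    # Per-cell aggregation: highest point, mean reflectance, number of hits.
    np.maximum.at(height, (row, col), (z - z_range[0]) / (z_range[1] - z_range[0]))
    np.add.at(inten, (row, col), r)
    np.add.at(count, (row, col), 1.0)
    inten = np.divide(inten, count, out=np.zeros_like(inten), where=count > 0)

    # Hypothetical density normalization: scale per-cell counts by squared range
    # so that distant (sparser) regions are not systematically under-represented.
    cx = (np.arange(W) + 0.5) * cell_size + x_range[0]
    cy = (np.arange(H) + 0.5) * cell_size + y_range[0]
    rng2 = cy[:, None] ** 2 + cx[None, :] ** 2
    density = np.clip(count * rng2 / rng2.max(), 0.0, 1.0)

    return np.stack([height, inten, density], axis=-1)

if __name__ == "__main__":
    # Random stand-in for a KITTI Velodyne scan (x, y, z, reflectance).
    pts = np.random.uniform([0, -40, -2.5, 0], [70, 40, 1.5, 1], size=(120_000, 4))
    bev = lidar_to_bev(pts.astype(np.float32))
    print(bev.shape)  # (800, 700, 3), usable as input to a YOLO-style 2D detector
```

The resulting three-channel BEV image can be fed to a 2D detector such as YoloV4 like an ordinary RGB image; how the paper actually encodes the channels and normalizes density is not specified in the abstract.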
BEV Approach Based Efficient Object Detection using YoloV4 for LiDAR Point Cloud
01.06.2023
1370265 bytes
Conference paper
Electronic resource
English