Data fusion plays a significant role in the autonomous driving domain. An efficient combination of sensors such as LiDAR, radar, and cameras determines how quickly and accurately a vehicle can make safety-critical decisions on the road. In this article, we propose two approaches to improve object distance estimation by combining camera and LiDAR sensors. This work is inspired by the approach presented in [2]. We use instance segmentation and hierarchical clustering to resolve the estimation errors that arise when the bounding boxes (bboxes) of two or more detected objects overlap. The KITTI and Waymo datasets were used to evaluate the accuracy of the proposed approaches. Finally, we compare the accuracy of our approaches with that reported in [2] for several specific scenarios.
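As a rough illustration of the clustering step described in the abstract, the following Python sketch clusters the LiDAR points that fall inside a single detection and reports the distance of the nearest cluster, so that background points leaking in from an overlapping bounding box do not bias the estimate. This is our own illustrative code, not the authors' implementation: the function and parameter names, the single-linkage criterion, and the 1 m dendrogram cut are assumptions made for the example.

# Hedged sketch (illustrative only): distance estimation for one detection
# whose associated LiDAR points may contain leakage from an overlapping box.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage


def estimate_distance(points_xyz, distance_threshold=1.0):
    """points_xyz: (N, 3) LiDAR points already filtered to one detection's
    mask/box (camera-LiDAR projection applied beforehand). Returns the median
    range of the nearest cluster, or None if no points are available."""
    if len(points_xyz) == 0:
        return None
    if len(points_xyz) == 1:
        return float(np.linalg.norm(points_xyz[0]))

    # Agglomerative (hierarchical) clustering; cutting the dendrogram at a
    # fixed distance separates points belonging to overlapping objects.
    labels = fcluster(linkage(points_xyz, method="single"),
                      t=distance_threshold, criterion="distance")

    # Take the median range of each cluster and keep the nearest one,
    # assuming the detected object occludes whatever lies behind it.
    ranges = np.linalg.norm(points_xyz, axis=1)
    cluster_medians = [np.median(ranges[labels == c]) for c in np.unique(labels)]
    return float(min(cluster_medians))


if __name__ == "__main__":
    # Toy example: a foreground object at ~10 m plus background points at
    # ~25 m that leaked in through an overlapping bounding box.
    rng = np.random.default_rng(0)
    fg = rng.normal([10.0, 0.0, 0.0], 0.2, size=(30, 3))
    bg = rng.normal([25.0, 1.0, 0.0], 0.2, size=(10, 3))
    print(estimate_distance(np.vstack([fg, bg])))  # prints a value close to 10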
Improving Object Distance Estimation in Automated Driving Systems Using Camera Images, LiDAR Point Clouds and Hierarchical Clustering
2021-07-11
1227854 bytes
Conference paper
Electronic Resource
English
AUTOMATED OBJECT ANNOTATION USING FUSED CAMERA/LiDAR DATA POINTS
European Patent Office | 2025
AUTOMATED OBJECT ANNOTATION USING FUSED CAMERA/LiDAR DATA POINTS
European Patent Office | 2023