Camera-Based Semantic Enhanced Vehicle Segmentation for Planar LIDAR
2018-11-01
Vehicle segmentation is an important step in perception for autonomous vehicles, providing object-level environmental understanding. Its performance directly affects downstream functions such as decision-making and trajectory planning. However, this task is challenging for planar LIDAR due to its limited vertical field of view (FOV) and low point quality. In addition, directly estimating the 3D location, dimensions, and heading of vehicles from a single image is difficult because a monocular camera provides only limited depth information. We propose a method that fuses a vision-based instance segmentation algorithm with a LIDAR-based segmentation algorithm to achieve accurate 2D bird's-eye-view object segmentation. This method combines the advantages of both sensors: the camera prevents over-segmentation in the LIDAR data, and LIDAR segmentation removes false-positive areas from the regions of interest in the vision results. A modified T-linkage RANSAC is applied to further remove outliers. Better segmentation in turn yields better orientation estimation. We achieved promising improvements in average absolute heading error and 2D IoU on both a reduced-resolution KITTI dataset and our Cadillac SRX planar LIDAR dataset.
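To make the fusion idea in the abstract concrete, below is a minimal Python sketch, not the paper's implementation. LIDAR points are projected into the image with an assumed 3x4 projection matrix P and grouped by camera instance masks, so one mask merges over-segmented LIDAR clusters and image regions without LIDAR support are discarded. A plain single-model RANSAC line fit then stands in for the paper's modified T-linkage RANSAC to reject outliers and estimate heading. All function names, parameters, and thresholds here are hypothetical.

import numpy as np

def project_to_image(points_xyz, P):
    # Project Nx3 LIDAR points into pixels with an assumed 3x4 camera matrix P.
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]

def fuse_segmentation(points_xyz, instance_masks, P):
    # For each camera instance mask, keep only the LIDAR points that project
    # inside it. One mask can cover several LIDAR clusters (curing LIDAR
    # over-segmentation); masks with almost no LIDAR support are dropped
    # (curing vision false positives).
    uv = np.round(project_to_image(points_xyz, P)).astype(int)
    h, w = instance_masks[0].shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    segments = []
    for mask in instance_masks:            # boolean HxW mask per detected vehicle
        hit = valid.copy()
        hit[valid] = mask[uv[valid, 1], uv[valid, 0]]
        if hit.sum() >= 5:                 # require minimal LIDAR evidence
            segments.append(points_xyz[hit])
    return segments

def ransac_heading(points_xy, iters=200, tol=0.05, seed=0):
    # Plain single-model RANSAC line fit on bird's-eye-view points: a
    # stand-in for the paper's modified T-linkage RANSAC, which handles
    # multiple models jointly.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points_xy), dtype=bool)
    for _ in range(iters):
        a, b = points_xy[rng.choice(len(points_xy), 2, replace=False)]
        d = b - a
        n = np.linalg.norm(d)
        if n < 1e-6:
            continue
        # perpendicular distance of every point to the line through a and b
        dist = np.abs((points_xy - a) @ np.array([-d[1], d[0]])) / n
        inliers = dist < tol
        if inliers.sum() > best.sum():
            best = inliers
    centered = points_xy[best] - points_xy[best].mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)   # PCA of the inlier edge
    v = vecs[:, -1]                                   # dominant edge direction
    return np.arctan2(v[1], v[0]), best               # heading (rad), inlier mask

As a usage sketch, each per-vehicle point set returned by fuse_segmentation can be projected onto the ground (x-y) plane and passed to ransac_heading to obtain a heading estimate per segment.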
Conference paper
English
TransRVNet: LiDAR Semantic Segmentation With Transformer
IEEE | 2023
M2S-RoAD: Multi-Modal Semantic Segmentation for Road Damage Using Camera and LiDAR Data
ArXiv | 2025