Image segmentation has long been used to analyze terrain for military autonomous vehicles. A weakness of camera-based image segmentation is that it lacks depth information and is sensitive to environmental lighting. Light detection and ranging (LiDAR) is an emerging sensing modality for segmentation that estimates distances to the objects it detects; one advantage of LiDAR is that it gathers accurate distances regardless of day, night, shadows, or glare. This study examines the fusion of LiDAR and camera data for image segmentation to improve an advanced driver-assistance system (ADAS) algorithm for off-road autonomous military vehicles. The volume of points generated by LiDAR provides the vehicle with distance and spatial data about its surroundings, but semantically segmenting these point clouds is computationally intensive; fusing the camera and LiDAR data lets a single neural network process depth and image information simultaneously. We create fused RGB-Depth images by projecting the LiDAR points onto the corresponding camera images. A neural network is trained to segment the fused data from RELLIS-3D, a multi-modal data set for off-road robotics that contains both LiDAR point clouds and corresponding RGB images. The labels from the data set are grouped as objects, traversable terrain, non-traversable terrain, and sky to balance underrepresented classes. A modified DeepLabv3+ with a ResNet-18 backbone achieves an overall accuracy of 93.989 percent on this task.
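
The LiDAR-to-image projection step described in the abstract can be sketched briefly. The following Python example is a minimal illustration, not the paper's implementation: it assumes a calibrated camera intrinsic matrix K and a LiDAR-to-camera extrinsic transform T_cam_lidar (both names are illustrative), projects each LiDAR point into the image plane, and keeps the nearest return per pixel to form a depth channel that can be stacked onto the RGB image.

import numpy as np

def project_lidar_to_depth(points_xyz, K, T_cam_lidar, image_hw):
    # points_xyz: (N, 3) LiDAR points in the sensor frame (assumed).
    # K: (3, 3) camera intrinsic matrix (assumed calibrated).
    # T_cam_lidar: (4, 4) extrinsic transform, LiDAR frame -> camera frame (assumed).
    # image_hw: (H, W) of the RGB image.
    H, W = image_hw
    # Homogeneous coordinates, then rigid transform into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Perspective projection with the intrinsics.
    uv = (K @ pts_cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    z = pts_cam[:, 2]
    # Discard projections that fall outside the image bounds.
    in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z = u[in_img], v[in_img], z[in_img]
    # Sparse depth map; where several points land on one pixel, keep the nearest.
    depth = np.full((H, W), np.inf, dtype=np.float32)
    np.minimum.at(depth, (v, u), z)
    depth[np.isinf(depth)] = 0.0  # pixels with no LiDAR return
    return depth

# Stacking the depth channel onto the RGB image yields a fused RGB-Depth input:
# rgbd = np.dstack([rgb, depth])

Keeping the nearest return per pixel resolves simple occlusions where multiple LiDAR points project to the same pixel; pixels with no return are left at zero depth here, and how missing returns are handled in practice is a design choice.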





    Title:

    Utilizing Neural Networks for Semantic Segmentation on RGB/LiDAR Fused Data for Off-road Autonomous Military Vehicle Perception


    Additional title information:

    SAE Technical Papers


    Contributors:
    Selee, Bradley (author) / Faykus, Max Henry (author) / Smith, Melissa (author)

    Conference:

    WCX SAE World Congress Experience; 2023



    Publication date:

    April 11, 2023




    Media type:

    Conference paper


    Format:

    Print


    Language:

    English




    Similar titles:

    Utilizing Neural Networks for Semantic Segmentation on RGB/LiDAR Fused Data for Off-road Autonomous Military Vehicle Perception

    Faykus, Max Henry / Selee, Bradley / Smith, Melissa | British Library Conference Proceedings | 2023


    Enhanced Surface Reconstruction and Semantic Segmentation of LiDAR Data in Autonomous Vehicle Perception Systems

    Beni Prathiba, Sahaya / Kumar Raghu Kumar, Suriya / Kumar Anandhan, Deepak et al. | IEEE | 2025


    LiSeg: Lightweight Road-object Semantic Segmentation In 3D LiDAR Scans For Autonomous Driving

    Zhang, Wenquan / Zhou, Chancheng / Yang, Junjie et al. | British Library Conference Proceedings | 2018


    LiSeg: Lightweight Road-object Semantic Segmentation In 3D LiDAR Scans For Autonomous Driving

    Zhang, Wenquan / Zhou, Chancheng / Yang, Junjie et al. | IEEE | 2018


    LiDAR Data Segmentation in Off-Road Environment Using Convolutional Neural Networks (CNN)

    Goodin, Chris / Carruth, Daniel / Dabbiru, Lalitha et al. | SAE Technical Papers | 2020