This chapter covers how 3D data is represented and processed as voxels, point clouds, and meshes, using methods such as PointNet and DGCNN. It discusses early and late fusion strategies for combining sensor data, emphasizing LiDAR-camera fusion techniques such as Frustum PointNets and PointPainting that improve object detection. Feature-level fusion methods such as DeepFusion and BEVFusion further improve 3D perception by aligning sensor data for more accurate detection and tracking.
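As a rough illustration of the point-cloud methods the chapter surveys, the sketch below implements the core PointNet idea in plain NumPy: a shared per-point MLP followed by a symmetric max-pool, which makes the extracted global feature invariant to point ordering. The function name, layer widths, and random weights are illustrative assumptions, not code from the chapter.

    import numpy as np

    def pointnet_global_feature(points, weights):
        # Shared per-point MLP: the same weight matrices are applied to
        # every point independently (stand-in for PointNet's shared MLP).
        h = points
        for W in weights:
            h = np.maximum(h @ W, 0.0)  # linear layer + ReLU
        # Symmetric max-pool over the point dimension: the resulting
        # global feature does not depend on the order of the points.
        return h.max(axis=0)

    # Toy usage: a random cloud of 1024 XYZ points, MLP widths 3 -> 64 -> 128.
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1024, 3))
    mlp = [0.1 * rng.normal(size=(3, 64)), 0.1 * rng.normal(size=(64, 128))]
    print(pointnet_global_feature(cloud, mlp).shape)  # (128,)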





    Title:

    Robot Perception: 3D Data and Sensor Fusion


    Contributors:

    Published in:

    AI for Robotics; Chapter 3; pp. 107-137


    Publication date:

    2025-05-03


    Size:

    31 pages


    Type of media:

    Article/Chapter (Book)


    Type of material:

    Electronic Resource


    Language:

    English




    Similar titles:


    Multi-sensor fusion mapping robot and data fusion method

    SAN HONGJUN / PENG ZHEN / LI CHUNLEI et al. | European Patent Office | 2023


    Data Fusion in Multi Sensor Platforms for Wide-area Perception

    Polychronopoulos, A. / Floudas, N. / Amditis, A. et al. | British Library Conference Proceedings | 2006


    Perception of Microburst Based on Multi-Sensor Data Fusion

    Lei, X. / Zhu, B. | British Library Online Contents | 2011


    Perception Sensor for a Mobile Robot

    Hou, K. M. / Belloum, A. / Yao, E. et al. | British Library Conference Proceedings | 1995


    Data fusion in multi sensor platforms for wide-area perception

    Polychronopoulos, A. / Floudas, N. / Amditis, A. et al. | IEEE | 2006