As 3D LiDAR point clouds and 2D images capture complementary information for autonomous driving, great efforts have been made on 3D semantic segmentation using data from both modalities. However, existing approaches suffer from different problems. 3D-to-2D fusion methods struggle to determine accurate mapping relations and cannot carefully handle moving objects. 2D-to-3D fusion methods must process strictly paired data simultaneously, which is time-consuming and impractical in real-time scenarios. In this paper, a novel image-guided knowledge distillation framework based on the tri-plane-view is proposed for 3D semantic segmentation. Our method has two main contributions. First, image features are represented in an efficient 3D tri-plane-view space, which facilitates feature alignment and fusion. Second, object movements can be predicted in this unified 3D space to fully exploit temporal information. The knowledge of the fused data is transferred to a pure 3D network using knowledge distillation, so only the point cloud branch is needed during inference, thus enabling real-time deployment. Our method is evaluated on the SemanticKITTI and nuScenes datasets as well as in outdoor environments. As a result, models based on point cloud inputs are significantly improved after applying our method.
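
The abstract describes distilling knowledge from a fused image-and-point-cloud teacher into a point-cloud-only student so that only the 3D branch runs at inference time. The sketch below illustrates this general teacher-student distillation setup with a temperature-softened KL term plus standard cross-entropy; the function and variable names (distillation_loss, teacher_logits, T, alpha) are illustrative assumptions and not the paper's exact formulation.

```python
# Generic sketch of logit distillation from a fused 2D-3D teacher to a
# point-cloud-only student. This is NOT the TPV-IGKD implementation;
# all names and hyperparameters here are assumptions for illustration.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-target KL distillation combined with hard-label cross-entropy."""
    # Soft targets from the (frozen) fusion teacher, softened by temperature T.
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_soft_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Standard per-point cross-entropy on ground-truth semantic labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example: N points, C semantic classes.
N, C = 1024, 20
student_logits = torch.randn(N, C, requires_grad=True)   # point-cloud-only branch
teacher_logits = torch.randn(N, C)                        # image-fused teacher branch
labels = torch.randint(0, C, (N,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

At deployment, only the student (point cloud) branch is kept, which is what makes real-time inference possible in the setup the abstract describes.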


Title:

    TPV-IGKD: Image-Guided Knowledge Distillation for 3D Semantic Segmentation With Tri-Plane-View


    Contributors:
    Li, Jia-Chen (author) / Lu, Jun-Guo (author) / Wei, Ming (author) / Kang, Hong-Yi (author) / Zhang, Qing-Hao (author)


    Publication date :

    2024-08-01


    Size :

4567311 bytes




    Type of media :

    Article (Journal)


    Type of material :

    Electronic Resource


    Language :

    English



    TransKD: Transformer Knowledge Distillation for Efficient Semantic Segmentation

    Liu, Ruiping / Yang, Kailun / Roitberg, Alina et al. | IEEE | 2024


    Morphology-Guided Network via Knowledge Distillation for RGB-D Mirror Segmentation

    Zhou, Wujie / Cai, Yuqi / Qiang, Fangfang | IEEE | 2024


    Efficient Semantic Segmentation via Self-Attention and Self-Distillation

    An, Shumin / Liao, Qingmin / Lu, Zongqing et al. | IEEE | 2022


    Dense Top-View Semantic Completion With Sparse Guidance and Online Distillation

    Gu, Shuo / Lu, Jiacheng / Yang, Jian et al. | IEEE | 2024