The main objective of few-shot semantic segmentation (FSSS) is to segment novel objects in query images by leveraging a limited set of annotated support images. The ability to segment novel classes plays an essential role in developing perception functions for automated vehicles. However, existing FSSS work strives to improve model performance on object-centric datasets. In this work, we evaluate few-shot semantic segmentation on the more challenging task of driving scene understanding. As a use-case-specific study, we give a systematic analysis of the disparities between commonly used FSSS datasets and driving datasets. Based on this analysis, we propose methodologies that integrate knowledge from the class hierarchy of the datasets, use more effective feature extraction, and select more representative support images during inference. These approaches are evaluated extensively on the Cityscapes and Mapillary datasets to demonstrate their effectiveness. Finally, we point out the remaining challenges of training, evaluating, and deploying FSSS models for complex road scenes in practice.
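
For context, most FSSS baselines build on a prototype-matching pipeline, and the class-hierarchy, feature-extraction, and support-selection ideas described in the abstract would act on top of such a pipeline. Below is a minimal sketch of that common baseline, not the method of this paper; the names used here (backbone, masked_average_pooling, threshold) are illustrative assumptions.

    # A minimal sketch of prototype-based FSSS inference (a common baseline,
    # not necessarily the approach of this paper). `backbone` stands for any
    # frozen feature extractor, e.g. a ResNet trained on base classes.
    import torch
    import torch.nn.functional as F

    def masked_average_pooling(feats, mask):
        # Average support features over the foreground mask to obtain a class
        # prototype. feats: (B, C, H, W); mask: (B, 1, h, w) with values in {0, 1}.
        mask = F.interpolate(mask, size=feats.shape[-2:],
                             mode="bilinear", align_corners=False)
        # Weighted spatial mean; the epsilon avoids division by zero.
        return (feats * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

    def segment_query(backbone, support_img, support_mask, query_img, threshold=0.5):
        # Label each query pixel by its cosine similarity to the support prototype.
        with torch.no_grad():
            s_feats = backbone(support_img)   # (1, C, H, W)
            q_feats = backbone(query_img)     # (1, C, H, W)
        proto = masked_average_pooling(s_feats, support_mask)               # (1, C)
        sim = F.cosine_similarity(q_feats, proto[:, :, None, None], dim=1)  # (1, H, W)
        # Upsample the similarity map to the query resolution and threshold it.
        sim = F.interpolate(sim.unsqueeze(1), size=query_img.shape[-2:],
                            mode="bilinear", align_corners=False).squeeze(1)
        return (sim > threshold).float()      # binary mask for the novel class

Under this view, choosing "more representative support images", as the abstract puts it, amounts to choosing which support images contribute to the prototype.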





    Title: Few-Shot Semantic Segmentation for Complex Driving Scenes

    Contributors:

    Publication date: 2024-06-02

    Size: 4,830,485 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar items:

    Disparity weighted loss for semantic segmentation of driving scenes

    Loukkal, Abdelhak / Grandvalet, Yves / Li, You | IEEE | 2019


    Domain-Incremental Semantic Segmentation for Traffic Scenes

    Liu, Yazhou / Chen, Haoqi / Lasang, Pongsak et al. | IEEE | 2025


    Waterfall Segmentation of Complex Scenes

    Hanbury, A. / Marcotegui, B. | British Library Conference Proceedings | 2006



    Residual Pyramid Learning for Single-Shot Semantic Segmentation

    Chen, Xiaoyu / Lou, Xiaotian / Bai, Lianfa et al. | IEEE | 2020