Advancements in lane detection that leverage semantic knowledge have enhanced the detection capabilities of intelligent vehicles across diverse traffic scenarios. However, existing lane detection algorithms struggle to reliably extract lane instances and to adapt when the number of lanes changes. This study addresses visual scene understanding with a focus on the semantic knowledge of lane classes. Multi-class lane marking classification is particularly challenging because lane markings have a bland, repetitive appearance, so classification relies largely on their relative locations. The study introduces SemSeg-Lanes, a dataset derived from the BDD Lane Detection dataset that provides 10 distinct lane marking classes annotated for semantic segmentation. Several baseline models built on established semantic segmentation methods are presented for this demanding dataset; among them, real-time networks including PPLiteSeg variants were trained and achieve a mean average precision exceeding 70%. The dataset enables exploration of fundamental challenges in lane marking segmentation, paving the way for applications such as lane-centric activity understanding, future event prediction, and continual learning.
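
The baselines described above are scored on a 10-class segmentation task; as a minimal sketch of how such results are typically evaluated, the Python snippet below accumulates a confusion matrix from predicted and ground-truth label maps and derives per-class IoU and its mean. Only the class count comes from the abstract; the array shapes, class indexing, and function names are illustrative assumptions and are not taken from the SemSeg-Lanes release.

import numpy as np

NUM_CLASSES = 10  # the abstract reports 10 lane marking classes; indexing 0..9 is an assumption

def confusion_matrix(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> np.ndarray:
    """Build a (num_classes x num_classes) confusion matrix from integer label maps."""
    valid = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def per_class_iou(cm: np.ndarray) -> np.ndarray:
    """IoU per class: intersection over union, with empty classes reported as NaN."""
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    return np.where(union > 0, intersection / np.maximum(union, 1), np.nan)

# Toy usage: random label maps stand in for a network prediction and its ground truth.
rng = np.random.default_rng(0)
pred = rng.integers(0, NUM_CLASSES, size=(512, 1024))
gt = rng.integers(0, NUM_CLASSES, size=(512, 1024))
cm = confusion_matrix(pred, gt, NUM_CLASSES)
iou = per_class_iou(cm)
print("per-class IoU:", np.round(iou, 3), "mean IoU:", round(float(np.nanmean(iou)), 3))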


    Title:
    Multi-class lane marking segmentation dataset for vision-based environmental perception in autonomous driving

    Publication date:
    2025-05-01

    Type of media:
    Article (Journal)

    Type of material:
    Electronic Resource

    Language:
    English




    Vision-based environmental perception for autonomous driving

    Liu, Fei / Lu, Zihao / Lin, Xianke | SAGE Publications | 2025


    Driving assistant apparatus with lane marking

    WATANABE KAZUYA | European Patent Office | 2020