In this study, a pixel-text level, multi-stage, multi-modality fusion segmentation method is proposed to make open-world driving scene segmentation more efficient. It can serve the varied semantic perception needs of real-world autonomous driving scenarios. The method can finely segment unseen labels without additional corresponding semantic segmentation annotations, using only existing semantic segmentation data. The proposed method consists of four modules. A visual representation embedding module and a segmentation command embedding module extract the driving scene and the segmentation category command, respectively. A multi-stage multi-modality fusion module fuses the driving scene's visual information with the segmentation command's text information at multiple feature scales at the pixel-text level. Finally, a cascade segmentation head grounds the segmentation command text in the driving scene, encouraging the model to generate high-quality semantic segmentation results. In the experiments, we first verify the effectiveness of the method for zero-shot segmentation on a popular driving scene segmentation dataset. We also confirm its effectiveness on synonym unseen labels and hierarchy unseen labels for open-world semantic segmentation.
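Purely as an illustration of the kind of pipeline the abstract describes, and not the authors' implementation, the sketch below wires up a pixel-text fusion segmenter in PyTorch: a toy two-stage visual backbone, per-stage pixel-text similarity maps computed against category command embeddings, and a simple cascade-style sum of the upsampled maps. All class names, dimensions, and the stand-in convolutional stages are assumptions; a real system would use a pretrained image backbone and a frozen text encoder (e.g., a CLIP-style pair).

```python
# Hypothetical sketch of a pixel-text, multi-stage fusion segmenter (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelTextFusion(nn.Module):
    """Fuse one visual feature stage with text embeddings at the pixel-text level."""

    def __init__(self, vis_dim, txt_dim, dim):
        super().__init__()
        self.vis_proj = nn.Conv2d(vis_dim, dim, kernel_size=1)  # project pixels to shared space
        self.txt_proj = nn.Linear(txt_dim, dim)                  # project commands to shared space

    def forward(self, vis_feat, txt_emb):
        # vis_feat: (B, C, H, W); txt_emb: (B, K, txt_dim) for K segmentation category commands
        v = self.vis_proj(vis_feat)                              # (B, D, H, W)
        t = self.txt_proj(txt_emb)                               # (B, K, D)
        # Pixel-text similarity acts as a per-category response map.
        score = torch.einsum("bdhw,bkd->bkhw", v, t)
        return score / v.shape[1] ** 0.5


class OpenWorldSegmenter(nn.Module):
    """Toy multi-stage version: fuse commands with two feature stages, then cascade the maps."""

    def __init__(self, txt_dim=512, dim=256):
        super().__init__()
        # Stand-ins for a real visual representation embedding module.
        self.stage1 = nn.Conv2d(3, 64, kernel_size=3, stride=4, padding=1)
        self.stage2 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
        self.fuse1 = PixelTextFusion(64, txt_dim, dim)
        self.fuse2 = PixelTextFusion(128, txt_dim, dim)

    def forward(self, image, txt_emb):
        f1 = self.stage1(image)
        f2 = self.stage2(f1)
        s1 = self.fuse1(f1, txt_emb)
        s2 = self.fuse2(f2, txt_emb)
        s2 = F.interpolate(s2, size=s1.shape[-2:], mode="bilinear", align_corners=False)
        # Cascade-style grounding: coarse map plus finer-stage correction.
        logits = s1 + s2
        return F.interpolate(logits, size=image.shape[-2:], mode="bilinear", align_corners=False)


# Usage: embeddings for K seen or unseen category commands come from any frozen text encoder.
model = OpenWorldSegmenter()
image = torch.randn(1, 3, 128, 256)
txt = torch.randn(1, 5, 512)           # 5 category command embeddings
print(model(image, txt).shape)         # torch.Size([1, 5, 128, 256])
```

Because the category commands enter only as text embeddings, unseen labels (including synonyms or hierarchy variants) can be queried at inference time without new segmentation annotations, which is the property the abstract emphasizes.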
Open-world driving scene segmentation via multi-stage and multi-modality fusion of vision-language embedding
2023-06-04
1419123 bytes
Conference paper
Electronic Resource
English
Multi-Sensor Scene Segmentation
Springer Verlag | 2023