Adapting Generic RGB-D Salient Object Detection for Specific Traffic Scenarios

Existing RGB-D salient object detection (SOD) models are primarily trained on general-purpose datasets, which can lead to domain-shift issues when they are applied directly to new, specific scenes such as stereo traffic datasets. Although large-scale datasets (COME15K and ReDWeb-S) have been released, they only partially address the domain-shift problem. From the perspective of data augmentation, this paper presents a novel solution that follows a weakly supervised approach to adapt generic RGB-D SOD models to specific scenarios, with a focus on traffic scene imagery. Our key idea is to equip plain videos of the target scenarios (i.e., traffic scenes) with newly estimated saliency-informative depth maps and pseudo SOD ground truths (GTs), enabling them to support the retraining of existing RGB-D SOD models so that they meet the requirements of these specific scenes. To achieve this, we offer a fresh perspective on how depth information can be leveraged in the SOD task and introduce a new paradigm for extracting intrinsic information from optical flow derived from videos to refine RGB-D SOD models. Our method achieves a 1.2% improvement in F-measure on RGB-D datasets and a 27% improvement on real-world street-view datasets compared with baseline models. These results demonstrate the effectiveness of our approach in enhancing model adaptability for traffic scene imagery, even with limited target-domain data. Code, datasets, and results are available at https://github.com/MengkeSong/AGSS.
IEEE Transactions on Intelligent Transportation Systems; Vol. 26, No. 8; pp. 12329-12343
August 1, 2025
13637142 bytes
Journal article
Electronic resource
English
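The abstract's core mechanism, converting motion cues from target-scene videos into pseudo SOD ground truths, can be illustrated with a minimal sketch. The sketch below is an assumption-laden illustration, not the paper's actual pipeline: it uses OpenCV's Farneback dense optical flow as a stand-in flow estimator and Otsu thresholding on the flow magnitude to produce a coarse binary pseudo-GT mask; the function name `pseudo_sod_gt` and all parameter choices are hypothetical.

```python
# Hypothetical sketch: derive a pseudo SOD ground-truth mask from the
# optical flow between two consecutive video frames. The flow estimator
# (Farneback) and the thresholding step are stand-in choices, not the
# method described in the paper.
import cv2
import numpy as np

def pseudo_sod_gt(frame_prev: np.ndarray, frame_next: np.ndarray) -> np.ndarray:
    """Return a binary pseudo ground-truth mask (uint8, values in {0, 255})."""
    gray_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    gray_next = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between the two frames (H x W x 2).
    flow = cv2.calcOpticalFlowFarneback(
        gray_prev, gray_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Motion magnitude as a saliency proxy: in traffic scenes, salient
    # (moving) objects tend to move differently from the background.
    magnitude = np.linalg.norm(flow, axis=2)
    magnitude = cv2.normalize(magnitude, None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)

    # Otsu threshold plus morphological closing for a clean binary mask.
    _, mask = cv2.threshold(magnitude, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```

Under the abstract's scheme, a mask like this would be paired with an estimated saliency-informative depth map and used as a weak label when retraining an existing RGB-D SOD model on target-scene frames.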