https://orcid.org/0000-0002-5177-5522
https://orcid.org/0000-0003-3201-002X
https://orcid.org/0000-0003-4361-956X
Contrast and part-whole relations, as induced by deep neural networks such as Convolutional Neural Networks (CNNs) and Capsule Networks (CapsNets), are two types of semantic cues for deep salient object detection. However, few works have paid attention to their complementary properties in the context of saliency prediction. In this paper, we investigate this issue and propose a Type-Correlation Guidance Network (TCGNet) for salient object detection. Specifically, a Multi-Type Cue Correlation (MTCC) module covering CNNs and CapsNets is designed to extract contrast and part-whole relational semantics, respectively. Using MTCC, two correlation matrices containing complementary information are computed from these two types of semantics. In return, these correlation matrices are used to guide the learning of the above semantics and generate better saliency cues. In addition, a Type Interaction Attention (TIA) module is developed to let the semantics from CNNs and CapsNets interact for saliency prediction. Experiments and analysis on five benchmarks show the superiority of the proposed approach. Code has been released at https://github.com/liuyi1989/TCGNet.
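As a rough illustration of the cross-type correlation guidance described in the abstract, the PyTorch sketch below computes two correlation matrices between a feature map standing in for CNN (contrast) semantics and one standing in for CapsNet (part-whole) semantics, then uses each matrix to re-aggregate the other type's features as guidance. This is a minimal sketch under assumed (B, C, H, W) tensor shapes; the module and variable names (MultiTypeCueCorrelation, f_cnn, f_caps) are hypothetical and do not reproduce the authors' actual implementation.

import torch
import torch.nn as nn

class MultiTypeCueCorrelation(nn.Module):
    """Hypothetical sketch of MTCC-style guidance: correlate CNN (contrast)
    and CapsNet (part-whole) semantics, then use the two correlation
    matrices to guide each other's features."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj_cnn = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_caps = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f_cnn: torch.Tensor, f_caps: torch.Tensor):
        b, c, h, w = f_cnn.shape
        # Project and flatten spatial dimensions: (B, C, H*W).
        q = self.proj_cnn(f_cnn).flatten(2)
        k = self.proj_caps(f_caps).flatten(2)
        # Two correlation matrices between the semantic types, (B, HW, HW):
        # row i of corr_cnn2caps relates CNN position i to all CapsNet positions.
        corr_cnn2caps = torch.softmax(torch.bmm(q.transpose(1, 2), k) / c ** 0.5, dim=-1)
        corr_caps2cnn = torch.softmax(torch.bmm(k.transpose(1, 2), q) / c ** 0.5, dim=-1)
        # Each matrix guides the complementary branch by re-aggregating
        # the other type's semantics at every spatial position.
        g_cnn = torch.bmm(f_caps.flatten(2), corr_cnn2caps.transpose(1, 2)).view(b, c, h, w)
        g_caps = torch.bmm(f_cnn.flatten(2), corr_caps2cnn.transpose(1, 2)).view(b, c, h, w)
        # Residual guidance keeps the original semantics intact.
        return f_cnn + g_cnn, f_caps + g_caps

# Example: guide 64-channel feature maps from two hypothetical branches.
mtcc = MultiTypeCueCorrelation(channels=64)
f_cnn = torch.randn(2, 64, 16, 16)   # stand-in for contrast semantics (CNN)
f_caps = torch.randn(2, 64, 16, 16)  # stand-in for part-whole semantics (CapsNet)
g_cnn, g_caps = mtcc(f_cnn, f_caps)

The residual form (f + g) is one plausible way to realize "guidance" without discarding the branch's own cues; the paper's TIA module, which lets the two semantics interact via attention for the final prediction, would follow a similar cross-attention pattern.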
TCGNet: Type-Correlation Guidance for Salient Object Detection
IEEE Transactions on Intelligent Transportation Systems; 25, 7; 6633-6644
2024-07-01
3684746 bytes
Article (Journal)
Electronic Resource
English
Salient object detection: From pixels to segments | British Library Online Contents | 2013
Salient object detection: manifold-based similarity adaptation approach | British Library Online Contents | 2014
Salient Object Detection: A Discriminative Regional Feature Integration Approach | British Library Online Contents | 2017