The paper presents a novel approach to image retrieval that combines textual and object-based visual features in order to reduce the gap between a user's subjective interpretation of similarity and the results produced by objective similarity models. A multi-scale segmentation framework is proposed to detect prominent image objects. These objects are clustered according to their visual features and mapped to related words determined through psychophysical studies. Furthermore, a hierarchy of words expressing higher-level meaning is built on the basis of natural language processing and user evaluation. Experiments on a large set of natural images show that this two-layer word association, together with support for a variety of query specifications and options, yields higher retrieval precision with respect to the user's retrieval semantics.
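The following is a minimal sketch, not the authors' implementation, of the two-layer word association idea summarized in the abstract: segmented objects are clustered by visual features, each cluster is tied to a word, and words are grouped under higher-level terms used at query time. All feature values, cluster labels, and the word hierarchy below are illustrative assumptions.

```python
# Minimal sketch of two-layer word association for object-based image retrieval.
# Assumptions: random feature vectors stand in for real object descriptors, and
# the word labels/hierarchy are invented for illustration only.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical visual feature vectors of segmented objects (one row per object).
object_features = np.random.rand(100, 8)

# First layer: cluster objects by visual features and attach an illustrative
# word to each cluster (the paper derives such words from psychophysical studies).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(object_features)
cluster_words = {0: "sky", 1: "grass", 2: "water", 3: "sand"}  # assumed labels

# Second layer: a hypothetical hierarchy mapping object words to higher-level terms.
higher_level = {"sky": "outdoor", "grass": "outdoor", "water": "nature", "sand": "nature"}

# Index images by both layers of words so a query such as "outdoor" matches
# images whose objects fall in the "sky" or "grass" clusters.
image_objects = {0: [0, 5, 7], 1: [10, 20], 2: [30, 31, 32]}  # image id -> object rows
index = defaultdict(set)
for image_id, rows in image_objects.items():
    for row in rows:
        word = cluster_words[kmeans.labels_[row]]
        index[word].add(image_id)
        index[higher_level[word]].add(image_id)

print(sorted(index["outdoor"]))  # images retrievable via the higher-level term
```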
Combining words and object-based visual features in image retrieval
2003-01-01
1636114 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2003
British Library Online Contents | 2008
British Library Conference Proceedings | 2001