This paper is about learning from partial information in the form of equivalence constraints. Equivalence constraints provide relational information about the labels of data points rather than the labels themselves. Our work is motivated by the observation that in many real-life applications partial information about the data can be obtained at very little cost. For example, in video indexing we may want to use the fact that a sequence of faces obtained from successive frames in roughly the same location is likely to contain the same unknown individual. Learning from equivalence constraints differs from learning from labels and poses new technical challenges. In this paper we present three novel methods for clustering and classification that use equivalence constraints. We provide results of our methods on a distributed image-querying system that works on a large facial image database, and on the clustering and retrieval of surveillance data. Our results show that image retrieval performance can be significantly improved by taking advantage of assumptions such as temporal continuity in the data. Significant improvement is also obtained by letting the users of the system take the role of distributed teachers, which reduces the need for expensive labeling by paid human labor.
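As a rough illustration of the temporal-continuity idea mentioned in the abstract (this sketch is not taken from the paper), positive equivalence constraints can be generated by pairing face detections from successive frames that appear at roughly the same image location. The function name, detection format, and thresholds below are assumptions made for the example only.

```python
# Minimal sketch, assuming detections are given as (frame_index, x, y) tuples.
# Detections in consecutive frames at nearly the same position are presumed to
# show the same (unknown) person and are paired as positive equivalence
# constraints. Thresholds are illustrative, not values from the paper.
from itertools import combinations

def temporal_equivalence_constraints(detections, max_frame_gap=1, max_dist=30.0):
    constraints = []  # index pairs presumed to share the same (unknown) label
    for (i, a), (j, b) in combinations(enumerate(detections), 2):
        frame_gap = abs(a[0] - b[0])
        dist = ((a[1] - b[1]) ** 2 + (a[2] - b[2]) ** 2) ** 0.5
        if 0 < frame_gap <= max_frame_gap and dist <= max_dist:
            constraints.append((i, j))
    return constraints

# Example: three detections from one face track across successive frames,
# plus one detection elsewhere in the image (a different person).
dets = [(0, 100, 120), (1, 103, 118), (2, 101, 121), (1, 400, 60)]
print(temporal_equivalence_constraints(dets))  # -> [(0, 1), (1, 2)]
```

Negative ("not the same person") constraints could similarly be derived from detections that co-occur in the same frame, though that case is omitted from this sketch.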
Enhancing image and video retrieval: learning via equivalence constraints
2003-01-01
455455 bytes
Conference paper
Electronic Resource
English
British Library Conference Proceedings | 2003
Similar items:
Analog Video Image-Enhancing Device (NTRS | 1986)
Video Image Communication And Retrieval - Updated (NTRS | 1991)
Special issue on image and video retrieval evaluation (British Library Online Contents | 2010)
Content Based Image and Video Retrieval Using Embedded Text (British Library Conference Proceedings | 2006)