We present a comprehensive strategy for evaluating image retrieval algorithms. Because automated image retrieval is only meaningful in its service to people, performance characterization must be grounded in human evaluation. Thus we have collected a large data set of human evaluations of retrieval results, both for query by image example and query by text. The data is independent of any particular image retrieval algorithm and can be used to evaluate and compare many such algorithms without further data collection. The data and calibration software are available on-line. We develop and validate methods for generating sensible evaluation data, calibrating for disparate evaluators, mapping image retrieval system scores to the human evaluation results, and comparing retrieval systems. We demonstrate the process by providing grounded comparison results for several algorithms.
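The abstract describes grounding retrieval-system scores in calibrated human judgments and then comparing algorithms on that common footing. As an illustration only (not the authors' released data or calibration software), the minimal Python sketch below shows one way such a grounded comparison could look: the dictionaries `human_relevance` and `system_score`, the query list, and the top-k averaging metric are all assumptions made for the example.

```python
# A minimal sketch, assuming hypothetical data structures:
#   human_relevance[(query, image)] -> calibrated human relevance score in [0, 1]
#   system_score[(query, image)]    -> an algorithm's similarity score
# Higher grounded score = better agreement with the human evaluators.

from statistics import mean

def mean_human_relevance_at_k(system_score, human_relevance, queries, k=10):
    """Rank the human-judged images per query by the algorithm's score and
    average the human relevance of the top-k results over all queries."""
    per_query = []
    for q in queries:
        # Candidate images that humans judged for this query.
        candidates = [img for (qq, img) in human_relevance if qq == q]
        ranked = sorted(candidates,
                        key=lambda img: system_score.get((q, img), 0.0),
                        reverse=True)
        top = ranked[:k]
        if top:
            per_query.append(mean(human_relevance[(q, img)] for img in top))
    return mean(per_query) if per_query else 0.0

# Usage (hypothetical): compare two algorithms A and B on the same judged data.
# score_a = mean_human_relevance_at_k(scores_A, judgments, query_list)
# score_b = mean_human_relevance_at_k(scores_B, judgments, query_list)
```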


    Title: Evaluating image retrieval

    Contributors:

    Publication date: 2005-01-01

    Size: 691,672 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Similar titles:

    A method for evaluating the performance of content-based image retrieval systems

    Black, J. A. / Fahmy, G. / Panchanathan, S. | IEEE | 2002


    Evaluating content-based image retrieval techniques using perceptually based metrics [3647-16]

    Payne, J. S. / Hepplewhite, L. / Stonham, T. J. et al. | British Library Conference Proceedings | 1999


    A Method for Evaluating the Performance of Content-Based Image Retrieval Systems

    Black, J. A. / Fahmy, G. / Panchanathan, S. et al. | British Library Conference Proceedings | 2002


    Boosting Image Retrieval

    Tieu, K. / Viola, P. | British Library Online Contents | 2004


    Image retrieval with relevance feedback

    Li, Fang / Ang, Yew Hock | IEEE | 2000