Annotating image data at scale and at high quality using professional annotators is expensive and time-consuming. This paper investigates a new, cost-efficient, and scalable source of annotators: large pools of online gamers, so-called gaming crowds, addressing the growing need for labeled datasets in computer vision and machine learning. While gaming crowds offer a cost-efficient and readily available annotator pool, their performance varies across tasks. Our experiments on a custom image dataset reveal that, for certain categorical object annotation tasks, gaming crowds can match professional annotators in quality, albeit with more noise. Complex tasks remain challenging, leading to higher ambiguity and reduced agreement with professionals, particularly for finer distinctions. Key insights from the study underscore the importance of task design in ensuring clarity and minimizing ambiguity, as well as the need for strategies to identify and mitigate malicious annotators while sustaining user engagement throughout the annotation process. A comprehensive analysis discusses challenges and opportunities in gaming crowdsourcing for image annotation. Despite its potential, gaming crowdsourcing must be refined further before it can be integrated successfully into large-scale image annotation, benefiting computer vision and machine learning applications.
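The study's central comparison rests on measuring agreement between crowd and professional labels. The abstract does not name a metric, but as a minimal, hypothetical sketch, chance-corrected agreement on categorical object labels could be computed with Cohen's kappa; the label lists below are made-up placeholders, not data from the paper:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical per-image categorical labels from each annotator pool
    gamer_labels = ["cat", "dog", "cat", "bird", "dog"]
    professional_labels = ["cat", "dog", "bird", "bird", "dog"]

    # Chance-corrected agreement: 1.0 = perfect agreement, 0.0 = chance level
    kappa = cohen_kappa_score(gamer_labels, professional_labels)
    print(f"Cohen's kappa between gamers and professionals: {kappa:.2f}")

A kappa for gamer-versus-professional labels close to the kappa between two professional annotators would support the claim that gaming crowds can match professional quality on such tasks.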
Are Gamers Good Annotators? A Comparative Study of Gaming Crowds and Professional Annotators
24.09.2024
2,721,218 bytes
Conference paper
Electronic resource
English