The influential Text REtrieval Conference (TREC) has always relied upon
specialist assessors, or occasionally the participating groups
themselves, to create relevance judgements for the tracks that
it runs. Recently, however, crowdsourcing has been championed as a cheap,
fast and effective alternative to traditional TREC-like assessments. In
2010, TREC tracks experimented with crowdsourcing for the very first
time. In this paper, we report our successful experience in creating
relevance assessments for the TREC Blog track 2010 top news stories task
using crowdsourcing. In particular, we crowdsourced both real-time
newsworthiness assessments for news stories as well as traditional
relevance assessments for blog posts. We conclude that crowdsourcing
appears to be not only a feasible but also a cheap and fast means of
generating relevance assessments. Furthermore, we detail our experiences
running the crowdsourced evaluation of the TREC Blog track, discuss the
lessons learned, and provide best practices.
Richard McCreadie, Craig Macdonald and Iadh Ounis.
Identifying Top News using Crowdsourcing.
Information Retrieval Journal, 2012.