Tuesday, 20 November 2012

CrowdTerrier: Automatic Crowdsourced Relevance Assessments with Terrier

Information retrieval (IR) systems rely on document relevance assessments for queries to gauge their effectiveness on a variety of tasks, e.g. Web result ranking. Evaluation forums such as TREC and CLEF provide relevance assessments for common tasks. However, it is not possible for such venues to cover all of the collections and tasks currently investigated in IR. Hence, it falls to individual researchers to generate the relevance assessments for new tasks and/or collections. Moreover, relevance assessment generation can be a time-consuming, difficult and potentially costly process. Recently, crowdsourcing has been shown to be a fast and cheap method to generate relevance assessments in a semi-automatic manner [1]. In this case, the relevance assessment task is outsourced to a large group of non-expert workers, who are rewarded via micro-payments....
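To illustrate the general idea of outsourcing relevance assessment as micro-tasks, the sketch below packages query-document pairs as a CSV that could be uploaded to a crowdsourcing platform such as Amazon Mechanical Turk. This is only a minimal, hypothetical example: the RelevanceTask record, the field names and the output file are my own assumptions and are not part of Terrier or the CrowdTerrier API described in the paper.

// Hypothetical sketch: turning query-document pairs into crowdsourcing
// micro-tasks (a CSV whose columns match the input variables of a HIT
// template). Names here are illustrative, not CrowdTerrier's API.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class AssessmentTaskWriter {

    // One micro-task: a worker judges whether a document is relevant to a query.
    record RelevanceTask(String queryId, String query, String docId, String snippet) {}

    public static void main(String[] args) throws IOException {
        List<RelevanceTask> tasks = List.of(
            new RelevanceTask("q1", "terrier information retrieval",
                              "d42", "Terrier is an open-source search engine ..."),
            new RelevanceTask("q1", "terrier information retrieval",
                              "d87", "Dog breeds of Scotland include the terrier ...")
        );

        Path out = Path.of("relevance_tasks.csv");
        try (PrintWriter writer = new PrintWriter(Files.newBufferedWriter(out))) {
            // Header row: one column per variable used in the assessment form.
            writer.println("query_id,query,doc_id,snippet");
            for (RelevanceTask t : tasks) {
                writer.printf("%s,%s,%s,%s%n",
                              t.queryId(), escape(t.query()), t.docId(), escape(t.snippet()));
            }
        }
        System.out.println("Wrote " + tasks.size() + " micro-tasks to " + out);
    }

    // Quote fields that may contain commas or quotes (RFC 4180 style).
    private static String escape(String field) {
        if (field.contains(",") || field.contains("\"")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }
}

Each CSV row becomes one micro-task; workers complete the tasks for micro-payments, and the collected labels are aggregated into relevance assessments.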

Richard McCreadie, Craig Macdonald and Iadh Ounis.
CrowdTerrier: Automatic Crowdsourced Relevance Assessments with Terrier.
In Proceedings of SIGIR 2012

PDF
Bibtex
