Tuesday, 20 November 2012

MapReduce indexing strategies: Studying scalability and efficiency

In Information Retrieval (IR), the efficient indexing of terabyte-scale and larger corpora is still a difficult problem. MapReduce has been proposed as a framework for distributing data-intensive operations across multiple processing machines. In this work, we provide a detailed analysis of four MapReduce indexing strategies of varying complexity. Moreover, we evaluate these indexing strategies by implementing them in an existing IR framework, and performing experiments using the Hadoop MapReduce implementation, in combination with several large standard TREC test corpora. In particular, we examine the efficiency of the indexing strategies, and for the most efficient strategy, we examine how it scales with respect to corpus size and processing power. Our results attest to both the importance of minimising data transfer between machines for IO-intensive tasks like indexing, and the suitability of the per-posting-list MapReduce indexing strategy, in particular for indexing at a terabyte scale. Hence, we conclude that MapReduce is a suitable framework for the deployment of large-scale indexing.
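To give a flavour of the per-posting-list idea, here is a minimal, in-memory sketch (not the paper's actual Hadoop implementation): each mapper emits one (term, posting) pair per term in a document, the shuffle phase groups pairs by term, and each reducer assembles the complete posting list for a single term. All function names and the toy corpus below are illustrative assumptions.

```python
from collections import defaultdict

def map_document(doc_id, text):
    """Mapper (sketch): emit one (term, posting) pair per distinct term,
    where a posting records the document id and term frequency."""
    counts = defaultdict(int)
    for term in text.lower().split():
        counts[term] += 1
    return [(term, (doc_id, tf)) for term, tf in counts.items()]

def reduce_term(term, postings):
    """Reducer (sketch): gather all postings for one term into a
    single posting list, sorted by document id."""
    return (term, sorted(postings))

def run_indexing(corpus):
    """Drive the simulated MapReduce job over an in-memory corpus."""
    shuffled = defaultdict(list)
    for doc_id, text in corpus.items():
        for term, posting in map_document(doc_id, text):
            shuffled[term].append(posting)  # shuffle: group by term
    return dict(reduce_term(t, p) for t, p in shuffled.items())

# Toy corpus: two tiny "documents"
corpus = {1: "to be or not to be", 2: "to index or to search"}
index = run_indexing(corpus)
print(index["to"])  # -> [(1, 2), (2, 2)]
```

Because reducers receive all postings for a term at once, each posting list is written exactly once, which is what keeps inter-machine data transfer low in this strategy.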

Richard McCreadie, Craig Macdonald, and Iadh Ounis.
MapReduce indexing strategies: Studying scalability and efficiency.
Information Processing and Management, Special Issue on Large Scale Distributed Systems.
DOI: 10.1016/j.ipm.2010.12.003



