Empirical challenges and solutions in constructing a high-performance metasearch engine
Purpose ‐ This paper examines in detail the role that missing documents, broken links and duplicate items play in the results-merging process of a metasearch engine. It investigates related practical challenges, proposes solutions, and applies those solutions to improve an existing results-aggregation model.

Design/methodology/approach ‐ This research measures the increase in retrieval effectiveness that an existing results-merging model gains from the proposed improvements. The 50 queries of the 2002 TREC Web track, a standard test collection based on a snapshot of the World Wide Web, were used to evaluate the retrieval effectiveness of the suggested method. Three popular web search engines (Ask, Bing and Google) were selected as the underlying resources of the metasearch engine. Each of the 50 queries was submitted to all three search engines, and for each query the top ten non-sponsored results of each engine were retrieved. The returned result lists were aggregated using a proposed algorithm that takes the practical issues of the process into account. The effectiveness of the merged result lists was measured with a well-known performance indicator, TSAP (TREC-style average precision).

Findings ‐ Experimental results demonstrate that the proposed model increases the performance of an existing results-merging system by 14.39 percent on average.

Practical implications ‐ The findings of this research are helpful for metasearch-engine designers and provide motivation for vendors of web search engines to improve their technology.

Originality/value ‐ This study presents concepts, practical challenges, solutions and experimental results in the field of web metasearching that have not been investigated previously.
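The abstract does not spell out how TSAP is computed. As a reference point only, a minimal sketch of the usual TSAP@N formulation (each relevant document at rank i contributes 1/i to the sum, which is then divided by the cutoff N) might look like the following; the function name and the boolean-list input format are illustrative assumptions, not part of the paper's implementation:

```python
def tsap_at_n(relevance, n=10):
    """TSAP@N (TREC-style average precision) for one ranked result list.

    relevance: list of booleans, where relevance[i] is True if the
    document at rank i+1 was judged relevant (an assumed input format).
    A relevant document at rank i contributes 1/i; non-relevant ones
    contribute 0; the sum is divided by the cutoff N.
    """
    total = sum(1.0 / (i + 1) for i, rel in enumerate(relevance[:n]) if rel)
    return total / n
```

For example, a top-10 list with relevant documents at ranks 1 and 3 scores (1 + 1/3) / 10 ≈ 0.1333; a metasearch evaluation like the one described would then average this score over all 50 queries.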