Computing Reviews
The effect of pool depth on system evaluation in TREC
Keenan S., Smeaton A., Keogh G. Journal of the American Society for Information Science 52(7): 570-574, 2001. Type: Article
Date Reviewed: Sep 1 2001

Information retrieval encompasses a wide variety of research efforts, reaching into such business-oriented topics as corporate knowledge management. Nonetheless, its fundamental evaluation concepts when searching for information are recall and precision. Recall indicates how many of all existing relevant items were retrieved, whereas precision describes how many of the retrieved items are relevant.
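
A minimal sketch (not from the review or the paper under review) of how these two measures are computed from the set of retrieved documents and the set of known relevant documents; the document IDs are hypothetical:

```python
def recall_precision(retrieved, relevant):
    """Compute recall and precision for a single query.

    retrieved: iterable of document IDs returned by the system
    relevant:  iterable of document IDs judged relevant for the query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant documents that were actually retrieved
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical example: 3 of 4 relevant documents retrieved among 5 results
print(recall_precision(["d1", "d2", "d3", "d4", "d5"], ["d1", "d2", "d3", "d9"]))
# -> (0.75, 0.6)
```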

The National Institute of Standards and Technology (NIST) organized a number of experiments as part of the Text REtrieval Conference (TREC) that allow researchers to compare different algorithms and concepts with regard to recall and precision. Participants in TREC-6 had to process 50 queries, which they did not know in advance, against a text corpus of 2GB. For each query, the competing systems returned the 1,000 documents they deemed most relevant, and each system could submit more than one run. The top 100 documents of every run (this cut-off is the pool depth) were merged into a pool, the pooled documents were judged for relevance, and these judgments were then used to score the runs in terms of recall and precision.
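
A rough sketch of the pooling step described above, based only on the description in this review (run names, document IDs, and the tiny pool depth are hypothetical):

```python
def build_pool(runs, pool_depth=100):
    """Merge the top `pool_depth` documents of every run into one judging pool.

    runs: dict mapping run name -> ranked list of document IDs for one query
    Returns the set of document IDs that assessors would judge for relevance.
    """
    pool = set()
    for ranked_docs in runs.values():
        pool.update(ranked_docs[:pool_depth])
    return pool

# Hypothetical example with two tiny runs and a pool depth of 2
runs = {
    "run_A": ["d3", "d7", "d1", "d9"],
    "run_B": ["d7", "d2", "d3", "d5"],
}
print(build_pool(runs, pool_depth=2))  # {'d2', 'd3', 'd7'} (set order may vary)
```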

Even though the systems retrieve different documents, the chances seem high that nearly all relevant documents end up in a pool of depth 100. Keenan et al. discuss whether a different pool depth would change the evaluation of a single run, as opposed to how many relevant documents can be found across several runs. Intuitively, a good run, which keeps retrieving relevant results, deserves to be judged to a greater depth than a bad run. The authors come to two conclusions: a pool depth of 100 is appropriate, since by that point systems will probably have found all the relevant documents they will ever find; and good systems can already be recognized at shallower depths, where they outperform weak systems.
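
One illustrative way to make the pool-depth question concrete (a sketch under assumptions, not the authors' actual method) is to count, for a single run, how many judged-relevant documents appear within the top k results as k grows; if that count stops increasing well before k = 100, a shallower pool would have supported the same assessment of the run:

```python
def relevant_found_by_depth(ranked_docs, relevant, depths=(10, 20, 50, 100)):
    """For one run, count the judged-relevant documents within each cut-off depth."""
    relevant = set(relevant)
    return {depth: sum(1 for doc in ranked_docs[:depth] if doc in relevant)
            for depth in depths}

# Hypothetical run of 100 documents in which every tenth document is relevant
run = ["d%d" % i for i in range(100)]
qrels = ["d%d" % i for i in range(0, 100, 10)]
print(relevant_found_by_depth(run, qrels))  # {10: 1, 20: 2, 50: 5, 100: 10}
```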

Reviewer: Edgar R. Weippl   Review #: CR125326
Relevance Feedback (H.3.3 ... )
 
Other reviews under "Relevance Feedback":
Finding statistics online
Berinstein P., Bjørner S., Information Today, Inc., Medford, NJ, 1998. Type: Book (9780910965255)
Nov 1 1998
Regions and levels: measuring and mapping users’ relevance judgments
Spink A., Greisdorf H. Journal of the American Society for Information Science 52(2): 161-173, 2001. Type: Article
Dec 1 2001
User perspectives on relevance criteria: a comparison among relevant, partially relevant, and not-relevant judgements
Maglaughlin K., Sonnenwald D. Journal of the American Society for Information Science 53(5): 327-342, 2002. Type: Article
Jul 24 2002
