Abstract:
We describe the overall organization of the CLEF 2003 evaluation campaign, with a particular focus on the cross-language ad hoc and domain-specific retrieval tracks. The paper discusses the evaluation approach adopted, describes the tracks and tasks offered and the test collections used, and provides an outline of the guidelines given to the participants. It concludes with an overview of the techniques employed for results calculation and analysis for the monolingual, bilingual, multilingual, and GIRT tasks.
Keywords: Information retrieval | Natural language processing | Artificial intelligence