Abstract: We describe the overall organization of the CLEF 2003 evaluation campaign, with a particular focus on the cross-language ad hoc and domain-specific retrieval tracks. The paper discusses the evaluation approach adopted, describes the tracks and tasks offered and the test collections used, and provides an outline of the guidelines given to the participants. It concludes with an overview of the techniques employed for results calculation and analysis for the monolingual, bilingual, multilingual, and GIRT tasks.
Topics: Natural language processing, Artificial intelligence, Information retrieval