Venue: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics - ACL '02
Type: Publication
Abstract: Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused. We propose a method of automatic machine translation evaluation that is quick, inexpensive, and language-independent, that correlates highly with human evaluation, and that has little marginal cost per run. We present this method as an automated understudy to skilled human judges that substitutes for them when there is a need for quick or frequent evaluations.
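The automatic evaluation method described in this abstract is BLEU, which scores a candidate translation by clipped (modified) n-gram precision against one or more reference translations, combined with a brevity penalty. A minimal sketch of that idea follows; the function names are illustrative, and the brevity penalty here uses the shortest reference length as a simplification of the paper's effective reference length.

```python
from collections import Counter
import math

def modified_precision(candidate, references, n):
    """Clipped n-gram precision: each candidate n-gram counts at most
    as many times as it appears in any single reference."""
    cand = Counter(tuple(candidate[i:i + n])
                   for i in range(len(candidate) - n + 1))
    if not cand:
        return 0.0
    # For each n-gram, take the maximum count over all references.
    max_ref = Counter()
    for ref in references:
        ref_counts = Counter(tuple(ref[i:i + n])
                             for i in range(len(ref) - n + 1))
        for gram, count in ref_counts.items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / sum(cand.values())

def bleu(candidate, references, max_n=4):
    """Geometric mean of modified 1..max_n-gram precisions,
    scaled by a brevity penalty for short candidates."""
    precisions = [modified_precision(candidate, references, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    c = len(candidate)
    r = min(len(ref) for ref in references)  # simplification: shortest reference
    brevity_penalty = 1.0 if c > r else math.exp(1 - r / c)
    return brevity_penalty * math.exp(log_avg)
```

The clipping step is what penalizes degenerate outputs: a candidate consisting of seven repetitions of "the" scores only 2/7 unigram precision against a reference containing "the" twice, rather than 7/7.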
Topics: Artificial intelligence, Natural language processing, Machine learning
Popularity: This indicator reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
Influence: This indicator reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
Citation Count: This is an alternative to the "Influence" indicator; it also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
Impulse: This indicator reflects the initial momentum of an article directly after its publication, based on the underlying citation network.