- (Doddington et al., 2004) ⇒ George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. (2004). “The Automatic Content Extraction (ACE) Program – Tasks, Data, and Evaluation.” In: Proceedings of the Conference on Language Resources and Evaluation (LREC 2004).
- (Doddington, 2002) ⇒ George Doddington. (2002). “Automatic Evaluation of Machine Translation Quality Using n-Gram Co-occurrence Statistics.” In: Proceedings of the Second International Conference on Human Language Technology Research (HLT 2002).
- ABSTRACT: Evaluation is recognized as an extremely helpful forcing function in Human Language Technology R&D. Unfortunately, evaluation has not been a very powerful tool in machine translation (MT) research because it requires human judgments and is thus expensive and time-consuming and not easily factored into the MT research agenda. However, at the July 2001 TIDES PI meeting in Philadelphia, IBM described an automatic MT evaluation technique that can provide immediate feedback and guidance in MT research. Their idea, which they call an "evaluation understudy", compares MT output with expert reference translations in terms of the statistics of short sequences of words (word N-grams). The more of these N-grams that a translation shares with the reference translations, the better the translation is judged to be. The idea is elegant in its simplicity. But far more important, IBM showed a strong correlation between these automatically generated scores and human judgments of translation quality. As a result, DARPA commissioned NIST to develop an MT evaluation facility based on the IBM work. This utility is now available from NIST and serves as the primary evaluation measure for TIDES MT research.
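- The n-gram co-occurrence idea the abstract describes can be sketched as a clipped n-gram precision: count how many of the candidate translation's word n-grams also occur in a reference translation, clipping each n-gram's credit at its reference count. This is a minimal illustration of the general technique, not the NIST utility itself; the function names are illustrative.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of word n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: the fraction of the candidate's n-grams
    that also appear in the reference, with each n-gram's matches
    clipped at its count in the reference."""
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    matched = sum(min(count, ref[g]) for g, count in cand.items())
    total = sum(cand.values())
    return matched / total if total else 0.0

# Example: 5 of the candidate's 6 unigrams occur in the reference.
p1 = ngram_precision("the cat sat on the mat", "the cat is on the mat", 1)
```

- Real metrics of this family (BLEU, NIST) combine such precisions over several n-gram orders, use multiple reference translations, and penalize overly short candidates.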
- (Allan et al., 1998) ⇒ James Allan, Jaime Carbonell, George Doddington, Jonathan Yamron, and Yiming Yang. (1998). “Topic Detection and Tracking Pilot Study: Final Report.” In: Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop.