Bilingual Evaluation Understudy (BLEU) Performance Measure
A Bilingual Evaluation Understudy (BLEU) Performance Measure is a precision-based, n-gram-based text generation performance measure that evaluates generated texts against reference texts.
- AKA: BLEU Metric, BLEU Evaluation Metric, Bilingual Evaluation Understudy.
- Context:
- Metric Input: Generated Text Output, Reference Text Set, BLEU Configuration Parameters
- Metric Output: BLEU Score, BLEU Precision Value, BLEU Brevity Penalty, BLEU N-gram Precision
- Metric Performance Measure: BLEU Human Correlation Coefficients with human quality judgments
- ...
- It can typically calculate BLEU Modified Precision through BLEU clipped n-gram counting.
- It can typically apply BLEU Brevity Penalty through BLEU length ratio computation.
- It can typically compute BLEU Geometric Mean through BLEU n-gram precision averaging (see the formula sketch after this Context section).
- It can typically evaluate BLEU Corpus-Level Scores through BLEU segment aggregation.
- ...
- It can often assess BLEU Text Fluency through BLEU higher-order n-grams.
- It can often measure BLEU Content Adequacy through BLEU unigram matching.
- It can often support BLEU Multi-Reference Evaluation through BLEU reference set comparison.
- It can often enable BLEU Statistical Significance Testing through BLEU bootstrap resampling.
- ...
- It can range from being a Document-Level BLEU Measure to being a Sentence-Level BLEU Measure, depending on its BLEU evaluation granularity.
- It can range from being a Case-Sensitive BLEU Measure to being a Case-Insensitive BLEU Measure, depending on its BLEU matching criteria.
- It can range from being a Tokenized BLEU Measure to being a Detokenized BLEU Measure, depending on its BLEU preprocessing method.
- It can range from being a Standard BLEU Measure to being a Smoothed BLEU Measure, depending on its BLEU zero-count handling.
- ...
- It can be implemented by a BLEU Evaluation System using BLEU scoring algorithms.
- It can be standardized through BLEU SacreBLEU Implementation for BLEU reproducible evaluation (see the usage sketch before the References section).
- It can be optimized in BLEU System Training through BLEU optimization objectives.
- It can be validated against BLEU Human Evaluation Studies using BLEU benchmark datasets.
- ...
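A minimal formula sketch of how these components combine, following (Papineni et al., 2002): clipped n-gram counts yield the modified precisions p_n, the length ratio between candidate and effective reference yields the brevity penalty, and the final score is the brevity penalty times a weighted geometric mean of the precisions (uniform weights, N = 4 in standard BLEU-4).

```latex
% Modified n-gram precision from clipped counts over the candidate corpus C:
\[
p_n =
\frac{\sum_{S \in C} \sum_{\mathrm{ngram} \in S} \mathrm{Count}_{\mathrm{clip}}(\mathrm{ngram})}
     {\sum_{S \in C} \sum_{\mathrm{ngram} \in S} \mathrm{Count}(\mathrm{ngram})}
\]

% Brevity penalty from total candidate length c and effective reference length r:
\[
BP =
\begin{cases}
1             & \text{if } c > r \\
e^{\,1 - r/c} & \text{if } c \le r
\end{cases}
\]

% Final score: brevity penalty times the geometric mean of the n-gram precisions:
\[
\mathrm{BLEU} = BP \cdot \exp\Bigl(\sum_{n=1}^{N} w_n \log p_n\Bigr),
\qquad w_n = \tfrac{1}{N},\ N = 4 \text{ for standard BLEU-4}.
\]
```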
- Example(s):
- BLEU Standard Variants (illustrated in the code sketch after this example list), such as:
- BLEU-1, which measures BLEU unigram precision for BLEU word-level accuracy.
- BLEU-2, which evaluates BLEU bigram precision for BLEU phrase-level fluency.
- BLEU-3, which computes BLEU trigram precision for BLEU local coherence.
- BLEU-4, which calculates BLEU 4-gram precision for BLEU standard evaluation.
- BLEU Modified Variants, such as:
- iBLEU (Interactive BLEU), which supports BLEU interactive debugging and BLEU incremental scoring.
- SacreBLEU, which provides BLEU standardized tokenization for BLEU reproducible results.
- Self-BLEU, which measures BLEU text diversity in BLEU generation tasks.
- NIST Metric, which modifies BLEU n-gram weighting based on BLEU information content.
- BLEU Application-Specific Usages, such as:
- BLEU Machine Translation Evaluation for BLEU translation quality assessment.
- BLEU Image Captioning Evaluation for BLEU caption quality measurement.
- BLEU Dialogue Generation Evaluation for BLEU response quality scoring.
- BLEU Text Summarization Evaluation for BLEU summary quality assessment.
- BLEU Paraphrase Generation Evaluation for BLEU paraphrase quality measurement.
- the original BLEU Measure, initially proposed for machine translation evaluation in (Papineni et al., 2002).
- ...
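As referenced for the BLEU Standard Variants above, the following is a minimal Python sketch of sentence-level scoring, assuming the NLTK library (nltk.translate.bleu_score) is available; the example sentences and weight table are illustrative only. It shows how BLEU-1 through BLEU-4 differ only in their n-gram weight vectors, and how a Smoothed BLEU Measure avoids zero scores when a higher-order n-gram has no match.

```python
# Minimal sketch of sentence-level BLEU-1..BLEU-4 with NLTK (pip install nltk).
# The weight vector selects how many n-gram orders enter the geometric mean;
# the smoothing function handles zero n-gram counts at the sentence level.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [["the", "cat", "sat", "on", "the", "mat"]]   # tokenized reference(s)
candidate = ["the", "cat", "is", "on", "the", "mat"]       # tokenized system output

weights = {
    "BLEU-1": (1.0, 0.0, 0.0, 0.0),        # unigram precision only
    "BLEU-2": (0.5, 0.5, 0.0, 0.0),        # unigrams + bigrams
    "BLEU-3": (1/3, 1/3, 1/3, 0.0),        # up to trigrams
    "BLEU-4": (0.25, 0.25, 0.25, 0.25),    # standard 4-gram BLEU
}

smoother = SmoothingFunction().method1      # one of NLTK's smoothing strategies
for name, w in weights.items():
    score = sentence_bleu(references, candidate,
                          weights=w, smoothing_function=smoother)
    print(f"{name}: {score:.4f}")
```

Without smoothing, the BLEU-4 score of a single short sentence often collapses to 0 because no matching 4-gram exists, which is why Sentence-Level BLEU Measures are usually smoothed while Corpus-Level BLEU Measures typically are not.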
- Counter-Example(s):
- ROUGE Metric, which emphasizes recall-based evaluation rather than BLEU precision-based evaluation.
- METEOR Metric, which includes synonym matching and stemming unlike BLEU exact matching.
- BERTScore, which uses contextual embeddings instead of BLEU surface-level n-grams.
- CIDEr Metric, which incorporates TF-IDF weighting for image captioning evaluation.
- Perplexity Measure, which evaluates language model quality rather than BLEU generation quality.
- See: Text Generation Evaluation, Machine Translation, Image Captioning, Dialogue Generation, Precision Metric, N-gram Matching, Automatic Evaluation.
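As noted in the Context above for the BLEU SacreBLEU Implementation, the following is a minimal corpus-level sketch, assuming the sacrebleu Python package is installed and used via its documented corpus_bleu interface; the hypothesis and reference strings are illustrative only. sacrebleu applies its own standardized tokenization to detokenized text, reports scores on a 0-100 scale, and accepts several parallel reference streams for BLEU Multi-Reference Evaluation.

```python
# Minimal corpus-level sketch with sacrebleu (pip install sacrebleu).
# sacrebleu tokenizes the detokenized strings internally with a fixed scheme,
# which is what makes the resulting scores reproducible across systems.
import sacrebleu

hypotheses = [
    "the cat is on the mat",
    "there is a dog in the garden",
]

# Each inner list is one complete reference stream, parallel to the hypotheses;
# supplying more than one stream gives a multi-reference evaluation.
references = [
    ["the cat sat on the mat", "a dog is in the garden"],
    ["the cat is sitting on the mat", "there is a dog in the garden"],
]

result = sacrebleu.corpus_bleu(hypotheses, references)
print(result.score)   # corpus-level BLEU on a 0-100 scale
print(result)         # score plus n-gram precision and brevity-penalty details
```

Because the tokenization and reference handling are fixed by the tool rather than by each experimenter, two groups scoring the same test set should report the same Detokenized BLEU Measure, which is the reproducibility property this implementation standardizes.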
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/BLEU Retrieved:2018-8-27.
- BLEU (bilingual evaluation understudy) is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Quality is considered to be the correspondence between a machine's output and that of a human: "the closer a machine translation is to a professional human translation, the better it is" – this is the central idea behind BLEU. BLEU was one of the first metrics to claim a high correlation with human judgements of quality, and remains one of the most popular automated and inexpensive metrics.
Scores are calculated for individual translated segments — generally sentences — by comparing them with a set of good quality reference translations. Those scores are then averaged over the whole corpus to reach an estimate of the translation's overall quality. Intelligibility or grammatical correctness are not taken into account.
BLEU’s output is always a number between 0 and 1. This value indicates how similar the candidate text is to the reference texts, with values closer to 1 representing more similar texts. Few human translations will attain a score of 1, since this would indicate that the candidate is identical to one of the reference translations. For this reason, it is not necessary to attain a score of 1. Because there are more opportunities to match, adding additional reference translations will increase the BLEU score.
2017
- (Manning & Socher, 2017k) ⇒ Christopher Manning, and Richard Socher. (2017). “Lecture 11 - Further Topics in Neural Machine Translation and Recurrent Models.”
2011
- (Madnani, 2011) ⇒ Nitin Madnani. (2011). “iBLEU: Interactively Debugging and Scoring Statistical Machine Translation Systems.” In: Proceedings of the 5th IEEE International Conference on Semantic Computing (ICSC 2011). DOI: 10.1109/ICSC.2011.36.
2002
- (Papineni et al., 2002) ⇒ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. (2002). “Bleu: A Method for Automatic Evaluation of Machine Translation.” In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002). DOI: 10.3115/1073083.1073135.