2011 iBLEU: Interactively Debugging and Scoring Statistical Machine Translation Systems


Subject Headings: iBLEU Metric, BLEU Metric.

Notes

Cited By

Quotes

Abstract

Machine Translation (MT) systems are evaluated and debugged using the automated BLEU metric. However, the current community implementation of BLEU is not ideal for MT system developers and researchers since it only produces textual information. I present a novel tool called iBLEU that organizes BLEU scoring information in a visual and easy-to-understand manner, making it easier for MT system developers and researchers to quickly locate documents and sentences on which their system performs poorly. It also allows comparing translations from two different MT systems. Furthermore, one can choose to compare against publicly available MT systems, e.g., Google Translate and Bing Translator, with a single click. It runs on all major platforms and requires no setup whatsoever.
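As an illustration of the per-sentence scoring that a tool like iBLEU visualizes, the following minimal sketch (not iBLEU's own code) uses NLTK's sentence_bleu to score each hypothesis against its reference and list the lowest-scoring sentences first, mirroring how a developer would locate translations on which the system performs poorly. The example sentences and the smoothing choice are assumptions made for the demo.

  # Sketch: per-sentence BLEU scoring to surface poorly translated sentences.
  from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

  # Toy hypothesis/reference pairs (illustrative only).
  hypotheses = [
      "the cat sits on the mat",
      "a dog barked loud at night",
  ]
  references = [
      "the cat sat on the mat",
      "the dog barked loudly at night",
  ]

  # Smoothing avoids zero scores on short sentences with missing n-grams.
  smooth = SmoothingFunction().method1

  scored = []
  for i, (hyp, ref) in enumerate(zip(hypotheses, references)):
      score = sentence_bleu([ref.split()], hyp.split(),
                            smoothing_function=smooth)
      scored.append((score, i, hyp))

  # Sort ascending so the worst translations (lowest BLEU) come first.
  for score, i, hyp in sorted(scored):
      print(f"sentence {i}: BLEU={score:.3f}  {hyp!r}")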

References

BibTeX

@inproceedings{2011_IBLEUInteractivelyDebuggingandS,
  author    = {Nitin Madnani},
  title     = {iBLEU: Interactively Debugging and Scoring Statistical Machine Translation
               Systems},
  booktitle = {Proceedings of the 5th IEEE International Conference on Semantic
               Computing (ICSC 2011)},
  address   = {Palo Alto, CA, USA},
  pages     = {213--214},
  publisher = {IEEE Computer Society},
  year      = {2011},
  month     = {September},
  url       = {https://doi.org/10.1109/ICSC.2011.36},
  doi       = {10.1109/ICSC.2011.36},
}

