Language Representation Learning Task
A Language Representation Learning Task is a representation learning task whose output is a learned language model.
- Context:
  - It can be solved by a Language Representation Learning System by implementing a Language Representation Learning Algorithm (a minimal code sketch appears after the See list below).

- Example(s):
  - a Neural Language Representation Task,
  - a Biomedical Language Representation Task, such as:
    - a BioBERT Task;
  - an ERNIE Task.
  - …

- Counter-Example(s):
- See: Word-level Embedding, Character-level Embedding, Formal Language, Natural Language, Natural Language Processing Task, Natural Language Understanding Task.
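
The sketch below illustrates what such a task looks like in practice: a masked-language-modeling objective of the kind used to pre-train BERT-style language representation models such as BioBERT. It is a minimal illustration assuming the Hugging Face transformers and PyTorch packages are installed; the checkpoint name, example sentence, and masked position are illustrative choices, not taken from the cited works.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative checkpoint; a BioBERT-style model would be loaded the same way.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "Language representation learning maps text to dense vectors."
inputs = tokenizer(text, return_tensors="pt")

# The task: hide one token and learn representations that recover it.
labels = inputs["input_ids"].clone()
masked_pos = 3  # arbitrary position, chosen for illustration
inputs["input_ids"][0, masked_pos] = tokenizer.mask_token_id
labels[inputs["input_ids"] != tokenizer.mask_token_id] = -100  # score loss only at the mask

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # one gradient step of the learning algorithm
print(f"masked-LM loss: {outputs.loss.item():.3f}")
```

In this framing, the task is the masked-prediction objective itself, the system is the model plus its training loop, and the algorithm is the gradient-based masked-LM training procedure.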
 
References
2020
- (Lee et al., 2020) ⇒ Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang (2020). “BioBERT: A Pre-Trained Biomedical Language Representation Model For Biomedical Text Mining". In: Bioinformatics, 36(4), 1234-1240.
 
2019a
- (Bjerva et al., 2019) ⇒ Johannes Bjerva, Robert Östling, Maria Han Veiga, Jörg Tiedemann, and Isabelle Augenstein (2019). “What do Language Representations Really Represent?". In: Computational Linguistics, 45(2), 381-389.
 
2019b
- (Zhang et al., 2019) ⇒ Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu (2019). “ERNIE: Enhanced Language Representation with Informative Entities". In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL). DOI: 10.18653/v1/P19-1139.