Pretrained Language Model (LM)
A Pretrained Language Model (LM) is a language model that has first been trained on a large general-purpose text corpus so that it can later be adapted (for example, by fine-tuning) to downstream tasks.
- Context:
 - It can range from (typically) being a Large Pretrained Language Model (LM) to being a Small Pretrained Language Model (LM).
 - It can range from being a Base Pretrained LM to being a Fine-Tuned Pretrained LM.
 - ...
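The Base vs. Fine-Tuned distinction above can be sketched with a toy count-based model (an illustration only; real pretrained LMs are neural networks trained on large corpora, and the class and corpora below are hypothetical):

```python
from collections import defaultdict

class BigramLM:
    """Toy count-based bigram language model (illustration only)."""

    def __init__(self):
        # counts[a][b] = how often token b followed token a.
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, corpus):
        # Accumulate bigram counts; calling train() again on new text
        # mimics "fine-tuning" a pretrained (base) model on domain data.
        for sentence in corpus:
            tokens = sentence.split()
            for a, b in zip(tokens, tokens[1:]):
                self.counts[a][b] += 1
        return self

    def predict_next(self, token):
        # Most frequent observed continuation of `token`, or None.
        following = self.counts.get(token)
        if not following:
            return None
        return max(following, key=following.get)

# "Pretrain" a base model on a general corpus ...
base = BigramLM().train(["the cat sat", "the dog ran"])
# ... and "fine-tune" a copy of it on domain-specific text.
tuned = BigramLM().train(["the cat sat", "the dog ran"]).train(
    ["the model learns", "the model predicts"])
```

After the extra training pass, the fine-tuned model's predictions shift toward the domain corpus (`tuned.predict_next("the")` returns `"model"`), while the base model retains only its general-corpus statistics.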
 
 - Example(s):
 - a Large Pretrained Language Model (LLM), such as: GPT-3.
 - a Base Pretrained Language Model, such as: GPT-1.
 - a Fine-Tuned Pretrained Language Model, such as: GPT-3.5.
 - …
 
 - Counter-Example(s):
 - a Language Model trained from scratch on the target task, without a pretraining stage.
 - See: Language Model Metamodel, ULMFiT.