Meta-Learning Task
A Meta-Learning Task is a machine learning task in which models are trained to learn how to learn, by generalizing across a distribution of learning tasks.
- AKA: Learning-to-Learn Task, Few-Shot Generalization Task.
- Context:
- Task Input: A distribution of task datasets (each with support and query sets).
- Optional Input: Task-specific hyperparameters, base learner configuration.
- Task Output: A learning algorithm or model that performs well on unseen tasks.
- Task Performance Measures: Accuracy on query sets after adaptation, generalization gap, few-shot test error.
- Task Objective: To optimize the learning process so the resulting model adapts quickly and accurately to new tasks.
- It can be systematically solved and automated by a Metalearning System.
- It can simulate the process of learning to adapt by training over multiple meta-train tasks.
- It can be instantiated through N-way K-shot learning, supervised regression tasks, or reinforcement learning environments.
- It can assess how quickly and accurately a system can generalize with limited data from novel tasks.
- ...
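The N-way K-shot instantiation above can be made concrete with an episode sampler: each episode draws N classes and splits their examples into a support set (for adaptation) and a query set (for evaluation). The sketch below is illustrative, not a reference implementation; the function name `sample_episode` and the dict-of-lists dataset layout are assumptions for this example.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, q_queries=5):
    """Sample one N-way K-shot episode (hypothetical helper).

    `dataset` maps class label -> list of examples (assumed layout).
    Returns a support set of N*K examples and a query set of N*q examples.
    """
    classes = random.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        # Draw K support and q query examples from this class, without overlap.
        examples = random.sample(dataset[label], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query

# Toy dataset: 10 classes with 20 examples each.
toy = {c: [f"img_{c}_{i}" for i in range(20)] for c in range(10)}
support, query = sample_episode(toy, n_way=5, k_shot=1, q_queries=5)
print(len(support), len(query))  # 5 support examples, 25 query examples
```

A meta-training loop would repeatedly sample such episodes from the meta-train task distribution, adapt on each support set, and score on the corresponding query set.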
- Task Input: A distribution of task datasets (each with support and query sets).
- Example(s):
- Few-shot image classification task, which requires learning from only a few labeled examples per class.
- Meta-reinforcement learning task, where the agent must learn to solve new MDPs using prior episodes.
- Few-shot NLP task, such as slot-filling with minimal annotated samples.
- ...
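The adapt-then-evaluate pattern shared by these examples can be sketched with a Reptile-style outer loop on a toy task distribution (linear regression tasks with random slope and intercept). This is a minimal sketch under those assumptions, not any specific published implementation; all names (`sample_task`, `inner_sgd`, `mse`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is linear regression y = a*x + b with task-specific a, b (toy distribution)."""
    a, b = rng.uniform(-2, 2, size=2)
    x = rng.uniform(-1, 1, size=10)
    return x, a * x + b

def mse(params, x, y):
    """Mean squared error of the linear model params = (w, c) on (x, y)."""
    return float(np.mean((params[0] * x + params[1] - y) ** 2))

def inner_sgd(params, x, y, lr=0.1, steps=5):
    """Adapt (w, c) to one task with a few full-batch gradient steps on MSE."""
    w, c = params
    for _ in range(steps):
        err = (w * x + c) - y
        w -= lr * 2 * np.mean(err * x)
        c -= lr * 2 * np.mean(err)
    return np.array([w, c])

# Reptile-style outer loop: nudge meta-parameters toward each task's adapted parameters.
meta = np.zeros(2)
for _ in range(1000):
    x, y = sample_task()
    adapted = inner_sgd(meta.copy(), x, y)
    meta += 0.1 * (adapted - meta)

# Meta-test: a few inner steps from `meta` should reduce error on an unseen task.
x, y = sample_task()
adapted = inner_sgd(meta.copy(), x, y)
print(mse(meta, x, y), "->", mse(adapted, x, y))
```

The query-set error after adaptation, averaged over held-out tasks, is exactly the kind of performance measure listed in the Context section above.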
- Counter-Example(s):
- Single-task learning problems, which do not involve task generalization.
- Transfer learning tasks, which rely on a fixed source-target task pair rather than a task distribution.
- ...
- See: Few-shot learning task, Multi-task learning task, Transfer learning, Metalearning System.