Multi-Task Learning Task


A Multi-Task Learning Task is a learning task that requires solving several related inference tasks jointly, typically by sharing a representation across them.



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/multi-task_learning Retrieved:2015-3-22.
    • Multi-task learning (MTL) is an approach to machine learning in which a problem is learned together with other related problems at the same time, using a shared representation. This often leads to a better model for the main task, because it allows the learner to exploit the commonality among the tasks. [1] [2] [3] Multi-task learning is therefore a form of inductive transfer: it improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help the other tasks be learned better. [4] The goal of MTL is to improve the performance of learning algorithms by learning classifiers for multiple tasks jointly. This works particularly well if the tasks have some commonality and are each somewhat undersampled. One example is a spam filter: every user has a slightly different distribution over spam and non-spam emails (e.g., all emails in Russian are spam for me, but not for my Russian colleagues), yet there is a common aspect across users. Multi-task learning works because encouraging a classifier (or a modification thereof) to also perform well on a slightly different task is a better regularizer than uninformed regularizers (e.g., enforcing that all weights are small). [5] A minimal sketch of this shared-representation setup is given after the footnotes below.
  1. Baxter, J. (2000). "A Model of Inductive Bias Learning." Journal of Artificial Intelligence Research, 12:149–198.
  2. Caruana, R. (1997). "Multitask Learning: A Knowledge-Based Source of Inductive Bias." Machine Learning, 28:41–75.
  3. Thrun, S. (1996). "Is Learning the n-th Thing Any Easier Than Learning the First?" In Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press.
  4. http://www.cs.cornell.edu/~caruana/mlj97.pdf
  5. http://www.cse.wustl.edu/~kilian/research/multitasklearning/multitasklearning.html
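
  The shared-representation setup described in the quoted passage can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example and is not taken from any of the cited papers: the class name, layer sizes, the choice of one classification head plus one regression head, and the synthetic data are all illustrative assumptions. A shared trunk feeds two task-specific heads, and the joint loss sums the per-task losses, so each task's training signal acts as an inductive bias on the representation learned for the other.

    # Minimal hard-parameter-sharing sketch (illustrative assumptions throughout):
    # a shared trunk feeds two task-specific heads; the joint loss is the sum of
    # the per-task losses, so both training signals shape the shared representation.
    import torch
    import torch.nn as nn

    class SharedMultiTaskNet(nn.Module):
        def __init__(self, in_dim=20, hidden=64, n_classes=2):
            super().__init__()
            # Shared representation used by every task.
            self.trunk = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            # Task-specific heads: here, one classification and one regression task.
            self.cls_head = nn.Linear(hidden, n_classes)
            self.reg_head = nn.Linear(hidden, 1)

        def forward(self, x):
            h = self.trunk(x)
            return self.cls_head(h), self.reg_head(h)

    # Synthetic stand-in data: two related tasks defined on the same inputs.
    X = torch.randn(256, 20)
    y_cls = torch.randint(0, 2, (256,))         # task 1: binary labels
    y_reg = X[:, :5].sum(dim=1, keepdim=True)   # task 2: real-valued targets

    model = SharedMultiTaskNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    cls_loss, reg_loss = nn.CrossEntropyLoss(), nn.MSELoss()

    for step in range(200):
        opt.zero_grad()
        logits, preds = model(X)
        # Joint objective: each task regularizes the shared trunk for the other.
        loss = cls_loss(logits, y_cls) + reg_loss(preds, y_reg)
        loss.backward()
        opt.step()

  Summing the losses is the simplest joint objective; in practice the per-task losses are often weighted, since a task with a larger loss scale can otherwise dominate the shared trunk.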
