Domain Adaptable Learning Algorithm

A [[Domain Adaptable Learning Algorithm|domain adaptable learning algorithm]] is a [[Learning Algorithm|learning algorithm]] that can solve a [[Domain Adaptable Learning Task]].
* <U>AKA</U>: [[Transfer Learning Algorithm]].
* <U>Context</U>:
** It can (typically) be a [[Fully-Supervised Learning Algorithm]] (one simple instance is sketched below).
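For concreteness, here is a minimal sketch of one such algorithm: the feature-augmentation method of (Daumé III, 2007), cited under References below. Each source example is mapped to [x, x, 0] and each target example to [x, 0, x] (a shared copy plus a domain-specific copy), after which an ordinary fully-supervised learner is trained on the augmented features. The toy data and the choice of logistic regression are illustrative assumptions, not part of the cited method.

<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

def augment(X, domain):
    """Daumé III (2007) feature augmentation:
    source rows -> [x, x, 0]; target rows -> [x, 0, x].
    The first copy is shared across domains; the others are domain-specific."""
    zeros = np.zeros_like(X)
    if domain == "source":
        return np.hstack([X, X, zeros])
    return np.hstack([X, zeros, X])

# Hypothetical toy data: plentiful labeled source, scarce labeled target.
rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 5))
y_src = (X_src[:, 0] > 0.0).astype(int)
X_tgt = rng.normal(0.5, 1.0, size=(20, 5))
y_tgt = (X_tgt[:, 0] > 0.5).astype(int)

# Any fully-supervised learner can now be trained on the augmented space.
X_train = np.vstack([augment(X_src, "source"), augment(X_tgt, "target")])
y_train = np.concatenate([y_src, y_tgt])
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Target-domain predictions use the target augmentation pattern.
preds = clf.predict(augment(X_tgt, "target"))
</pre>

Because the shared copy is active in both domains, weights learned on it transfer across domains, while the domain-specific copies absorb behavior unique to each domain.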

References

2009

  • (Chen & al, 2009) ⇒ Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. (2009). "Extracting Discriminative Concepts for Domain Adaptation in Text Mining." In: Proceedings of ACM SIGKDD Conference (KDD 2009). doi:10.1145/1557019.1557045
    • ... Several domain adaptation methods have been proposed to learn a reasonable representation so as to make the distributions between the source domain and the target domain closer [3, 12, 13, 11].
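The "closeness" of source and target distributions referenced in this passage needs a concrete measure; one common choice is the Maximum Mean Discrepancy (MMD), which also underlies the dimensionality-reduction approach of (Pan & al, 2008) below. A minimal numpy sketch of the biased empirical MMD estimate, with an assumed RBF kernel and hypothetical sample matrices X_src and X_tgt:

<pre>
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2).
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(X_src, X_tgt, gamma=1.0):
    """Biased empirical estimate of the squared Maximum Mean Discrepancy
    between source and target samples (lower = closer distributions)."""
    return (rbf_kernel(X_src, X_src, gamma).mean()
            - 2.0 * rbf_kernel(X_src, X_tgt, gamma).mean()
            + rbf_kernel(X_tgt, X_tgt, gamma).mean())
</pre>

Representation-learning methods of the kind the quote describes search for a feature transformation that makes a quantity like this small while preserving information useful for the supervised task.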

2008

  • (Pan & al, 2008) ⇒ S. J. Pan, J. T. Kwok, and Q. Yang. (2008). "Transfer Learning via Dimensionality Reduction." In: Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI 2008).

2007

  • (Daumé III, 2007) ⇒ Hal Daumé III. (2007). "Frustratingly Easy Domain Adaptation." In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007).
  • (Raina & al, 2007) ⇒ R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. (2007). "Self-taught Learning: Transfer Learning from Unlabeled Data." In: Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007).
  • (Satpal & Sarawagi, 2007) ⇒ S. Satpal and Sunita Sarawagi. (2007). "Domain Adaptation of Conditional Probability Models via Feature Subsetting." In: Proceedings of the European Conference on Principles and Practice of Knowledge Discovery in Databases (PKDD 2007).

2006

  • (Blitzer & al, 2006) ⇒ J. Blitzer, R. McDonald, and Fernando Pereira. (2006). "Domain Adaptation with Structural Correspondence Learning." In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2006).
  • (Daumé III & Marcu, 2006) ⇒ Hal Daumé III, and Daniel Marcu. (2006). "Domain Adaptation for Statistical Classifiers." In: Journal of Artificial Intelligence Research, 26 (JAIR 26).
    • The most basic assumption used in statistical learning theory is that training data and test data are drawn from the same underlying distribution. Unfortunately, in many applications, the "in-domain" test data is drawn from a distribution that is related, but not identical, to the "out-of-domain" distribution of the training data. We consider the common case in which labeled out-of-domain data is plentiful, but labeled in-domain data is scarce. We introduce a statistical formulation of this problem in terms of a simple mixture model and present an instantiation of this framework to maximum entropy classifiers and their linear chain counterparts. We present efficient inference algorithms for this special case based on the technique of conditional expectation maximization. Our experimental results show that our approach leads to improved performance on three real world tasks on four different data sets from the natural language processing domain.
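The following sketch illustrates only the problem setup this abstract describes (plentiful labeled out-of-domain data, scarce labeled in-domain data), not the paper's mixture-model algorithm itself; it compares a maximum-entropy-style classifier (logistic regression) trained on out-of-domain data alone against naive pooling of both label sources. All data and parameters here are hypothetical.

<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_domain(n, shift):
    # Hypothetical data whose decision boundary moves with the domain shift.
    X = rng.normal(shift, 1.0, size=(n, 4))
    y = (X.sum(axis=1) > 4 * shift).astype(int)
    return X, y

X_out, y_out = make_domain(2000, 0.0)    # plentiful labeled out-of-domain
X_in, y_in = make_domain(30, 0.8)        # scarce labeled in-domain
X_test, y_test = make_domain(1000, 0.8)  # in-domain evaluation data

for name, (X, y) in {
    "out-of-domain only": (X_out, y_out),
    "naively pooled": (np.vstack([X_out, X_in]),
                       np.concatenate([y_out, y_in])),
}.items():
    acc = LogisticRegression(max_iter=1000).fit(X, y).score(X_test, y_test)
    print(f"{name}: in-domain accuracy = {acc:.3f}")
</pre>

With an imbalance this large, pooling typically barely moves the learned decision boundary toward the in-domain distribution, which is exactly the gap that a dedicated formulation such as the cited mixture model is designed to close.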