Empirical Loss Function

An [[Empirical Loss Function]] is a [[Function]] that quantifies the [[Error]]s over a [[Training Data Set]] (a common formalization is sketched below).
* <U>AKA</U>: [[Empirical Loss]].
* <B><U>Counter-Example(s)</U>:</B>
** [[Expected Loss Function]] (over the [[Training Set]] and [[Testing Set]]).
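
As a sketch, for a [[Training Data Set]] of <math>n</math> labeled examples <math>(x_i, y_i)</math> and a candidate function <math>f</math>, the empirical loss is commonly written as the average of a pointwise loss <math>\ell</math> (the symbols <math>f</math>, <math>\ell</math>, <math>x_i</math>, and <math>y_i</math> here are illustrative, not notation fixed by this page):
:<math>\hat{L}_n(f) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i), y_i\big)</math>
For instance, taking <math>\ell</math> to be the zero-one loss makes <math>\hat{L}_n(f)</math> the training error rate, while the squared loss makes it the mean squared error on the training data.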

References

2009

  • (Chen et al., 2009) ⇒ Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. (2009). "Extracting Discriminative Concepts for Domain Adaptation in Text Mining." In: Proceedings of ACM SIGKDD Conference (KDD 2009). doi:10.1145/1557019.1557045
    • ... we propose a domain adaptation method that parameterizes this concept space by linear transformation under which we explicitly minimize the distribution difference between the source domain with sufficient labeled data and target domains with only unlabeled data, while at the same time minimizing the empirical loss on the labeled data in the source domain.
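
Stated generically (a schematic objective assumed for illustration, not Chen et al.'s exact formulation), such domain adaptation methods minimize a weighted combination of a distribution-difference term and the empirical loss on labeled source data:
:<math>\min_{W}\; D_{W}\big(P_{\mathrm{src}}, P_{\mathrm{tgt}}\big) + \lambda\, \hat{L}_{\mathrm{src}}(W)</math>
where <math>W</math> is the linear transformation, <math>D_{W}</math> measures the difference between the transformed source and target distributions, and <math>\lambda</math> trades off the two terms; all symbols here are assumptions.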

2004

  • (Zhao & Yu, 2004) ⇒ Peng Zhao, and Bin Yu. (2004). "Boosted Lasso." Technical Report, Statistics Department, U.C. Berkeley.
    • ... The motivation comes from a critical observation that both FSF and Boosting only work in a forward fashion (so is FSF named). They always take steps that reduce empirical loss the most regardless of the impact on model complexity (or the L1 penalty in the Lasso case).
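
As a generic sketch of the greedy step described in the quote (not Zhao & Yu's exact notation), a forward stagewise iteration picks the candidate basis function <math>g_j</math> and signed step <math>s</math> of fixed size <math>\varepsilon</math> that most reduce the empirical loss, ignoring any change in the L1 penalty:
:<math>(\hat{j}, \hat{s}) = \arg\min_{j,\; s \in \{-\varepsilon, +\varepsilon\}} \hat{L}_n(f + s\, g_j), \qquad f \leftarrow f + \hat{s}\, g_{\hat{j}}</math>
It is this one-sided, loss-only selection rule that the Boosted Lasso modifies, by also allowing backward steps that account for model complexity.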