Sparse Learning Task

From GM-RKB

See: Sparse Learning.



References

2008

  • (Friedman et al., 2008) ⇒ Jerome Friedman, Trevor Hastie, and Robert Tibshirani. (2008). “Sparse Inverse Covariance Estimation with the Graphical Lasso.” In: Biostatistics, 9(3). (doi:10.1093/biostatistics/kxm045).
    • We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm — the graphical lasso — that is remarkably fast: It solves a 1000-node problem (~500000 parameters) in at most a minute and is 30–4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
    • We simulated Gaussian data from both sparse and dense scenarios, for a range of problem sizes p. The sparse scenario is the AR(1) model taken from Yuan & Lin (2007): β_ii = 1, β_{i,i−1} = β_{i−1,i} = 0.5, and zero otherwise. In the dense scenario, β_ii = 2, β_{ii′} = 1 otherwise.
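    • The simulation setup quoted above can be sketched as follows. This is not the authors' code; it uses scikit-learn's GraphicalLasso, a standard implementation of the coordinate-descent graphical lasso, and the alpha value and problem sizes (p = 20, n = 500) are illustrative choices, not taken from the paper.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p, n = 20, 500  # illustrative problem size, smaller than the paper's

# Sparse AR(1) scenario from Yuan & Lin (2007):
# beta_ii = 1, beta_{i,i-1} = beta_{i-1,i} = 0.5, zero otherwise.
precision = np.eye(p)
idx = np.arange(p - 1)
precision[idx, idx + 1] = 0.5
precision[idx + 1, idx] = 0.5

# Draw Gaussian samples with covariance equal to the inverse of
# the tridiagonal precision matrix above.
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)

# Fit the graphical lasso; alpha is the l1 penalty on the
# off-diagonal entries of the estimated inverse covariance.
model = GraphicalLasso(alpha=0.1).fit(X)
est = model.precision_
```

    The estimated precision matrix `est` should recover the banded sparsity pattern: large entries on the diagonal and first off-diagonal, with most remaining off-diagonal entries shrunk to zero by the penalty.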

2007

  • (Yuan & Lin, 2007) ⇒ M. Yuan, and Y. Lin. (2007). “Model Selection and Estimation in the Gaussian Graphical Model.” In: Biometrika, 94(1).

2005