sklearn.linear_model.LassoLars
A sklearn.linear_model.LassoLars is a LASSO-LARS System within the sklearn.linear_model module.
- Context:
- Usage:
 
 
- 1) Import the LassoLars model from scikit-learn: 
from sklearn.linear_model import LassoLars
- 2) Create design matrix X and response vector y
- 3) Create a LassoLars object: 
model = LassoLars([alpha=1.0, fit_intercept=True, verbose=False, normalize=True, precompute='auto', ...])
- 4) Choose method(s):
 - fit(X, y[, Xy]), fits the model using X, y as training data.
 - get_params([deep]), gets the parameters for this estimator.
 - predict(X), predicts using the linear model.
 - score(X, y[, sample_weight]), returns the coefficient of determination R^2 of the prediction.
 - set_params(**params), sets the parameters of this estimator.
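The steps above can be sketched end-to-end. The synthetic data and the alpha value below are illustrative assumptions, not from the source:

```python
# Minimal end-to-end sketch of the usage steps above. The synthetic data
# and the alpha value are illustrative assumptions, not from the source.
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.RandomState(0)
X = rng.randn(50, 3)                                       # 2) design matrix X
y = X @ np.array([1.5, 0.0, -2.0]) + 0.1 * rng.randn(50)   # 2) response vector y

model = LassoLars(alpha=0.01)     # 3) create the estimator
model.fit(X, y)                   # 4) fit(X, y): fit on training data

preds = model.predict(X[:2])      # 4) predict(X): predictions for two samples
r2 = model.score(X, y)            # 4) score(X, y): coefficient of determination R^2
params = model.get_params()       # 4) get_params(): estimator parameters as a dict
```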
 
- 1) Import the LassoLars model from scikit-learn:
 
| Input: | Output: |
| from sklearn import linear_model | LassoLars(alpha=0.1, copy_X=True, eps=..., fit_intercept=True, fit_path=True, max_iter=500, normalize=True, positive=False, precompute='auto', verbose=False) |
- Counter-Example(s):
 - See: Regression System, Regressor, Cross-Validation Task, Ridge Regression Task, Bayesian Analysis.
 
References
2017A
- (Scikit Learn, 2017) ⇒ http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LassoLars.html
- QUOTE: 
class sklearn.linear_model.LassoLars(alpha=1.0, fit_intercept=True, verbose=False, normalize=True, precompute='auto', max_iter=500, eps=2.2204460492503131e-16, copy_X=True, fit_path=True, positive=False) 
 - QUOTE: 
 
- Lasso model fit with Least Angle Regression, a.k.a. LARS.
 - It is a Linear Model trained with an L1 prior as regularizer.
 - The optimization objective for Lasso is:
 (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
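The quoted objective can be checked numerically. This is a hedged sketch on synthetic data: it fits LassoLars with fit_intercept=False (so the fitted model targets exactly the formula above) and verifies that small random perturbations of the fitted coefficients never lower the objective:

```python
# Hedged numerical check of the Lasso objective quoted above:
#   (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
# fit_intercept=False keeps the fitted model on exactly this objective.
# The data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LassoLars

def lasso_objective(X, y, w, alpha):
    n_samples = X.shape[0]
    return np.sum((y - X @ w) ** 2) / (2 * n_samples) + alpha * np.sum(np.abs(w))

rng = np.random.RandomState(0)
X = rng.randn(40, 5)
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.5]) + 0.05 * rng.randn(40)

alpha = 0.1
model = LassoLars(alpha=alpha, fit_intercept=False).fit(X, y)
best = lasso_objective(X, y, model.coef_, alpha)

# The LARS solution minimizes this convex objective, so nearby
# perturbations should never do better.
for _ in range(100):
    w = model.coef_ + 0.01 * rng.randn(5)
    assert lasso_objective(X, y, w, alpha) >= best
```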
2017B
- (Scikit Learn, 2017) ⇒ "1.1.8. LARS Lasso" http://scikit-learn.org/stable/modules/linear_model.html#lars-lasso Retrieved: 2017-11-05
- QUOTE: 
LassoLars is a lasso model implemented using the LARS algorithm, and unlike the implementation based on coordinate_descent, this yields the exact solution, which is piecewise linear as a function of the norm of its coefficients. 
 - QUOTE: 
 
- (...)
 - The algorithm is similar to forward stepwise regression, but instead of including variables at each step, the estimated parameters are increased in a direction equiangular to each one’s correlations with the residual.
 - Instead of giving a vector result, the LARS solution consists of a curve denoting the solution for each value of the L1 norm of the parameter vector. The full coefficients path is stored in the array 
coef_path_, which has size (n_features, max_features + 1). The first column is always zero.
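The coef_path_ behavior described above can be inspected directly; the data in this sketch is a synthetic illustration:

```python
# Sketch of the coef_path_ attribute described above; the data is a
# synthetic illustration. With fit_path=True (the default), the full
# piecewise-linear coefficient path is stored, one column per step.
import numpy as np
from sklearn.linear_model import LassoLars

rng = np.random.RandomState(0)
X = rng.randn(30, 4)
y = X @ np.array([1.0, -2.0, 0.0, 0.5]) + 0.1 * rng.randn(30)

model = LassoLars(alpha=0.01, fit_path=True).fit(X, y)

n_features, n_steps = model.coef_path_.shape   # rows: features; columns: path steps
first_column = model.coef_path_[:, 0]          # the path starts at all-zero coefficients
```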