
Scikit Learn - Elastic-Net
The Elastic-Net is a regularised regression method that linearly combines both penalties, i.e. the L1 and L2 penalties of the Lasso and Ridge regression methods. It is useful when there are multiple correlated features. The difference between Lasso and Elastic-Net lies in the fact that Lasso is likely to pick one of these features at random, while Elastic-Net is likely to pick both at once.
Sklearn provides a linear model named ElasticNet which is trained with both L1 and L2-norm regularisation of the coefficients. The advantage of such a combination is that it allows for learning a sparse model where few of the weights are non-zero, like the Lasso regularisation method, while still maintaining the regularisation properties of the Ridge method.
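The following is a minimal sketch of that behaviour; it is not part of this chapter's main example, and the toy data and alpha value are arbitrary assumptions chosen purely for illustration. Fitting Lasso and ElasticNet on two perfectly correlated features shows Lasso concentrating the weight on a single feature while Elastic-Net shares it between both −
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.RandomState(0)
x = rng.randn(100, 1)
X = np.hstack([x, x])   # two identical, i.e. perfectly correlated, features
y = 2.0 * x.ravel()

# Lasso tends to put (almost) all the weight on one of the duplicates
print(Lasso(alpha = 0.1).fit(X, y).coef_)

# Elastic-Net tends to spread the weight across both duplicates
print(ElasticNet(alpha = 0.1, l1_ratio = 0.5).fit(X, y).coef_)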
Following is the objective function to minimise −
$$\displaystyle\min\limits_{w}\frac{1}{2n_{samples}}\lVert Xw-y\rVert_2^2+\alpha\rho\lVert w\rVert_1+\frac{\alpha(1-\rho)}{2}\lVert w\rVert_2^2$$
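Here α in the formula corresponds to the alpha parameter and ρ to l1_ratio. As a minimal sketch, the objective translates directly into NumPy; the helper function below is a hand-written illustration, not part of sklearn −
import numpy as np

def elastic_net_objective(X, y, w, alpha = 1.0, rho = 0.5):
    # 1/(2 * n_samples) * ||Xw - y||_2^2 -- the least-squares data term
    residual = X @ w - y
    data_term = (residual @ residual) / (2 * X.shape[0])
    # alpha * rho * ||w||_1 -- the L1 (Lasso-style) penalty
    l1_term = alpha * rho * np.abs(w).sum()
    # alpha * (1 - rho) / 2 * ||w||_2^2 -- the L2 (Ridge-style) penalty
    l2_term = alpha * (1 - rho) / 2 * (w @ w)
    return data_term + l1_term + l2_term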
Parameters
The following table consists of the parameters used by the ElasticNet module −
Sr.No | Parameter & Description |
---|---|
1 | alpha − float, optional, default = 1.0. Alpha, the constant that multiplies the L1/L2 terms, is the tuning parameter that decides how much we want to penalise the model. |
2 | l1_ratio − float. This is called the ElasticNet mixing parameter. Its range is 0 <= l1_ratio <= 1. If l1_ratio = 1, the penalty would be an L1 penalty. If l1_ratio = 0, the penalty would be an L2 penalty. If the value of l1_ratio is between 0 and 1, the penalty would be a combination of L1 and L2 (see the sketch after this table). |
3 | fit_intercept − Boolean, optional, default = True. This parameter specifies that a constant (bias or intercept) should be added to the decision function. No intercept will be used in the calculation if it is set to False. |
4 | tol − float, optional. This parameter represents the tolerance for the optimisation: if the coefficient updates are smaller than tol, the optimisation checks the dual gap for optimality and continues until the dual gap is smaller than tol. |
5 | normalize − Boolean, optional, default = False. If this parameter is set to True, the regressor X will be normalised before regression by subtracting the mean and dividing by the L2-norm. If fit_intercept = False, this parameter will be ignored. |
6 | precompute − True|False|array-like, default = False. With this parameter we can decide whether to use a precomputed Gram matrix to speed up the calculation. For sparse input this option is always False, to preserve sparsity. |
7 | copy_X − Boolean, optional, default = True. By default it is True, which means X will be copied. But if it is set to False, X may be overwritten. |
8 | max_iter − int, optional. As the name suggests, it represents the maximum number of iterations taken by the coordinate descent solver. |
9 | warm_start − bool, optional, default = False. With this parameter set to True, we can reuse the solution of the previous call to fit as initialisation. If we choose the default, i.e. False, it will erase the previous solution. |
10 | random_state − int, RandomState instance or None, optional, default = None. This parameter represents the seed of the pseudo-random number generator used while shuffling the data (it is used when selection = 'random'). Following are the options − if int, the given value is used as the seed; if a RandomState instance, it is used as the random number generator; if None, the random number generator is the RandomState instance used by np.random. |
11 | selection − str, default = 'cyclic'. If set to 'random', a random coefficient is updated at every iteration rather than looping over the features sequentially, which often leads to significantly faster convergence, especially when tol is higher than 1e-4. |
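As referenced in the l1_ratio row above, here is a rough sketch of how the mixing parameter interpolates between the two penalties; the toy data, the alpha value, and the grid of mixing values are arbitrary assumptions for demonstration −
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = X[:, 0] * 3.0 + rng.randn(50) * 0.1   # only the first feature matters

for l1_ratio in (0.1, 0.5, 0.9):
    model = ElasticNet(alpha = 0.5, l1_ratio = l1_ratio).fit(X, y)
    # higher l1_ratio -> more L1 -> more coefficients driven exactly to zero
    print(l1_ratio, model.coef_)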
Attributes
The following table consists of the attributes used by the ElasticNet module −
Sr.No | Attributes & Description |
---|---|
1 | coef_ − array, shape (n_features,) or (n_targets, n_features). This attribute provides the weight vectors. |
2 | intercept_ − float or array, shape (n_targets,). It represents the independent term in the decision function. |
3 | n_iter_ − int. It gives the number of iterations run by the coordinate descent solver to reach the specified tolerance. |
Implementation Example
The following Python script uses the ElasticNet linear model, which in turn uses coordinate descent as the algorithm to fit the coefficients −
from sklearn import linear_model

ENreg = linear_model.ElasticNet(alpha = 0.5, random_state = 0)
ENreg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
Output
ElasticNet(alpha = 0.5, copy_X = True, fit_intercept = True, l1_ratio = 0.5,
max_iter = 1000, normalize = False, positive = False, precompute = False,
random_state = 0, selection = 'cyclic', tol = 0.0001, warm_start = False)
Example
Now, once fitted, the model can predict new values as follows −
ENreg.predict([[0, 1]])
Output
array([0.73686077])
Example
For the above example, we can get the weight vector with the help of the following Python script −
ENreg.coef_
Output
array([0.26318357, 0.26313923])
Example
Similarly, we can get the value of the intercept with the help of the following Python script −
ENreg.intercept_
Output
0.47367720941913904
Example
We can get the total number of iterations taken to reach the specified tolerance with the help of the following Python script −
ENreg.n_iter_
Output
15
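These attributes tie directly back to the decision function: a prediction is simply the dot product of the input with coef_ plus intercept_. A quick sanity check against the fitted ENreg from above, which should match the result of ENreg.predict −
import numpy as np

# y_hat = x . coef_ + intercept_, the linear decision function
x_new = np.array([0, 1])
print(x_new @ ENreg.coef_ + ENreg.intercept_)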
We can increase the value of alpha (towards 1) to see how a stronger penalty changes the model.
Example
Let us see the same example with alpha = 1.
from sklearn import linear_model

ENreg = linear_model.ElasticNet(alpha = 1, random_state = 0)
ENreg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
Output
ElasticNet(alpha = 1, copy_X = True, fit_intercept = True, l1_ratio = 0.5,
max_iter = 1000, normalize = False, positive = False, precompute = False,
random_state = 0, selection = 'cyclic', tol = 0.0001, warm_start = False)

#Predicting new values
ENreg.predict([[1, 0]])
Output
array([0.90909216])

#weight vectors
ENreg.coef_
Output
array([0.09091128, 0.09090784])

#Calculating intercept
ENreg.intercept_
Output
0.818180878658411

#Calculating number of iterations
ENreg.n_iter_
Output
10
From the above examples, we can see the difference in the outputs: with the stronger penalty (alpha = 1), the weight vector is shrunk much closer to zero and the intercept absorbs more of the target's mean.
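To see this shrinkage as a trend rather than as two isolated runs, the following sketch sweeps alpha over a small grid on the same toy data; the grid of values is an arbitrary choice −
from sklearn import linear_model

for alpha in (0.1, 0.5, 1.0, 2.0):
    ENreg = linear_model.ElasticNet(alpha = alpha, random_state = 0)
    ENreg.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
    # larger alpha => stronger penalty => smaller weights, larger intercept
    print(alpha, ENreg.coef_, ENreg.intercept_)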