{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", " ARTIFICIAL INTELLIGENCE (E016350A)
\n", "ALEKSANDRA PIZURICA
\n", "GHENT UNIVERSITY
\n", "AY 2024/2025
\n", "Assistant: Nicolas Vercheval\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Regularization - part I" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In the theory class, you have seen the role of regularization in regression and classification problems and some common types of regularisation techniques, like $\\ell_1$ and $\\ell_2$ regularisation. We explained the linear least squares regression with $\\ell_2$ regularization (Ridge regression or Tikhonov regularization) and the linear least squares regression with $\\ell_1$ regularization (LASSO regression). Now, you experiment with these regularisation approaches and with a combined $\\ell_1-\\ell_2$ regularization (Elasticnet regression)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The following examples illustrate the standard regularizations that accompany regression models. Other libraries refer to a `penalty` parameter, which enables the setting of some regularization technique." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "from matplotlib import pyplot as plt \n", "from sklearn import linear_model, model_selection, metrics, datasets, preprocessing" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "np.random.seed(7)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use a set of data to predict real estate prices." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from sklearn.datasets import fetch_california_housing\n", "data = fetch_california_housing()\n", "df = pd.DataFrame(data.data, columns=data.feature_names)\n", "df['Target'] = data.target\n", "df" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This set has 8 attributes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Split the dataset and perform feature normalization\n", "\n", "Split the dataset using a train-test split of 2 : 1. Set the random state to 42.\n", "\n", "Perform feature normalization." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_train, X_test, y_train, y_test = # Your code here..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "scaler = # Your code here..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 1. Linear regression" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We use the simple linear regression model as the base model.\n", "\n", "#### Exercise: Train a linear regression model on the data" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "linear = # Your code here..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Model coefficients can be obtained via the `coef_` property." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "linear.coef_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will monitor the performance of the model at the training set and the test set. We will use the coefficient of determination as a metric." 
{ "cell_type": "markdown", "metadata": {}, "source": [ "The model coefficients can be obtained via the `coef_` attribute." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "linear.coef_" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will monitor the performance of the model on the training set and on the test set. As the metric, we use the coefficient of determination $$R^2 = 1 - \\frac{\\sum_i (y_i - \\hat{y}_i)^2}{\\sum_i (y_i - \\bar{y})^2},$$ which is what the `score` method of `scikit-learn` regressors returns." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "linear_train_score = linear.score(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "linear_test_score = linear.score(X_test, y_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Training: ', linear_train_score, '\\nTesting: ', linear_test_score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 2. Ridge regression (Tikhonov regularization)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recall from the theory class that linear regression with the squared loss function and $\\ell_2$ regularization is called Ridge regression or Tikhonov regularization. In this case, the weights $w_i$ are determined by minimizing the following cost function: $$\\|\\textbf{y}-\\textbf{Xw}\\|^2_2+\\lambda\\|\\textbf{w}\\|^2_2.$$ The parameter $\\lambda$ is a hyperparameter that controls the strength of the regularization: the larger its value, the more the model is encouraged to have small coefficients. The coefficients obtained in this way can be close to zero, but they are rarely exactly zero: once a coefficient is small, its squared value contributes almost nothing to the cost, so there is little pressure to shrink it all the way to zero. This cost function also admits the closed-form solution $\\textbf{w} = (\\textbf{X}^T\\textbf{X} + \\lambda \\textbf{I})^{-1}\\textbf{X}^T\\textbf{y}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Linear regression with $\\ell_2$ regularization is supported by the `scikit-learn` library via the `Ridge` class. The `alpha` parameter plays the role of the regularization hyperparameter $\\lambda$. Its value must be a positive number." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Train the `Ridge` regression model on the training data\n", "\n", "Use `lambda_ridge` as the $\\lambda$ parameter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# alpha is a different notation for the \\lambda parameter (lambda is already a Python keyword!)\n", "lambda_ridge = 100\n", "ridge = # Your code here..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge.coef_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Squared sum of the coefficients without regularization:', (linear.coef_ ** 2).sum())\n", "print('Squared sum of the coefficients with Ridge regularization:', (ridge.coef_ ** 2).sum())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge_train_score = ridge.score(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "ridge_test_score = ridge.score(X_test, y_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Training: ', ridge_train_score, '\\nTesting: ', ridge_test_score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 3. Lasso regression (linear regression with $\\ell_1$ regularization)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "LASSO (Least Absolute Shrinkage and Selection Operator) regularization adds the term $\\lambda\\|\\textbf{w}\\|_1= \\lambda\\sum\\limits_{i = 1}^{N}{|w_{i}|}$ to the squared error term of the regression model. The parameter $\\lambda$ is again a hyperparameter that controls the strength of the regularization. Unlike ridge regression, such models can produce coefficients that are exactly zero." ] },
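{ "cell_type": "markdown", "metadata": {}, "source": [ "To preview this sparsity effect, the following sketch (assuming the training data prepared earlier) fits lasso and ridge models over a range of regularization strengths and counts how many coefficients end up exactly zero:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Sketch: the lasso drives coefficients exactly to zero as alpha grows,\n", "# whereas ridge only shrinks them towards zero.\n", "for a in [0.001, 0.01, 0.1, 1]:\n", "    zeros_l1 = (linear_model.Lasso(alpha=a).fit(X_train, y_train).coef_ == 0).sum()\n", "    zeros_l2 = (linear_model.Ridge(alpha=a).fit(X_train, y_train).coef_ == 0).sum()\n", "    print(f'alpha={a}: lasso zeros={zeros_l1}, ridge zeros={zeros_l2}')" ] },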
{ "cell_type": "markdown", "metadata": {}, "source": [ "Linear regression with lasso regularization is supported by the `scikit-learn` library via the `Lasso` class. The `alpha` parameter plays the role of the regularization hyperparameter $\\lambda$. Its value must be a positive number." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Train the `Lasso` regression model on the training data\n", "\n", "Use `lambda_lasso` as the $\\lambda$ parameter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_lasso = 0.01\n", "lasso = # Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lasso.coef_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Absolute sum of the coefficients without regularization:', abs(linear.coef_).sum())\n", "print('Absolute sum of the coefficients with Lasso regularization:', abs(lasso.coef_).sum())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lasso_train_score = lasso.score(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lasso_test_score = lasso.score(X_test, y_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Training: ', lasso_train_score, '\\nTesting: ', lasso_test_score)" ] },
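{ "cell_type": "markdown", "metadata": {}, "source": [ "A useful side effect of the $\\ell_1$ penalty is automatic feature selection. As a quick check (assuming the `lasso` model fitted in the exercise above), we can list the features, if any, whose coefficients were driven to zero:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Features whose lasso coefficients are exactly zero (assumes a fitted `lasso`).\n", "zeroed = [name for name, c in zip(data.feature_names, lasso.coef_) if c == 0]\n", "print('Features dropped by the lasso:', zeroed)" ] },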
{ "cell_type": "markdown", "metadata": {}, "source": [ "### 4. ElasticNet regression (linear regression with $\\ell_1$ and $\\ell_2$ regularization)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "`ElasticNet` is a type of regularization that combines the $\\ell_1$ and $\\ell_2$ penalties. The regularization expression added to the model is $a\\|\\textbf{w}\\|_1 + 0.5\\, b\\|\\textbf{w}\\|^2_2$. For $a=0$, the expression corresponds to ridge regularization, while for $b=0$, it corresponds to lasso regularization. This type of regularization is supported by the `scikit-learn` library via the `ElasticNet` class, whose parameters `alpha` and `l1_ratio` are defined so that $\\alpha=a+b$ and $l1\\_ratio = \\frac{a}{a+b}$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Exercise: Train the `ElasticNet` regression model on the training data\n", "\n", "Use `lambda_elastic` as the `alpha` parameter and `l1_ratio` as the `l1_ratio` parameter." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "lambda_elastic = 0.005\n", "l1_ratio = 0.5\n", "elastic = # Your code here" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "elastic.coef_" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "elastic_train_score = elastic.score(X_train, y_train)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "elastic_test_score = elastic.score(X_test, y_test)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Squared sum of the coefficients without regularization:', (linear.coef_ ** 2).sum())\n", "print('Squared sum of the coefficients with ElasticNet regularization:', (elastic.coef_ ** 2).sum())\n", "print('Absolute sum of the coefficients without regularization:', abs(linear.coef_).sum())\n", "print('Absolute sum of the coefficients with ElasticNet regularization:', abs(elastic.coef_).sum())" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print('Training: ', elastic_train_score, '\\nTesting: ', elastic_test_score)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Visualization of the model coefficients" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "number_of_features = len(data.feature_names)\n", "plt.figure(figsize=(10, 5))\n", "plt.xticks(np.arange(0, number_of_features), data.feature_names, rotation='horizontal')\n", "plt.plot(linear.coef_, '^', label='Without regularization')\n", "plt.plot(ridge.coef_, 'o', label=f'Ridge regression (alpha = {lambda_ridge})')\n", "plt.plot(lasso.coef_, 'v', label=f'Lasso regression (alpha = {lambda_lasso})')\n", "plt.plot(elastic.coef_, 'x', label=f'ElasticNet regression (alpha = {lambda_elastic})')\n", "plt.plot(np.arange(0, number_of_features), np.zeros(number_of_features), color='gray', linestyle='--')\n", "plt.legend(loc='best')\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The values of the hyperparameters in regularized models are determined in the same way as for the models seen so far, typically via cross-validation; `scikit-learn` also provides the convenience classes `RidgeCV`, `LassoCV`, and `ElasticNetCV` for this purpose." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.9" } }, "nbformat": 4, "nbformat_minor": 4 }