Solution Method#
Let’s go through Ordinary Least Squares (OLS) regression step by step, focusing on deriving the solution approach in detail.
Ordinary Least Squares (OLS) Regression#
Objective: The goal of OLS regression is to find the best-fitting linear relationship between a set of input features \( \mathbf{X} \) and a target variable \( \mathbf{y} \). The best-fitting line is determined by minimizing the sum of squared differences between the observed values and the predicted values.
1. Formulation of the Problem#
Given:
A dataset with \( n \) observations and \( d \) features.
Input features matrix \( \mathbf{X} \) of size \( n \times d \), where each row represents an observation and each column represents a feature.
Target vector \( \mathbf{y} \) of size \( n \times 1 \).
Model: The linear model is expressed as:
\[ \mathbf{y} = \mathbf{X} \boldsymbol{\beta} + \mathbf{e} \]
where:
\( \mathbf{X} \) is the \( n \times d \) matrix of input features.
\( \boldsymbol{\beta} \) is the \( d \times 1 \) vector of coefficients.
\( \mathbf{e} \) is the \( n \times 1 \) vector of errors or residuals.
The objective is to estimate the coefficients \( \boldsymbol{\beta} \) that minimize the sum of squared residuals.
2. Objective Function#
The objective is to minimize the Residual Sum of Squares (RSS):
\[ \text{RSS}(\boldsymbol{\beta}) = \| \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \|^2 \]
where \( \| \cdot \|^2 \) denotes the squared Euclidean norm, which is:
\[ \| \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \|^2 = \sum_{i=1}^{n} \left( y_i - \mathbf{x}_i^T \boldsymbol{\beta} \right)^2 \]
3. Expanding the Objective Function#
Expand the RSS expression:
\[ \text{RSS}(\boldsymbol{\beta}) = (\mathbf{y} - \mathbf{X} \boldsymbol{\beta})^T (\mathbf{y} - \mathbf{X} \boldsymbol{\beta}) = \mathbf{y}^T \mathbf{y} - 2 \mathbf{y}^T \mathbf{X} \boldsymbol{\beta} + \boldsymbol{\beta}^T \mathbf{X}^T \mathbf{X} \boldsymbol{\beta} \]
Here:
\( \mathbf{y}^T \mathbf{y} \) is a constant with respect to \( \boldsymbol{\beta} \).
\( -2 \mathbf{y}^T \mathbf{X} \boldsymbol{\beta} \) is the linear term.
\( \boldsymbol{\beta}^T \mathbf{X}^T \mathbf{X} \boldsymbol{\beta} \) is the quadratic term.
4. Finding the Minimum#
To find the minimum of the RSS, we take the derivative of the RSS with respect to \( \boldsymbol{\beta} \) and set it to zero:
\[ \frac{\partial \, \text{RSS}}{\partial \boldsymbol{\beta}} = -2 \mathbf{X}^T \mathbf{y} + 2 \mathbf{X}^T \mathbf{X} \boldsymbol{\beta} = 0 \]
Simplify the equation to obtain the normal equations:
\[ \mathbf{X}^T \mathbf{X} \boldsymbol{\beta} = \mathbf{X}^T \mathbf{y} \]
5. Solving for \( \boldsymbol{\beta} \)#
To solve for \( \boldsymbol{\beta} \), we rearrange the normal equations:
\[ \hat{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y} \]
Here, \( (\mathbf{X}^T \mathbf{X})^{-1} \) is the matrix inverse of \( \mathbf{X}^T \mathbf{X} \), which exists if \( \mathbf{X}^T \mathbf{X} \) is non-singular (i.e., has full rank).
6. Solution Summary#
The closed-form solution for the OLS regression coefficients \( \boldsymbol{\beta} \) is:
\[ \hat{\boldsymbol{\beta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y} \]
7. Additional Considerations#
Regularization: In practice, OLS might be augmented with regularization (e.g., Ridge Regression) to handle multicollinearity or overfitting.
Singularity: If \( \mathbf{X}^T \mathbf{X} \) is singular, pseudoinverse techniques (e.g., Moore-Penrose pseudoinverse) can be used.
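To illustrate the singular case, here is a small NumPy sketch (the duplicated-column matrix is an assumption for illustration): when a column of \( \mathbf{X} \) repeats, \( \mathbf{X}^T \mathbf{X} \) is singular, and the Moore-Penrose pseudoinverse picks the minimum-norm least-squares solution.

```python
import numpy as np

# Design matrix whose second column duplicates the first,
# so X^T X is singular and the plain inverse does not exist.
X = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

# Moore-Penrose pseudoinverse: minimum-norm least-squares solution
beta_pinv = np.linalg.pinv(X) @ y

# np.linalg.lstsq solves the same problem via SVD
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_pinv)   # the identical columns share the weight: [0.5, 0.5]
print(np.allclose(beta_pinv, beta_lstsq))
```

Any \( \boldsymbol{\beta} \) with \( \beta_1 + \beta_2 = 1 \) fits equally well here; the pseudoinverse selects the one with smallest Euclidean norm, \( (0.5, 0.5) \).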
8. Example#
Let’s walk through a simple example with two features.
Given:
\( \mathbf{X} = \begin{bmatrix} 1 & 2 \\ 2 & 3 \\ 3 & 4 \end{bmatrix} \)
\( \mathbf{y} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \)
Step-by-Step Solution:
Compute \( \mathbf{X}^T \mathbf{X} \):
\[ \mathbf{X}^T \mathbf{X} = \begin{bmatrix} 14 & 20 \\ 20 & 29 \end{bmatrix} \]
Compute \( \mathbf{X}^T \mathbf{y} \):
\[ \mathbf{X}^T \mathbf{y} = \begin{bmatrix} 14 \\ 20 \end{bmatrix} \]
Compute \( (\mathbf{X}^T \mathbf{X})^{-1} \) (the determinant is \( 14 \cdot 29 - 20 \cdot 20 = 6 \)):
\[ (\mathbf{X}^T \mathbf{X})^{-1} = \frac{1}{6} \begin{bmatrix} 29 & -20 \\ -20 & 14 \end{bmatrix} \]
Compute \( \hat{\boldsymbol{\beta}} \):
\[ \hat{\boldsymbol{\beta}} = \frac{1}{6} \begin{bmatrix} 29 & -20 \\ -20 & 14 \end{bmatrix} \begin{bmatrix} 14 \\ 20 \end{bmatrix} = \frac{1}{6} \begin{bmatrix} 6 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \]
Thus, the estimated coefficients are \( \hat{\beta}_1 = 1 \) and \( \hat{\beta}_2 = 0 \). This makes sense: here \( \mathbf{y} \) equals the first column of \( \mathbf{X} \) exactly, so the fit is perfect.
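The hand computation above can be verified numerically; a minimal NumPy sketch that solves the normal equations for the same data:

```python
import numpy as np

X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
y = np.array([1.0, 2.0, 3.0])

# Closed-form OLS: solve the normal equations X^T X beta = X^T y
XtX = X.T @ X
Xty = X.T @ y
beta = np.linalg.solve(XtX, Xty)

print(XtX)   # -> [[14, 20], [20, 29]] as computed by hand
print(Xty)   # -> [14, 20]
print(beta)  # -> approximately [1, 0]
```

Using `np.linalg.solve` on the normal equations avoids forming the explicit inverse, which is both cheaper and numerically safer than computing \( (\mathbf{X}^T \mathbf{X})^{-1} \) directly.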
This completes the detailed derivation and solution approach for OLS regression.
In fact, \( \text{prox}_{\lambda f}(v) \) is the point that balances staying close to the vector \( v \) against decreasing the function \( f \):
\[ \text{prox}_{\lambda f}(v) = \arg\min_{x} \left[ f(x) + \frac{1}{2\lambda} \| x - v \|^2 \right] \]
That is why it is called the proximal mapping: the result lies between the minimizer of \( f \) and \( v \).
The number \( \lambda \) is a control parameter. For large \( \lambda \), the quadratic term \( \frac{1}{2\lambda} \| x - v \|^2 \) carries little weight, so \( \text{prox}_{\lambda f}(v) \) moves close to the minimizer of \( f \) and away from \( v \). Conversely, for small \( \lambda \), the quadratic term dominates, so \( \text{prox}_{\lambda f}(v) \) stays close to \( v \) and far from the minimizer of \( f \).
In iterative algorithms for minimizing \( f \), each step is anchored to the previous point, just as in gradient descent. For example, in gradient descent \( x_{n+1} = x_n - \alpha \nabla f(x_n) \), the gradient and the previous point are combined to reach the next point. The same is true for the proximal point algorithm, whose iteration is \( x_n = \text{prox}_{\lambda f}(x_{n-1}) \).
When \( f \) is smooth (differentiable), each subproblem \( \min_x \, f(x) + \frac{1}{2\lambda} \| x - x_{n-1} \|^2 \) can be solved approximately with an explicit method such as gradient descent, a common technique in convex optimization. Choosing a smaller \( \lambda \) makes the quadratic term \( \frac{1}{2\lambda} \| x - x_{n-1} \|^2 \) more significant, so each subproblem is more strongly convex; the inner gradient method then converges faster and needs fewer steps.
Thus, in the proximal point algorithm, shrinking \( \lambda \) simplifies the subproblems and improves their convergence rate. However, it also shortens each step, so the overall convergence toward the minimizer of the main objective function slows down.
The proximal point algorithm can also be viewed as an implicit (backward) subgradient step, which is why it applies to nonsmooth optimization: the update satisfies \( x_n = x_{n-1} - \lambda g_n \) with \( g_n \in \partial f(x_n) \).
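To make the role of \( \lambda \) concrete, consider the toy example \( f(x) = x^2 \) (my own assumption for illustration), whose minimizer is \( 0 \). Setting the derivative of \( x^2 + \frac{1}{2\lambda}(x - v)^2 \) to zero gives the closed form \( \text{prox}_{\lambda f}(v) = v / (1 + 2\lambda) \):

```python
# Toy example: f(x) = x**2, minimized at 0.
# Closed form: prox_{lam f}(v) = v / (1 + 2*lam), from
# d/dx [ x**2 + (x - v)**2 / (2*lam) ] = 0.
def prox_quadratic(v, lam):
    return v / (1.0 + 2.0 * lam)

v = 4.0
for lam in [0.01, 1.0, 100.0]:
    # small lam -> result near v; large lam -> result near the minimizer 0
    print(lam, prox_quadratic(v, lam))
```

The printed values interpolate between \( v = 4 \) (small \( \lambda \)) and the minimizer \( 0 \) (large \( \lambda \)), exactly the trade-off described above.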
Projection Operator onto a Set \( C \)#
The projection operator \( P_C(v) \) is used to project a point \( v \) onto a set \( C \). It is defined as:
\[ P_C(v) = \arg\min_{x \in C} \frac{1}{2} \| x - v \|^2 \]
This projection can also be expressed using the indicator function \( I_C(x) \) of the set \( C \):
\[ P_C(v) = \text{prox}_{I_C}(v) = \arg\min_{x} \left[ I_C(x) + \frac{1}{2} \| x - v \|^2 \right] \]
where the indicator function \( I_C(x) \) is defined as:
\[ I_C(x) = \begin{cases} 0 & \text{if } x \in C \\ +\infty & \text{if } x \notin C \end{cases} \]
Proximal Gradient Algorithm#
The proximal gradient algorithm is used to minimize the problem:
\[ \min_{x \in H} \; f(x) + g(x) \]
where \( f : H \to \mathbb{R} \) is a convex and differentiable function, and \( g : H \to (-\infty, +\infty] \) is a convex, proper, and lower semicontinuous function. For example, the constrained minimization problem:
\[ \min_{x \in C} \; f(x) \]
can be rewritten as the unconstrained minimization problem:
\[ \min_{x \in H} \; f(x) + I_C(x) \]
where \( I_C(x) \) is the indicator function of the set \( C \). The proximal gradient algorithm combines a steepest descent step for \( f \) with a proximal step for \( g \):
\[ x_{k+1} = \text{prox}_{\lambda_k g} \left( x_k - \lambda_k \nabla f(x_k) \right) \]
where \( \lambda_k > 0 \) is the step size. The proximal operator \( \text{prox}_{\lambda_k g} \) is used to handle the non-smooth term \( g \) in the optimization problem.
Conceptual Combination#
The proximal gradient algorithm leverages the concept of the projection operator \( P_C(x) \). Specifically, the proximal operator for the indicator function \( I_C \) is exactly the projection operator:
\[ \text{prox}_{\lambda I_C}(x) = P_C(x) \]
This equivalence implies that the proximal gradient algorithm effectively incorporates the projection operator into its iterative steps. For the constrained problem, the iteration becomes projected gradient descent:
\[ x_{k+1} = P_C \left( x_k - \lambda_k \nabla f(x_k) \right) \]
Each step therefore decreases \( f \) while enforcing the constraint encoded by \( g \), leveraging the convexity and differentiability of the functions involved. Thus, the proximal gradient algorithm combines the steepest descent method for \( f \) with the projection operator to handle constraints, achieving convergence toward a solution of the optimization problem.
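A minimal sketch of projected gradient descent, using an assumed toy problem: minimize \( f(x) = \frac{1}{2}\|x - a\|^2 \) over the box \( C = [0,1]^2 \), with \( a \) outside the box, so the constrained minimizer is the projection of \( a \) onto \( C \).

```python
import numpy as np

# Assumed toy problem: f(x) = 0.5*||x - a||^2 with a outside the box C = [0, 1]^2.
a = np.array([2.0, -0.5])

def grad_f(x):
    return x - a

def project_box(x, lo=0.0, hi=1.0):
    # Euclidean projection onto the box: clip each coordinate
    return np.clip(x, lo, hi)

# Projected gradient descent: x_{k+1} = P_C(x_k - lam * grad_f(x_k))
x = np.array([0.5, 0.5])
lam = 0.5  # step size
for _ in range(100):
    x = project_box(x - lam * grad_f(x))

print(x)  # converges to the projection of a onto the box: [1. 0.]
```

For this problem the iterates reach the constrained minimizer \( (1, 0) \), i.e. \( a \) clipped into the box, after a couple of steps.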
The appropriate choice of the step sizes \( \lambda_k \) ensures weak convergence of the sequence \( (x_k) \) to a minimizer of \( f + g \). The method is also known as a splitting algorithm because it decomposes the objective function into a sum of two convex functions and treats each part separately.
Additionally, the proximal gradient algorithm can be interpreted as an iterative method for approximating a fixed point. Specifically, \( x^* \) is a solution to the problem if and only if it is a fixed point of the operator:
\[ T = \text{prox}_{\lambda g} \circ \left( I - \lambda \nabla f \right) \]
where \( I \) is the identity operator. The algorithm iteratively applies this operator to converge to a solution.
Proximal Mapping: A Comprehensive Overview#
Proximal mapping is a crucial concept in optimization, especially in problems involving non-differentiable terms or constraints. It plays a significant role in modern optimization techniques, such as those used in machine learning, signal processing, and statistics.
1. Definition of Proximal Mapping#
The proximal mapping of a function \( f \) at a point \( x \) is defined as the solution to the following optimization problem:
\[ \text{prox}_{\lambda f}(x) = \arg\min_{u} \left[ f(u) + \frac{1}{2\lambda} \| u - x \|^2 \right] \]
where:
\( \lambda \) is a positive scalar (regularization parameter).
\( \| \cdot \| \) denotes the Euclidean norm (or another norm, depending on context).
The term \( \frac{1}{2\lambda} \| u - x \|^2 \) represents a quadratic regularization that penalizes deviations of \( u \) from \( x \). The function \( f(u) \) is typically non-smooth or non-differentiable, and the proximal operator provides a way to handle such non-smoothness.
2. Geometric Interpretation#
The proximal mapping can be thought of as finding a point \( u \) that balances:
The value of the function \( f(u) \), which is often non-differentiable.
The squared distance \( \frac{1}{2\lambda} \| u - x \|^2 \) from the point \( x \).
This trade-off can be interpreted as a form of regularization where the function \( f \) is smoothed out or approximated by adding a quadratic penalty.
3. Applications of Proximal Mapping#
Proximal mapping is used in various optimization problems, particularly where:
Regularization: There are regularization terms (e.g., \( L_1 \) norm) that are not differentiable.
Constraint Handling: Problems involve constraints that can be handled using proximal operators.
Sparse Solutions: Techniques like Lasso regression utilize proximal operators to induce sparsity.
4. Examples of Proximal Operators#
Proximal Operator for \( L_1 \) Norm (Soft-Thresholding)
For \( f(u) = \| u \|_1 \), the proximal operator is:
\[ \text{prox}_{\lambda \| \cdot \|_1}(x) = \text{sgn}(x) \cdot \max(|x| - \lambda, 0) \]
This is used in Lasso regression, where it promotes sparsity in the solution by shrinking coefficients towards zero.
Proximal Operator for \( L_2 \) Norm
For \( f(u) = \frac{1}{2} \| u \|_2^2 \), the proximal operator is:
\[ \text{prox}_{\lambda \frac{1}{2} \| \cdot \|_2^2}(x) = \frac{x}{1 + \lambda} \]
This is the shrinkage used in Ridge regression: the quadratic penalty scales every coefficient toward zero by the factor \( 1/(1+\lambda) \), but never sets it exactly to zero.
Proximal Operator for \( L_0 \) Norm
The \( L_0 \) norm is not convex, so the proximal operator is not well-defined in the classical sense. However, approximations can be used in algorithms like the iterative hard thresholding algorithm.
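The first two operators above take only a few lines of NumPy; a sketch, with the \( L_2 \) form \( x/(1+\lambda) \) derived directly from the proximal definition:

```python
import numpy as np

# Soft-thresholding: prox of lam * ||.||_1
def prox_l1(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Prox of lam * (1/2)*||.||_2^2, from the definition:
# argmin_u lam/2*||u||^2 + 1/2*||u - x||^2  =>  u = x / (1 + lam)
def prox_l2_squared(x, lam):
    return x / (1.0 + lam)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
print(prox_l1(x, 1.0))          # large entries shrink by 1, small ones become 0
print(prox_l2_squared(x, 1.0))  # every entry is halved, none becomes exactly 0
```

The contrast is visible immediately: the \( L_1 \) prox zeroes out small entries (sparsity), while the \( L_2 \) prox only rescales them.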
5. Proximal Gradient Method#
The proximal gradient method is used for optimizing problems where the objective function can be split into a smooth part and a non-smooth part:
\[ \min_{u} \; f(u) + g(u) \]
where \( f \) is smooth (differentiable) and \( g \) is non-smooth. The method iterates as follows:
Gradient Step: Take a gradient step on the smooth part \( f \):
\[ u^{(k+1/2)} = u^{(k)} - \eta \nabla f(u^{(k)}) \]
Proximal Step: Apply the proximal operator to handle the non-smooth part \( g \):
\[ u^{(k+1)} = \text{prox}_{\eta g}(u^{(k+1/2)}) \]
This approach allows handling non-differentiable terms efficiently by separating smooth and non-smooth components.
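The two steps compose into a single update; a generic sketch, applied to an assumed toy instance \( f(u) = \frac{1}{2}\|u - c\|^2 \), \( g(u) = \|u\|_1 \) (the names `grad_f` and `prox_g` are illustrative, not from any library):

```python
import numpy as np

def proximal_gradient_step(u, grad_f, prox_g, eta):
    """One iteration: gradient step on f, then prox step on g."""
    u_half = u - eta * grad_f(u)   # gradient step on the smooth part
    return prox_g(u_half, eta)     # proximal step on the non-smooth part

# Assumed toy instance: f(u) = 0.5*||u - c||^2, g(u) = ||u||_1
c = np.array([2.0, 0.3])
grad_f = lambda u: u - c
prox_l1 = lambda v, eta: np.sign(v) * np.maximum(np.abs(v) - eta, 0.0)

u = np.zeros(2)
for _ in range(200):
    u = proximal_gradient_step(u, grad_f, prox_l1, eta=0.5)
print(u)  # converges to [1. 0.]: soft-thresholding of c by 1
```

The limit agrees with the exact minimizer of \( f + g \), which is \( \text{sgn}(c)\max(|c|-1, 0) = (1, 0) \) for this \( c \).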
6. Connection with Duality#
Proximal mapping also has a connection with duality theory in optimization. In many problems, solving the primal problem directly can be challenging, and the dual problem provides a different perspective. The proximal operator can be used to solve dual problems efficiently, particularly in convex optimization.
7. Algorithms Using Proximal Mapping#
Coordinate Descent: Iterates over each coordinate separately while applying the proximal operator.
Iterative Hard Thresholding (IHT): Applies proximal mapping in iterative steps for sparse recovery problems.
ADMM (Alternating Direction Method of Multipliers): Uses proximal operators to solve optimization problems with multiple constraints.
Summary#
Proximal Mapping is a versatile and powerful tool in optimization, especially for problems involving non-smooth functions or constraints. It provides a way to solve optimization problems by balancing the objective function and a regularization term through a quadratic penalty. The proximal operator helps handle non-differentiable terms efficiently, making it a fundamental concept in modern optimization algorithms.
To understand the proximal operator for the \( L_1 \) norm, \( \| \cdot \|_1 \), let’s go through the steps of deriving it. The proximal operator is a useful tool for solving optimization problems with non-differentiable functions, such as the \( L_1 \) norm.
1. Proximal Operator Definition#
The proximal operator of a function \( f \) at a point \( x \) is defined as:
\[ \text{prox}_{\lambda f}(x) = \arg\min_{u} \left[ f(u) + \frac{1}{2\lambda} \| u - x \|^2 \right] \]
where \( \lambda \) is a positive scalar and \( \| \cdot \| \) denotes the Euclidean norm.
2. Applying to the \( L_1 \) Norm#
For the \( L_1 \) norm, the function \( f(u) = \| u \|_1 \) is:
\[ \| u \|_1 = \sum_{i} |u_i| \]
So, we need to find:
\[ \text{prox}_{\lambda \| \cdot \|_1}(x) = \arg\min_{u} \left[ \| u \|_1 + \frac{1}{2\lambda} \| u - x \|^2 \right] \]
3. Minimization Problem#
We can rewrite the problem as:
\[ \arg\min_{u} \left[ \sum_{i} |u_i| + \frac{1}{2\lambda} \sum_{i} (u_i - x_i)^2 \right] \]
This can be separated into component-wise problems because the \( L_1 \) norm is separable. For each component \( i \), we minimize:
\[ \min_{u_i} \left[ |u_i| + \frac{1}{2\lambda} (u_i - x_i)^2 \right] \]
4. Derivative with Respect to \( u_i \)#
To solve this, take the derivative of the objective function with respect to \( u_i \). Since \( |u_i| \) is not differentiable at \( u_i = 0 \), we consider the subdifferential. For \( u_i \neq 0 \), the derivative is:
\[ \text{sgn}(u_i) + \frac{1}{\lambda} (u_i - x_i) \]
Setting this to zero gives:
\[ \text{sgn}(u_i) + \frac{1}{\lambda} (u_i - x_i) = 0 \]
Solving for \( u_i \), and noting that a non-zero minimizer must have the same sign as \( x_i \), we get:
\[ u_i = x_i - \lambda \, \text{sgn}(x_i) \]
5. Handling Different Cases#
If \( |x_i| > \lambda \): The solution is:
\[ u_i = x_i - \lambda \, \text{sgn}(x_i) \]
Here \( |x_i| > \lambda \), so \( u_i \) is non-zero.
If \( |x_i| \leq \lambda \): The solution is:
\[ u_i = 0 \]
Here \( |x_i| \leq \lambda \), so \( u_i \) is zero: the penalty outweighs any decrease in the quadratic term that a non-zero \( u_i \) could achieve.
6. Putting It All Together#
Combining these cases, the proximal operator for the \( L_1 \) norm is, component-wise:
\[ \left[ \text{prox}_{\lambda \| \cdot \|_1}(x) \right]_i = \text{sgn}(x_i) \cdot \max(|x_i| - \lambda, 0) \]
where:
\( \text{sgn}(x_i) \) is the sign function, giving \( +1 \) if \( x_i > 0 \), \( -1 \) if \( x_i < 0 \), and \( 0 \) if \( x_i = 0 \).
\( \max(|x_i| - \lambda, 0) \) is the soft-thresholding operation, shrinking the value of \( x_i \) by \( \lambda \) if \( |x_i| > \lambda \), and setting it to 0 otherwise.
Summary#
The proximal operator for the \( L_1 \) norm, \( \| \cdot \|_1 \), applies soft-thresholding to each component of \( x \). It reduces the magnitude of each component by \( \lambda \) if the component’s magnitude is greater than \( \lambda \), and sets it to zero otherwise. This operation is fundamental in various optimization problems, especially those involving sparse solutions like Lasso regression.
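The closed form can be sanity-checked against a brute-force search over a fine grid; a sketch with \( \lambda = 0.8 \) (an arbitrary test value):

```python
import numpy as np

lam = 0.8  # arbitrary test value

def soft_threshold(x, lam):
    return np.sign(x) * max(abs(x) - lam, 0.0)

# Brute force: minimize |u| + (u - x)**2 / (2*lam) over a fine grid
def prox_bruteforce(x, lam):
    grid = np.linspace(-5, 5, 200001)
    objective = np.abs(grid) + (grid - x) ** 2 / (2 * lam)
    return grid[np.argmin(objective)]

for x in [-2.0, -0.3, 0.0, 0.3, 2.0]:
    assert abs(soft_threshold(x, lam) - prox_bruteforce(x, lam)) < 1e-3
print("closed form matches brute force")
```

Values inside \( [-\lambda, \lambda] \) map to zero in both versions, and values outside shift toward zero by exactly \( \lambda \).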
Let’s carefully work through the proximal operator calculation for the function \( g(x) = \lambda |x| \), which involves the \( L_1 \) norm, deriving the proximal operator step by step.
1. Proximal Operator Definition#
Given a function \( g: \mathbb{R} \to \mathbb{R} \) defined by \( g(x) = \lambda |x| \) with \( \lambda > 0 \), we want to find the proximal operator:
\[ \text{prox}_{g}(x) = \arg\min_{u} \left[ g(u) + \frac{1}{2} (u - x)^2 \right] \]
Substitute \( g(u) = \lambda |u| \):
\[ \text{prox}_{g}(x) = \arg\min_{u} \left[ \lambda |u| + \frac{1}{2} (u - x)^2 \right] \]
2. Piecewise Analysis#
The function to minimize is piecewise linear, so we’ll consider two cases based on the sign of \( u \):
Case 1: \( u > 0 \)#
In this case, \( |u| = u \). So the objective function becomes:
\[ \lambda u + \frac{1}{2} (u - x)^2 \]
To find the minimizer, take the derivative with respect to \( u \):
\[ \lambda + (u - x) \]
Setting the derivative to zero to find the critical point:
\[ u = x - \lambda \]
We need to check that \( u = x - \lambda \) is valid for \( u > 0 \). Hence, if \( x - \lambda > 0 \), the solution is:
\[ u = x - \lambda \]
Case 2: \( u \leq 0 \)#
In this case, \( |u| = -u \). So the objective function becomes:
\[ -\lambda u + \frac{1}{2} (u - x)^2 \]
Take the derivative with respect to \( u \):
\[ -\lambda + (u - x) \]
Setting the derivative to zero:
\[ u = x + \lambda \]
We need to check that \( u = x + \lambda \) is valid for \( u \leq 0 \). Hence, if \( x + \lambda \leq 0 \), the solution is:
\[ u = x + \lambda \]
3. Special Case: \( u = 0 \)#
At \( u = 0 \), the objective function is:
\[ \lambda \cdot 0 + \frac{1}{2} (0 - x)^2 = \frac{1}{2} x^2 \]
To ensure \( u = 0 \) is the minimizer, check the subdifferential condition \( 0 \in \partial \left[ \lambda |u| + \frac{1}{2}(u - x)^2 \right] \) at \( u = 0 \), which reduces to \( x \in [-\lambda, \lambda] \).
If \( |x| \leq \lambda \), neither of the two previous cases applies, and \( \text{prox}_{g}(x) = 0 \).
4. Combining Results#
Summarizing the results for the proximal operator:
\[ \text{prox}_{g}(x) = \begin{cases} x - \lambda & \text{if } x > \lambda \\ 0 & \text{if } |x| \leq \lambda \\ x + \lambda & \text{if } x < -\lambda \end{cases} \]
This can be compactly written using the soft-thresholding function \( T_\lambda(x) \):
\[ \text{prox}_{g}(x) = T_\lambda(x) = \text{sgn}(x) \cdot \max(|x| - \lambda, 0) \]
5. Interpretation#
The proximal operator \( \text{prox}_{g}(x) \) applies soft-thresholding, which:
Shrinks \( x \) toward zero by \( \lambda \) if \( x > \lambda \),
Shrinks \( x \) toward zero by \( \lambda \) if \( x < -\lambda \),
Sets \( x \) to zero if \( |x| \leq \lambda \).
This operator is widely used in Lasso regression and other sparse optimization problems where \( L_1 \) regularization is involved. The operation is also referred to as shrinkage or soft-thresholding.
Let’s delve into the proximal operator for the zero function \( g(x) \equiv 0 \), and understand why its proximal mapping is equivalent to the identity operator.
Proximal Operator for \( g(x) \equiv 0 \)#
Definition:
Given a function \( g: \mathbb{R} \to \mathbb{R} \) defined as \( g(x) = 0 \), the proximal operator is defined as:
\[ \text{prox}_{\lambda g}(x) = \arg\min_{u} \left[ g(u) + \frac{1}{2} \| u - x \|^2 \right] \]
Substituting \( g(u) = 0 \), the problem becomes:
\[ \text{prox}_{\lambda g}(x) = \arg\min_{u} \frac{1}{2} \| u - x \|^2 \]
Minimization Problem#
Let’s solve this minimization problem step-by-step:
Objective Function:
The objective function to minimize is:
\[ \frac{1}{2} \|u - x\|^2 \]
Here, \( \| \cdot \| \) denotes the Euclidean norm. For simplicity, in the one-dimensional case \( \mathbb{R} \), this reduces to:
\[ \frac{1}{2} (u - x)^2 \]
Finding the Minimum:
To find the minimizer \( u \), we take the derivative of the objective function with respect to \( u \):
\[ \frac{\partial}{\partial u} \left[ \frac{1}{2} (u - x)^2 \right] = u - x \]
Setting the derivative to zero to find the critical point:
\[ u - x = 0 \implies u = x \]
Therefore, the minimizer \( u \) that solves this problem is:
\[ \text{prox}_{\lambda g}(x) = x \]
Interpretation#
When \( g(x) \equiv 0 \):
The function \( g \) contributes nothing to the minimization problem.
The problem simplifies to finding the \( u \) that is closest to \( x \) in terms of Euclidean distance.
The solution is simply \( x \), because the closest point to \( x \) is \( x \) itself.
General Case#
In more general settings, the proximal operator of the zero function \( g \equiv 0 \) over any space \( E \) with norm \( \|\cdot\| \) is:
\[ \text{prox}_{\lambda g}(x) = \arg\min_{u \in E} \frac{1}{2} \| u - x \|^2 = x \]
Here \( \text{prox}_{\lambda g}(x) \) simplifies to \( x \) because the minimization is purely over the quadratic term \( \frac{1}{2} \|u - x\|^2 \), which attains its minimum at \( u = x \).
Conclusion#
The proximal mapping for \( g(x) \equiv 0 \) is equivalent to the identity operator. This is because:
The zero function does not add any additional constraints or penalties.
Therefore, the proximal operator simply returns the input value \( x \) as the solution to the minimization problem.
In summary, if \( g \) is the zero function, the proximal operator \( \text{prox}_{\lambda g}(x) \) is the identity operator, meaning:
\[ \text{prox}_{\lambda g}(x) = x \quad \text{for all } x \]
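A quick numeric check, minimizing the quadratic term over a grid purely for illustration:

```python
import numpy as np

# prox of g ≡ 0: minimize 0.5*(u - x)**2 over a grid and confirm u ≈ x
x = 1.7
grid = np.linspace(-5, 5, 100001)
u_star = grid[np.argmin(0.5 * (grid - x) ** 2)]
print(u_star)  # ≈ 1.7: the prox of the zero function is the identity
```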
Proximal Mapping#
Proximal mapping is a concept used in optimization, particularly in contexts involving regularization and sparsity. It is related to the notion of a proximal operator or proximal gradient descent, which helps in solving optimization problems where the objective function has a regularization term that is not differentiable.
Definition of Proximal Mapping#
Given a convex function \( f \) and a point \( x \), the proximal operator is defined as:
\[ \text{prox}_{\lambda f}(x) = \arg\min_{u} \left[ f(u) + \frac{1}{2\lambda} \| u - x \|^2 \right] \]
where:
\( \lambda \) is a positive scalar (regularization parameter).
\( \| \cdot \| \) denotes the Euclidean norm.
The proximal operator \( \text{prox}_{\lambda f}(x) \) provides a solution to the minimization problem that balances the term \( f(u) \) and the squared distance from \( x \).
Lasso Regression and Proximal Mapping#
Lasso (Least Absolute Shrinkage and Selection Operator) regression is a type of linear regression that includes an \( L_1 \) regularization term. This term encourages sparsity in the coefficient estimates, effectively performing feature selection.
Lasso Regression Formulation#
The objective of Lasso regression is to minimize the following cost function:
\[ \min_{\boldsymbol{\beta}} \left[ \frac{1}{2} \| \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \|^2 + \lambda \| \boldsymbol{\beta} \|_1 \right] \]
where:
\( \mathbf{X} \) is the \( n \times d \) matrix of input features.
\( \mathbf{y} \) is the \( n \times 1 \) vector of target values.
\( \boldsymbol{\beta} \) is the \( d \times 1 \) vector of coefficients.
\( \lambda \) is the regularization parameter that controls the amount of sparsity in \( \boldsymbol{\beta} \).
Solving Lasso Using Proximal Mapping#
To solve Lasso regression using proximal mapping, we can leverage the proximal operator for the \( L_1 \) norm, which is known as the soft-thresholding operator.
Proximal Operator for the \( L_1 \) Norm#
The proximal operator for the \( L_1 \) norm is given by:
\[ \text{prox}_{\lambda \| \cdot \|_1}(x) = \text{sgn}(x) \cdot \max(|x| - \lambda, 0) \]
where:
\( \text{sgn}(x) \) is the sign function.
\( \max(\cdot, 0) \) is the element-wise maximum function, ensuring that values below zero are set to zero.
Applying the Proximal Operator in Lasso#
Formulation: The Lasso objective function can be written as:
\[ \min_{\boldsymbol{\beta}} \left[ \frac{1}{2} \| \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \|^2 + \lambda \| \boldsymbol{\beta} \|_1 \right] \]
Optimization Problem: To solve this, we use iterative algorithms like Coordinate Descent or Proximal Gradient Descent. The proximal operator helps handle the non-differentiable \( L_1 \) penalty.
Gradient Descent with Proximal Step: We use gradient descent to minimize the smooth part of the objective function \( \frac{1}{2} \| \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \|^2 \), and then apply the proximal operator to deal with the \( L_1 \) norm.
Let’s denote the smooth part of the objective function as \( f(\boldsymbol{\beta}) = \frac{1}{2} \| \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \|^2 \). The gradient of \( f(\boldsymbol{\beta}) \) with respect to \( \boldsymbol{\beta} \) is:
\[ \nabla f(\boldsymbol{\beta}) = - \mathbf{X}^T (\mathbf{y} - \mathbf{X} \boldsymbol{\beta}) \]
We perform a gradient step with step size \( \eta \), producing an intermediate point:
\[ \boldsymbol{\beta}_{k+1/2} = \boldsymbol{\beta}_k - \eta \nabla f(\boldsymbol{\beta}_k) \]
After each gradient step, we apply the proximal operator:
\[ \boldsymbol{\beta}_{k+1} = \text{prox}_{\lambda \eta \| \cdot \|_1} (\boldsymbol{\beta}_{k+1/2}) \]
Here, the proximal operator applies element-wise soft-thresholding:
\[ \left[ \text{prox}_{\lambda \eta \| \cdot \|_1}(\boldsymbol{\beta}) \right]_j = \text{sgn}(\beta_j) \cdot \max(|\beta_j| - \lambda \eta, 0) \]
Algorithm:
Initialize \( \boldsymbol{\beta} \).
For each iteration:
Compute the gradient \( \nabla f(\boldsymbol{\beta}) \).
Update \( \boldsymbol{\beta} \) using gradient descent.
Apply the proximal operator to \( \boldsymbol{\beta} \).
Example#
Consider a simple Lasso problem with \( \mathbf{X} \), \( \mathbf{y} \), and \( \lambda \).
Initialization: Set \( \boldsymbol{\beta}_0 \) to zeros.
Gradient Descent Step: Compute \( \nabla f(\boldsymbol{\beta}) = - \mathbf{X}^T (\mathbf{y} - \mathbf{X} \boldsymbol{\beta}) \).
Update: Perform gradient descent to get \( \boldsymbol{\beta}_{k+1} \).
Proximal Operator: Apply soft-thresholding:
\[ \boldsymbol{\beta}_{k+1, j} = \text{sgn}(\boldsymbol{\beta}_{k+1, j}) \cdot \max(|\boldsymbol{\beta}_{k+1, j}| - \lambda \eta, 0) \]
Repeat: Iterate until convergence.
Summary#
Proximal Mapping helps in handling optimization problems with non-differentiable regularization terms. For Lasso regression, the proximal operator is used to solve the \( L_1 \) regularization term efficiently. By combining gradient descent with the proximal operator, we can effectively solve the Lasso regression problem and find sparse solutions for the coefficients.
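The proximal gradient procedure described above (often called ISTA) can be sketched directly in NumPy; here the step size is taken as \( 1/L \), with \( L \) the largest eigenvalue of \( \mathbf{X}^T \mathbf{X} \), and no intercept is fitted (the data mirror the small example used elsewhere in this section):

```python
import numpy as np

# ISTA for lasso: minimize 0.5*||y - X b||^2 + lam*||b||_1 (no intercept)
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, num_iters=5000):
    # Step size 1/L, with L = Lipschitz constant of the gradient
    L = np.linalg.eigvalsh(X.T @ X).max()
    eta = 1.0 / L
    b = np.zeros(X.shape[1])
    for _ in range(num_iters):
        grad = -X.T @ (y - X @ b)                      # gradient of smooth part
        b = soft_threshold(b - eta * grad, lam * eta)  # proximal step
    return b

X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])
y = np.array([2.0, 3.0, 4.0])
beta = lasso_ista(X, y, lam=1.0)
print(beta)  # a sparse, shrunken solution, approximately [0, 28/29]
```

For this data \( \mathbf{y} \) equals the second column of \( \mathbf{X} \), and the lasso solution can be verified by the optimality conditions to be exactly \( (0, 28/29) \): the first coefficient is zeroed out, the second is shrunk below the unpenalized value of 1.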
import numpy as np

# Proximal operator for the l1-norm (soft-thresholding)
def proximal_l1(x, lambda_):
    return np.sign(x) * np.maximum(np.abs(x) - lambda_, 0)

# Coordinate descent algorithm for Lasso:
# minimizes (1/(2n)) * ||y - b - X w||^2 + lambda_ * ||w||_1
def lasso_coordinate_descent(X, y, lambda_, num_iters=1000, tol=1e-6):
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for iteration in range(num_iters):
        # Update bias term b (not penalized)
        b = np.mean(y - X.dot(w))
        # Update weight vector w using coordinate descent
        w_old = w.copy()
        for j in range(d):
            # Residual excluding the contribution of the j-th feature
            residual = y - b - X.dot(w) + X[:, j] * w[j]
            rho = X[:, j].dot(residual) / n
            z = X[:, j].dot(X[:, j]) / n
            # Solve the one-dimensional subproblem in w_j exactly
            w[j] = proximal_l1(rho, lambda_) / z
        # Check for convergence
        if np.linalg.norm(w - w_old, ord=2) < tol:
            break
    return w, b

# Example data
X = np.array([[1, 2], [2, 3], [3, 4]])
y = np.array([2, 3, 4])
lambda_ = 1

# Fit the Lasso model
w, b = lasso_coordinate_descent(X, y, lambda_)
print(f"Weight vector: {w}")
print(f"Bias term: {b}")
Weight vector: [0. 0.]
Bias term: 3.0
With \( \lambda = 1 \) the \( L_1 \) penalty is strong enough to zero out both coefficients for this small dataset, and the intercept absorbs the mean of \( \mathbf{y} \); a smaller \( \lambda \) leaves non-zero weights.