Optimization Algorithms for Gradient Descent


Gradient Descent is one of the most popular and most widely used (one can argue it is the de facto) optimization algorithms in machine learning.
In the machine learning setting, gradient descent tries to minimize the objective function (more commonly known as the loss function) J(θ), parameterized by the model parameters θ, by updating the parameters in the direction opposite to the gradient of the objective function. The learning rate η determines the size of the steps we take to reach a (local) minimum.
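
To make this concrete, here is a minimal NumPy sketch of a single gradient descent step on a toy quadratic loss; the helper name gradient_descent_step and the example loss are illustrative assumptions, not anything prescribed by this article.

```python
import numpy as np

def gradient_descent_step(theta, grad_J, lr=0.01):
    """One gradient descent step: move against the gradient, scaled by the learning rate."""
    return theta - lr * grad_J(theta)

# Toy example: J(theta) = ||theta||^2, whose gradient is 2 * theta.
theta = np.array([3.0, -2.0])
for _ in range(100):
    theta = gradient_descent_step(theta, lambda t: 2 * t, lr=0.1)
print(theta)  # approaches the minimum at [0, 0]
```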

Challenges with Gradient Descent

  • Determining an appropriate learning rate can be difficult:
      • Too high – the algorithm might not converge to any stable value (of the loss / model parameter weights)
      • Too low – the algorithm might take too long to converge
  • The learning scheme doesn’t allow for dynamic learning:
      • The learning rate cannot be adjusted during training based on changes in the objective function. Methods that do so (learning rate schedules) require a predefined set of learning rate values to be used at each step.
  • The learning rate is static across all parameters, i.e., every parameter update takes the same step size. This is particularly limiting because the model parameters are associated with features of different frequencies and ranges, so we would want the updates to happen at different speeds for different parameters.

Some off-the-shelf remedies

Momentum

The momentum algorithm particularly addresses the static learning rate problem of Gradient Descent: it helps accelerate GD in the relevant direction and dampens oscillations.

[Momentum graphic (source: WordPress)]

Momentum works by adding a fraction γ of the update vector of the past time step to the current update vector:

$$v_t = \gamma v_{t-1} + \eta \nabla_{\theta} J(\theta)$$

$$\theta = \theta - v_t$$

The momentum term γ is usually set to 0.9. The momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.
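
As a rough sketch of how this update looks in code (assuming the same toy quadratic loss as above; the function name momentum_step is illustrative), the velocity vector accumulates a decaying sum of past gradients and the parameters are moved by that velocity:

```python
import numpy as np

def momentum_step(theta, velocity, grad, lr=0.01, gamma=0.9):
    """One momentum update: v_t = gamma * v_{t-1} + eta * g_t, then theta = theta - v_t."""
    velocity = gamma * velocity + lr * grad
    theta = theta - velocity
    return theta, velocity

# Toy usage on J(theta) = ||theta||^2 (gradient: 2 * theta).
theta = np.array([3.0, -2.0])
velocity = np.zeros_like(theta)
for _ in range(200):
    theta, velocity = momentum_step(theta, velocity, 2 * theta, lr=0.05)
```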

RMSprop

RMSprop is an unpublished, adaptive learning rate method proposed by Geoff Hinton in Lecture 6e of his Coursera Class. RMSprop divides the learning rate for a weight by a running average of the magnitudes of recent gradients for that weight.

$$G_j := 0.9\, G_j + 0.1 \left( \frac{\partial J}{\partial w_j} \right)^2$$

$$w_j := w_j - \frac{\eta}{\sqrt{G_j + \epsilon}}\, \frac{\partial J}{\partial w_j}$$

The parameter ϵ is a very small constant used to avoid division-by-zero errors; it is usually set to $10^{-8}$.
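
A minimal sketch of one RMSprop step, following the per-parameter equations above (the decay constant 0.9 follows Hinton's lecture; the function name rmsprop_step and the toy usage are illustrative assumptions):

```python
import numpy as np

def rmsprop_step(theta, avg_sq_grad, grad, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSprop update using a running average of squared gradients."""
    # G := decay * G + (1 - decay) * g^2
    avg_sq_grad = decay * avg_sq_grad + (1 - decay) * grad ** 2
    # Per-parameter step: divide the learning rate by sqrt(G + eps).
    theta = theta - lr * grad / np.sqrt(avg_sq_grad + eps)
    return theta, avg_sq_grad

# Toy usage on J(theta) = ||theta||^2 (gradient: 2 * theta).
theta = np.array([3.0, -2.0])
avg_sq_grad = np.zeros_like(theta)
for _ in range(500):
    theta, avg_sq_grad = rmsprop_step(theta, avg_sq_grad, 2 * theta, lr=0.01)
```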

Adam

Adaptive Moment Estimation (Adam) computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients v_t like RMSprop, Adam also keeps an exponentially decaying average of past gradients m_t, similar to momentum. Whereas momentum can be seen as a ball running down a slope, Adam behaves like a heavy ball with friction, which thus prefers flat minima in the error surface. Writing g_t for the gradient at time step t, we compute the decaying averages of past and past squared gradients m_t and v_t respectively as follows:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t$$

$$v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2$$

m_t and v_t are estimates of the first moment (the mean) and the second moment (the uncentered variance) of the gradients respectively, hence the name of the method. As m_t and v_t are initialized as vectors of 0's, the authors of Adam observe that they are biased towards zero, especially during the initial time steps, and especially when the decay rates are small (i.e. β1 and β2 are close to 1). These biases are counteracted by computing bias-corrected first and second moment estimates:

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}$$

$$\hat{v}_t = \frac{v_t}{1 - \beta_2^t}$$

The Adam update rule is as follows:

$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t$$

The parameter ϵ is again a very small constant used to avoid division-by-zero errors. The algorithm usually takes the default values of 0.9 for β1, 0.999 for β2, and $10^{-8}$ for ϵ.
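
Putting the pieces together, here is a minimal sketch of a single Adam step with the default hyperparameters mentioned above (the function name adam_step and the toy usage are illustrative assumptions):

```python
import numpy as np

def adam_step(theta, m, v, grad, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based time step used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad          # decaying average of past gradients
    v = beta2 * v + (1 - beta2) * grad ** 2     # decaying average of past squared gradients
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment estimate
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment estimate
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage on J(theta) = ||theta||^2 (gradient: 2 * theta).
theta = np.array([3.0, -2.0])
m, v = np.zeros_like(theta), np.zeros_like(theta)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, m, v, 2 * theta, t, lr=0.05)
```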

Other popular algorithms

  • Adagrad
  • Nesterov Accelerated Gradient
  • Adadelta
  • AdaMax
  • AMSGrad
  • Nadam
