NOT COMPLETE, WORKING ON ADDITIONAL OPTIMIZATION OPTIONS
Optimization Theory
Colossus offers three levels of score calculation: the score only, the score and its first derivatives, or the score and both its first and second derivatives. The second and third options correspond to the gradient descent and Newton-Raphson optimization approaches. The goal of this vignette is to discuss how these methods differ and in what circumstances each is most appropriate. In both cases the algorithm iteratively updates the parameter estimates to approach the values that optimize the score; the major difference is how much information is calculated and used at each step. The Newton-Raphson algorithm calculates the second-derivative matrix, inverts it, and solves the linear system of equations that sets the first-derivative vector to zero. This establishes both a magnitude and a direction for every step, so each iteration involves several time-intensive calculations, but the new parameter estimates are well informed. In this algorithm Colossus applies both a learning rate and a maximum allowable parameter change.
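Written out for a parameter vector $\theta$ and score $\ell(\theta)$, a single Newton-Raphson step takes the general form below. The symbols are generic notation for illustration, with $\eta$ denoting the learning rate, and are not necessarily the notation used elsewhere by Colossus; each element of the proposed change is additionally capped at the maximum allowable parameter change.

$$
\theta_{t+1} = \theta_t - \eta \left[ \nabla^2 \ell(\theta_t) \right]^{-1} \nabla \ell(\theta_t)
$$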
The alternative is a gradient descent approach. In this algorithm, only the first derivatives are calculated and used to determine the direction of greatest change in the score. This establishes a direction for the change in parameters, which is multiplied by the learning rate. Similar to the Newton-Raphson algorithm, the magnitude is normalized to the maximum allowable parameter change. Colossus uses half-steps to gradually reduce the allowable step size as the solution approaches the optimum. The gradient descent algorithm avoids the time-intensive second-derivative calculations but takes less informed steps, so each iteration runs faster but more iterations may be required.
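In the same generic notation, with $g_t = \nabla \ell(\theta_t)$, a basic gradient step moves along the direction of steepest ascent of the score, scaled by the learning rate $\eta$ and then normalized so that no element exceeds the maximum allowable parameter change:

$$
\theta_{t+1} = \theta_t + \eta \, g_t
$$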
The standard half-step framework is not likely to be sufficient for the gradient descent algorithm. Because of this, several additional optimization options have been or will be added, such as momentum, adadelta, and adam, which use information about previous gradients to inform the step size for future steps. The first method, momentum, applies a weighted sum of the current and previous steps. This speeds up steps moving toward the optimum, corrects for cases where the algorithm oversteps, and can avoid oscillation around an optimum value.
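As a sketch in the same notation, with a momentum weight $\gamma$ (a generic symbol for illustration), the momentum update accumulates a running step $v_t$ that blends the new gradient with the previous step:

$$
v_t = \gamma \, v_{t-1} + \eta \, g_t, \qquad \theta_{t+1} = \theta_t + v_t
$$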
The next method, adadelta, applies a parameter-specific learning rate by tracking the root mean square (RMS) of the gradient and of the parameter updates within a window. Instead of tracking a true window of iterations, the old RMS estimate is decayed by a weight before being added to the new estimate. The ratio of the RMS parameter update to the RMS gradient normalizes the result back to the correct units, and a small offset is used to avoid division by zero.
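The standard adadelta update, written here for maximizing the score (the usual presentation minimizes a loss, which flips the sign of the step), uses a decay weight $\rho$ and a small offset $\epsilon$:

$$
\begin{aligned}
E[g^2]_t &= \rho \, E[g^2]_{t-1} + (1 - \rho) \, g_t^2 \\
\Delta \theta_t &= \frac{\sqrt{E[\Delta\theta^2]_{t-1} + \epsilon}}{\sqrt{E[g^2]_t + \epsilon}} \; g_t \\
E[\Delta\theta^2]_t &= \rho \, E[\Delta\theta^2]_{t-1} + (1 - \rho) \, \Delta\theta_t^2 \\
\theta_{t+1} &= \theta_t + \Delta\theta_t
\end{aligned}
$$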
The final method, adam, combines the ideas behind the momentum and adadelta methods. The adam method tracks estimates of the first and second moment vectors of the gradient, which are updated with separate decay weights and bias-corrected to account for bias in early iterations. The learning rate and second moment vector provide the decaying parameter-specific step size from adadelta, and the first moment vector provides an effect similar to momentum. Combined, these have generally been able to stabilize gradient descent algorithms without incurring a significant computational cost.
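Using the standard notation from the adam algorithm (decay weights $\beta_1$ and $\beta_2$, step size $\alpha$, offset $\epsilon$), again written for maximizing the score, the update is:

$$
\begin{aligned}
m_t &= \beta_1 \, m_{t-1} + (1 - \beta_1) \, g_t \\
v_t &= \beta_2 \, v_{t-1} + (1 - \beta_2) \, g_t^2 \\
\hat{m}_t &= \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t} \\
\theta_{t+1} &= \theta_t + \alpha \, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
\end{aligned}
$$

The bias correction divides out the $(1 - \beta^t)$ factor introduced by initializing the moment vectors at zero, so early steps are not artificially small.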