Gauss-Newton Optimization Algorithm


A Gauss-Newton Optimization Algorithm is an optimization algorithm that minimizes a sum of squared residuals (a non-linear least-squares problem) using only first derivatives of the residuals.
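The following is a minimal sketch of the iteration, assuming an exponential-model fit as the example problem (the function names and data below are illustrative, not from the source): at each step [math]\displaystyle{ \delta }[/math] is obtained from the normal equations [math]\displaystyle{ (J^\top J)\,\delta = -J^\top r }[/math], where [math]\displaystyle{ J }[/math] is the Jacobian of the residual vector [math]\displaystyle{ r }[/math].

```python
import numpy as np

def gauss_newton(residuals, jacobian, beta0, max_iter=50, tol=1e-10):
    """Minimize S(beta) = sum_i r_i(beta)^2 by Gauss-Newton iterations."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        r = residuals(beta)   # residual vector r(beta)
        J = jacobian(beta)    # Jacobian dr_i / dbeta_j
        # Solve the normal equations (J^T J) delta = -J^T r for the step.
        delta = np.linalg.solve(J.T @ J, -J.T @ r)
        beta = beta + delta
        if np.linalg.norm(delta) < tol:
            break
    return beta

# Illustrative example: fit y = a * exp(b * x) to synthetic data.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)

def residuals(beta):
    a, b = beta
    return a * np.exp(b * x) - y

def jacobian(beta):
    a, b = beta
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

print(gauss_newton(residuals, jacobian, beta0=[1.0, 1.0]))  # ~ [2.0, 1.5]
```

Note that only the residual Jacobian is ever formed: [math]\displaystyle{ J^\top J }[/math] serves as an approximation to (half) the Hessian of S, obtained by dropping the term involving second derivatives of the residuals.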



References

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm#Related_algorithms Retrieved:2018-8-19.
    • In a quasi-Newton method, such as that due to Davidon, Fletcher and Powell or Broyden–Fletcher–Goldfarb–Shanno (BFGS method), an estimate of the full Hessian [math]\displaystyle{ \frac{\partial^2 S}{\partial \beta_j \partial\beta_k} }[/math] is built up numerically using only first derivatives [math]\displaystyle{ \frac{\partial r_i}{\partial\beta_j} }[/math], so that after n refinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. fit only nonlinear least-squares problems.
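As an illustration of this contrast, the sketch below minimizes a general real-valued function (the standard Rosenbrock test function, chosen here for illustration; SciPy is an assumed dependency not mentioned in the source) with the BFGS method, supplying only the gradient; the Hessian estimate is built up internally from successive gradient evaluations.

```python
import numpy as np
from scipy.optimize import minimize

# A general real-valued objective (Rosenbrock), which quasi-Newton
# methods such as BFGS can minimize from first derivatives alone.
def f(z):
    x, y = z
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad_f(z):
    x, y = z
    return np.array([-2 * (1 - x) - 400 * x * (y - x**2),
                     200 * (y - x**2)])

# BFGS accumulates a Hessian approximation from gradient differences.
result = minimize(f, x0=[-1.2, 1.0], jac=grad_f, method="BFGS")
print(result.x)  # approximately [1.0, 1.0]
```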

      Another method for solving minimization problems using only first derivatives is gradient descent. However, gradient descent does not take second derivatives into account, even approximately. Consequently, it is highly inefficient for many functions, especially when the parameters interact strongly.
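To make that inefficiency concrete, here is a minimal gradient-descent sketch on the same kind of exponential-fitting problem as above (the step size, data, and names are illustrative assumptions); because the two parameters interact strongly, the objective is ill-conditioned and gradient descent typically needs orders of magnitude more iterations than a Gauss–Newton solver, which converges in a handful of steps.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)

def grad_S(beta):
    """Gradient of S(beta) = sum_i (a*exp(b*x_i) - y_i)^2, i.e. 2 J^T r."""
    a, b = beta
    r = a * np.exp(b * x) - y
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
    return 2.0 * J.T @ r

beta = np.array([1.0, 1.0])
lr = 1e-4                      # step size; too large a value diverges
for i in range(200_000):
    g = grad_S(beta)
    if np.linalg.norm(g) < 1e-8:
        break
    beta = beta - lr * g
# Typically takes many thousands of iterations on this problem,
# versus only a few for the Gauss-Newton sketch above.
print(beta, i + 1)
```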