1994 Training Feedforward Networks with the Marquardt Algorithm


Subject Headings: Levenberg-Marquardt Algorithm, Feedforward Neural Network Training, Backpropagation.

Notes

Cited By

Quotes

Abstract

The Marquardt algorithm for nonlinear least squares is presented and is incorporated into the backpropagation algorithm for training feedforward neural networks. The algorithm is tested on several function approximation problems, and is compared with a conjugate gradient algorithm and a variable learning rate algorithm. It is found that the Marquardt algorithm is much more efficient than either of the other techniques when the network contains no more than a few hundred weights.
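As a concrete illustration of the training scheme the abstract describes, the sketch below applies a Marquardt (damped Gauss-Newton) update to a tiny 1-5-1 network on a toy function approximation problem. The architecture, sine target, hyperparameters, and finite-difference Jacobian are illustrative assumptions only; the paper itself computes the Jacobian with a backpropagation-style recurrence and reports different experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-5-1 network for function approximation (architecture and target
# function are illustrative assumptions, not the paper's experiments).
P = np.linspace(-1.0, 1.0, 21).reshape(1, -1)        # input patterns
T = np.sin(np.pi * P)                                 # target outputs
w = rng.normal(scale=0.5, size=5 * 1 + 5 + 1 * 5 + 1) # packed weight vector

def unpack(w):
    W1 = w[0:5].reshape(5, 1);  b1 = w[5:10].reshape(5, 1)
    W2 = w[10:15].reshape(1, 5); b2 = w[15:16].reshape(1, 1)
    return W1, b1, W2, b2

def errors(w):
    # Residual vector e(w): one entry per training pattern.
    W1, b1, W2, b2 = unpack(w)
    A1 = np.tanh(W1 @ P + b1)
    return (T - (W2 @ A1 + b2)).ravel()

def jacobian_fd(w, h=1e-6):
    # The paper obtains de/dw with a backpropagation-style recurrence;
    # finite differences are used here only to keep the sketch short.
    e0 = errors(w)
    J = np.empty((e0.size, w.size))
    for j in range(w.size):
        wp = w.copy(); wp[j] += h
        J[:, j] = (errors(wp) - e0) / h
    return J

mu = 0.01
for _ in range(50):
    e = errors(w); J = jacobian_fd(w)
    # Marquardt step: dw = -(J^T J + mu I)^(-1) J^T e
    dw = np.linalg.solve(J.T @ J + mu * np.eye(w.size), -J.T @ e)
    if np.sum(errors(w + dw) ** 2) < np.sum(e ** 2):
        w, mu = w + dw, mu / 10.0   # accept step, trust the Gauss-Newton model more
    else:
        mu *= 10.0                  # reject step, back off toward gradient descent
print(np.sum(errors(w) ** 2))       # sum-squared error after training
```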

III. MARQUARDT-LEVENBERG MODIFICATION

While backpropagation is a steepest descent algorithm, the Marquardt-Levenberg algorithm [14] is an approximation to Newton's method. Suppose that we have a function [math]\displaystyle{ V(\underline{x}) }[/math] which we want to minimize with respect to the parameter vector [math]\displaystyle{ \underline{x} }[/math]; then Newton's method would be

[math]\displaystyle{ \Delta \underline{x} = -[\nabla^2 V(\underline{x})]^{-1} \nabla V(\underline{x}) \ \ (16) }[/math]

where [math]\displaystyle{ \nabla^2 V(\underline{x}) }[/math] is the Hessian matrix and [math]\displaystyle{ \nabla V(\underline{x}) }[/math] is the gradient. If we assume that [math]\displaystyle{ V(\underline{x}) }[/math] is a sum of squares function

[math]\displaystyle{ V(\underline{x}) = \sum^N_{i=1} e^2_i(\underline{x}) \ \ (17) }[/math]
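To make equations (16)-(17) concrete: when [math]\displaystyle{ V(\underline{x}) }[/math] is a sum of squares with residual vector [math]\displaystyle{ \underline{e} }[/math] and Jacobian [math]\displaystyle{ J }[/math], the gradient is [math]\displaystyle{ 2J^T\underline{e} }[/math] and the Hessian is approximately [math]\displaystyle{ 2J^TJ }[/math], so (the factors of 2 cancelling) the Newton step (16) reduces to the damped Gauss-Newton step [math]\displaystyle{ \Delta \underline{x} = -(J^TJ + \mu I)^{-1}J^T\underline{e} }[/math], where the [math]\displaystyle{ \mu I }[/math] term is the Marquardt modification. A minimal numerical sketch follows; the specific residual functions, the [math]\displaystyle{ \mu }[/math] schedule constants, and the NumPy implementation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def residuals(x):
    # Illustrative residual vector e(x); any smooth e: R^n -> R^N works.
    return np.array([x[0] - 1.0,
                     x[1] - 2.0,
                     x[0] * x[1] - 2.0])

def jacobian(x):
    # Analytic Jacobian J[i, j] = d e_i / d x_j for the residuals above.
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [x[1], x[0]]])

def marquardt_step(x, mu):
    # Damped Gauss-Newton step: dx = -(J^T J + mu I)^(-1) J^T e.
    # This approximates the Newton step (16) for V(x) = sum_i e_i(x)^2,
    # since grad V = 2 J^T e and the Hessian is ~ 2 J^T J near a solution.
    e = residuals(x)
    J = jacobian(x)
    A = J.T @ J + mu * np.eye(len(x))
    return np.linalg.solve(A, -J.T @ e)

x = np.array([0.5, 0.5])
mu = 0.01
for _ in range(10):
    x_new = x + marquardt_step(x, mu)
    # Marquardt's rule: shrink mu after a successful step, grow it otherwise.
    if np.sum(residuals(x_new) ** 2) < np.sum(residuals(x) ** 2):
        x, mu = x_new, mu / 10.0
    else:
        mu *= 10.0
print(x)  # converges toward (1, 2), the least-squares minimum
```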

References

Martin T. Hagan, and Mohammad B. Menhaj (1994). "Training Feedforward Networks with the Marquardt Algorithm." IEEE Transactions on Neural Networks, 5(6). doi:10.1109/72.329697