Conjugate Gradient Optimization Algorithm

A [[Conjugate Gradient Optimization Algorithm]] is a [[batch function optimization algorithm]] that minimizes a function by performing successive [[line search]]es along search directions that are mutually conjugate with respect to the governing [[symmetric matrix|symmetric]] [[positive-definite matrix|positive-definite]] matrix (<math>p_i^\top A p_j = 0</math> for <math>i \neq j</math>), so that progress made along earlier directions is not undone by later steps.

== References ==

===2012===
* (Wikipedia, 2012) &rArr; http://en.wikipedia.org/wiki/Conjugate_gradient_method
** QUOTE: In [[mathematics]], the '''conjugate gradient method''' is an [[algorithm]] for the [[numerical solution]] of particular [[system of linear equations|systems of linear equations]], namely those whose matrix is [[symmetric matrix|symmetric]] and [[positive-definite matrix|positive-definite]]. The conjugate gradient method is an [[iterative method]], so it can be applied to [[sparse matrix|sparse]] systems that are too large to be handled by direct methods such as the [[Cholesky decomposition]]. Such systems often arise when numerically solving [[partial differential equation]]s.  <P>  The conjugate gradient method can also be used to solve unconstrained [[Mathematical optimization|optimization]] problems such as [[energy minimization]].  It was developed by [[Magnus Hestenes]] and [[Eduard Stiefel]].<ref>{{cite web|last=Straeter|first=T. A.|title=On the Extension of the Davidon-Broyden Class of Rank One, Quasi-Newton Minimization Methods to an Infinite Dimensional Hilbert Space with Applications to Optimal Control Problems|url=http://hdl.handle.net/2060/19710026200|work=NASA Technical Reports Server|publisher=NASA|accessdate=10 October 2011}}</ref>  <P> The [[biconjugate gradient method]] provides a generalization to non-symmetric matrices. Various [[nonlinear conjugate gradient method]]s seek minima of nonlinear equations.
<references/>
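The linear-algebra view in the quote above maps directly onto a short routine. Below is a minimal sketch of the conjugate gradient iteration for a symmetric positive-definite system <math>Ax = b</math>; the function name <code>cg_solve</code>, its defaults, and the worked example are illustrative assumptions, not taken from the cited source. Each iteration costs a single matrix-vector product, which is why the method suits the large sparse systems mentioned in the quote.
<pre>
# Minimal conjugate gradient sketch for a symmetric positive-definite
# system Ax = b (illustrative; names and defaults are assumptions).
import numpy as np

def cg_solve(A, b, x0=None, tol=1e-10, max_iter=None):
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x        # residual = negative gradient of 1/2 x'Ax - b'x
    p = r.copy()         # first direction: steepest descent
    rs_old = r @ r
    for _ in range(max_iter or b.shape[0]):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact line-search step along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:      # residual small enough: converged
            break
        p = r + (rs_new / rs_old) * p  # keep new direction A-conjugate
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(cg_solve(A, b))  # approx. [0.0909, 0.6364]
</pre>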
===1994===
* ([[1994_TrainingFeedforwardNetworkswith|Hagan & Menhaj, 1994]]) &rArr; [[Martin T. Hagan]], and [[Mohammad B. Menhaj]]. ([[1994]]). "[http://www.das.ufsc.br/~marcelo/pg-ic/Marquardt%20algorithm%20for%20MLP.pdf Training Feedforward Networks with the Marquardt Algorithm]." In: IEEE Transactions on Neural Networks, 5(6). [http://dx.doi.org/10.1109/72.329697 doi:10.1109/72.329697]
** QUOTE: The [[Marquardt algorithm]] for [[nonlinear least square]]s is presented and is incorporated into the [[backpropagation algorithm]] for [[training feedforward neural networks]]. The [[algorithm]] is tested on several [[function approximation problem]]s, and is compared with a [[conjugate gradient algorithm]] and a [[variable learning rate algorithm]].
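In the comparison above, conjugate gradient serves as a nonlinear function optimizer rather than a linear solver. As a hedged illustration of that nonlinear usage (SciPy and the Rosenbrock test function are assumptions here, not part of the cited paper), an off-the-shelf nonlinear CG routine can be invoked as follows:
<pre>
# Nonlinear conjugate gradient on a smooth unconstrained test problem.
# SciPy's method='CG' is a nonlinear (Polak-Ribiere-type) conjugate
# gradient; this snippet is illustrative, not from the cited paper.
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])
result = minimize(rosen, x0, method='CG', jac=rosen_der)
print(result.success, result.x)  # converges near the minimizer [1.0, 1.0]
</pre>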


----
__NOTOC__
[[Category:Concept]]
