Conjugate Gradient Method Algorithm

A [[Conjugate Gradient Method Algorithm]] is a [[numerical analysis algorithm]] that iteratively solves a [[system of linear equations]] whose matrix is [[Symmetric Positive-Definite Matrix|symmetric positive-definite]] (equivalently, minimizes the associated quadratic function) by searching along mutually conjugate directions derived from the [[function gradient]].
* <B>Context:</B>
** It requires the system matrix to be a [[Symmetric Positive-Definite Matrix]] (its convergence guarantees hold only for such systems).
* <B>Counter-Example(s):</B>
** [[Method of Gradient Descent]].
* <B>See:</B> [[System of Linear Equations]], [[Symmetric Matrix]], [[Positive-Definite Matrix]], [[Iterative Method]], [[Sparse Matrix]], [[Cholesky Decomposition]], [[Partial Differential Equation]], [[Mathematical Optimization]].
----
==References==
 
=== 2015 ===
* (Wikipedia, 2015) &rArr; http://en.wikipedia.org/wiki/conjugate_gradient_method Retrieved:2015-1-19.
** In [[mathematics]], the '''conjugate gradient method''' is an [[algorithm]] for the [[numerical solution]] of particular [[system of linear equations|systems of linear equations]], namely those whose matrix is [[symmetric matrix|symmetric]] and [[positive-definite matrix|positive-definite]]. The conjugate gradient method is often implemented as an [[iterative method|iterative algorithm]], applicable to [[sparse matrix|sparse]] systems that are too large to be handled by a direct implementation or other direct methods such as the [[Cholesky decomposition]]. Large sparse systems often arise when numerically solving [[partial differential equation]]s or optimization problems. <P> The conjugate gradient method can also be used to solve unconstrained [[Mathematical optimization|optimization]] problems such as [[energy minimization]]. It was mainly developed by [[Magnus Hestenes]] and [[Eduard Stiefel]].  <P> The [[biconjugate gradient method]] provides a generalization to non-symmetric matrices. Various [[nonlinear conjugate gradient method]]s seek minima of nonlinear equations.
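The quoted passage describes linear CG as an iterative solver for symmetric positive-definite systems. A minimal runnable sketch of that iteration follows, written in Python with NumPy; the function name <code>conjugate_gradient</code> and the small test system are illustrative assumptions, not part of the quoted source:
<syntaxhighlight lang="python">
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive-definite A (linear CG sketch)."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x            # residual = negative gradient of f(x) = 0.5 x'Ax - b'x
    p = r.copy()             # first search direction: steepest descent
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # keep the new direction A-conjugate to the old ones
        rs_old = rs_new
    return x

# Illustrative SPD system (values chosen arbitrarily for the example).
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # approx. [0.0909, 0.6364]
</syntaxhighlight>
Note that each iteration costs a single matrix-vector product, which is why the method suits the large sparse systems mentioned above: <code>A</code> only needs to support the product <code>A @ p</code>, not factorization.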
 
=== 2014 ===
* Black, Noel; Moore, Shirley; and Weisstein, Eric W. "[http://mathworld.wolfram.com/ConjugateGradientMethod.html Conjugate Gradient Method]." From MathWorld. Retrieved:2014-5-12.
** QUOTE: The [[conjugate gradient method]] is an [[algorithm for finding the nearest local minimum of a function of n variables]] which presupposes that the [[gradient of the function]] [[can be computed]]. It uses [[conjugate direction]]s instead of the [[local gradient]] for [[going downhill]]. If the vicinity of the minimum has the shape of a long, narrow valley, the minimum is reached in far fewer steps than would be the case using the [[method of steepest descent]]. <P> For a discussion of the [[conjugate gradient method]] on [[vector computer|vector]] and [[shared memory computer]]s, see [[Dongarra et al. (1991)]]. For discussions of the method for more general parallel architectures, see [[Demmel et al. (1993)]] and [[Ortega (1988)]] and the references therein.
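The "long, narrow valley" remark above can be made concrete with a small experiment (an illustrative sketch, not taken from MathWorld): on an ill-conditioned two-variable quadratic, steepest descent zigzags across the valley while conjugate directions finish in at most ''n'' = 2 steps. The matrix, starting point, and tolerance below are arbitrary choices.
<syntaxhighlight lang="python">
import numpy as np

A = np.diag([1.0, 100.0])          # illustrative ill-conditioned quadratic: a narrow valley
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b         # gradient of f(x) = 0.5 x'Ax - b'x

def steepest_descent(x, tol=1e-8, max_iter=100_000):
    """Follow the local (negative) gradient with an exact line search."""
    k = 0
    g = grad(x)
    while np.linalg.norm(g) > tol and k < max_iter:
        t = (g @ g) / (g @ (A @ g))    # exact minimizer of f along -g
        x = x - t * g
        g = grad(x)
        k += 1
    return k

def conjugate_gradient(x, tol=1e-8):
    """Use conjugate directions; at most n steps on an n-variable quadratic."""
    r = -grad(x)
    d = r.copy()
    k = 0
    while np.linalg.norm(r) > tol:
        t = (r @ r) / (d @ (A @ d))
        x = x + t * d
        r_new = r - t * (A @ d)
        d = r_new + ((r_new @ r_new) / (r @ r)) * d   # conjugate, not steepest, direction
        r = r_new
        k += 1
    return k

x0 = np.array([5.0, 5.0])          # arbitrary starting point
print("steepest descent iterations: ", steepest_descent(x0.copy()))
print("conjugate gradient iterations:", conjugate_gradient(x0.copy()))
</syntaxhighlight>
Running this should report on the order of a thousand steepest-descent iterations versus two conjugate gradient iterations, matching the quoted claim that the minimum of a narrow valley is reached in far fewer steps.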
 
----
[[Category:Concept]]
__NOTOC__
 