Augmented Lagrangian Algorithm


An Augmented Lagrangian Algorithm is a Lagrangian optimization algorithm that replaces a constrained optimization problem with a sequence of unconstrained problems, in which the objective function is augmented with a Lagrange multiplier term and a quadratic penalty term on the constraint violations.
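Concretely, for an equality-constrained problem of minimizing [math]\displaystyle{ f(x) }[/math] subject to [math]\displaystyle{ c(x)=0 }[/math], the augmented objective commonly takes the form (the symbol [math]\displaystyle{ f }[/math] for the objective is an illustrative choice here; the quoted formulas below use [math]\displaystyle{ g_0(x) }[/math] and [math]\displaystyle{ H_0(x) }[/math] for its gradient and Hessian):

[math]\displaystyle{ L(x,y,\rho) = f(x) - y^T c(x) + \tfrac{\rho}{2}\, c(x)^T c(x) }[/math]

Differentiating with respect to [math]\displaystyle{ x }[/math] gives [math]\displaystyle{ g_0(x) - J(x)^T\bigl(y - \rho\, c(x)\bigr) }[/math], which is the gradient formula quoted below with [math]\displaystyle{ \hat{y} = y - \rho\, c(x) }[/math].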



References


2015

  1. M.R. Hestenes, "Multiplier and gradient methods", Journal of Optimization Theory and Applications, 4, 1969, pp. 303–320.
  2. M.J.D. Powell, "A method for nonlinear constraints in minimization problems", in Optimization, ed. by R. Fletcher, Academic Press, New York, NY, 1969, pp. 283–298.
  3. Dimitri P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, 1996 (first published 1982).
  4. , chapter 17.


[math]\displaystyle{ \begin{align} g_L(x,y,\rho) &= g_0(x) - J(x)^T \hat{y}, \\ H_L(x,y,\rho) &= H_0(x) - {\textstyle\sum} \hat{y}_i H_i(x) + \rho J(x)^T J(x), \\ \hat{y} &\equiv y - \rho c(x). \end{align} }[/math]

    • The augmented Lagrangian method for solving problem NEC proceeds by choosing [math]\displaystyle{ y }[/math] and [math]\displaystyle{ \rho }[/math] judiciously and then minimizing [math]\displaystyle{ L(x,y,\rho) }[/math] as a function of [math]\displaystyle{ x }[/math]. The resulting [math]\displaystyle{ x }[/math] is used to choose a new [math]\displaystyle{ y }[/math] and [math]\displaystyle{ \rho }[/math], and the process repeats. The auxiliary vector [math]\displaystyle{ \hat{y} }[/math] simplifies the above notation and proves to be useful in its own right.
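The loop described above can be sketched in Python. This is a minimal illustration, not the article's method: the toy problem instance, the fixed penalty parameter, and the use of `scipy.optimize.minimize` as the inner unconstrained solver are all assumptions made for the example. The multiplier update follows the [math]\displaystyle{ \hat{y} = y - \rho\, c(x) }[/math] rule quoted above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example problem (not from the article):
#   minimize f(x) = x1^2 + x2^2   subject to   c(x) = x1 + x2 - 1 = 0
# The constrained minimizer is x = (0.5, 0.5) with multiplier y = 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
c = lambda x: np.array([x[0] + x[1] - 1.0])

def augmented_lagrangian(f, c, x0, y0, rho=10.0, iters=20, tol=1e-8):
    """Basic method-of-multipliers loop with a fixed penalty parameter:
    minimize L(x, y, rho) over x, then update y <- y - rho * c(x)."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    for _ in range(iters):
        # Augmented Lagrangian: f(x) - y^T c(x) + (rho/2) ||c(x)||^2
        L = lambda x: f(x) - y @ c(x) + 0.5 * rho * c(x) @ c(x)
        x = minimize(L, x).x          # inner unconstrained minimization
        y = y - rho * c(x)            # first-order multiplier update (y-hat)
        if np.linalg.norm(c(x)) < tol:
            break                     # constraints satisfied to tolerance
    return x, y

x_star, y_star = augmented_lagrangian(f, c, x0=[0.0, 0.0], y0=[0.0])
```

With a fixed [math]\displaystyle{ \rho }[/math] the multiplier error contracts by roughly a factor of [math]\displaystyle{ 1/(1+\rho) }[/math] per outer iteration on this quadratic example; practical codes instead increase [math]\displaystyle{ \rho }[/math] adaptively when constraint violations stall.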

