Lagrange Multipliers Optimization Algorithm


A Lagrange Multipliers Optimization Algorithm is a constrained mathematical optimization algorithm that requires the objective function and the constraint function to have continuous first partial derivatives.
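For a concrete illustration (an assumed toy problem, not taken from the source): to maximize f(x, y) = xy subject to g(x, y) = x + y − 1 = 0, one forms the Lagrangian [math]\displaystyle{ \mathcal{L}(x,y,\lambda) = xy + \lambda(x + y - 1) }[/math] and sets its first partial derivatives to zero, giving [math]\displaystyle{ y + \lambda = 0,\; x + \lambda = 0,\; x + y - 1 = 0 }[/math]. The only stationary point is [math]\displaystyle{ x = y = \tfrac{1}{2},\ \lambda = -\tfrac{1}{2} }[/math], where the constrained maximum f = 1/4 is attained.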



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Lagrange_multiplier Retrieved:2015-11-8.
    • In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange [1]) is a strategy for finding the local maxima and minima of a function subject to equality constraints. For instance, consider the optimization problem: maximize f(x, y) subject to g(x, y) = 0. We need both f and g to have continuous first partial derivatives. We introduce a new variable (λ) called a Lagrange multiplier and study the Lagrange function (or Lagrangian) defined by [math]\displaystyle{ \mathcal{L}(x,y,\lambda) = f(x,y) + \lambda \cdot g(x,y), }[/math] where the λ term may be either added or subtracted. If f(x0, y0) is a maximum of f(x, y) for the original constrained problem, then there exists λ0 such that (x0, y0, λ0) is a stationary point for the Lagrange function (stationary points are those points where the partial derivatives of [math]\displaystyle{ \mathcal{L} }[/math] are zero). However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems. [2] Sufficient conditions for a minimum or maximum also exist.
  1. Lagrange, J.-L. Mécanique Analytique, sect. IV, 2 vols., Paris, 1811. https://archive.org/details/mcaniqueanalyt01lagr
  2.
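The following is a minimal symbolic sketch of the procedure quoted above, assuming the toy objective f(x, y) = x + y and the constraint x² + y² = 1 (both illustrative choices, not from the source). It uses SymPy to form the Lagrangian and solve the stationarity system ∇L = 0; since stationarity is only a necessary condition, the candidates are compared afterwards.

  # A minimal sketch of the method of Lagrange multipliers (assumed toy problem):
  # maximize f(x, y) = x + y subject to g(x, y) = x**2 + y**2 - 1 = 0.
  import sympy as sp

  x, y, lam = sp.symbols('x y lambda', real=True)

  f = x + y                      # objective function (illustrative choice)
  g = x**2 + y**2 - 1            # equality constraint g(x, y) = 0 (illustrative choice)

  # Lagrange function L(x, y, lambda) = f + lambda * g
  L = f + lam * g

  # Stationary points: set all first partial derivatives of L to zero.
  stationary_eqs = [sp.diff(L, v) for v in (x, y, lam)]
  candidates = sp.solve(stationary_eqs, (x, y, lam), dict=True)

  # Stationarity is only necessary, so evaluate f at each candidate point.
  for sol in candidates:
      print(sol, 'f =', f.subs(sol))

Running this yields the two stationary points x = y = ±√2/2 with λ = ∓√2/2; the constrained maximum of f is √2 at x = y = √2/2.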