Limited-Memory Quasi-Newton Algorithm


A Limited-Memory Quasi-Newton Algorithm is a quasi-Newton algorithm that uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix.
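
In practice one rarely implements the method from scratch; mature solvers expose it directly. The sketch below is a minimal, illustrative usage example assuming SciPy's L-BFGS-B implementation, with the Rosenbrock function as a stand-in objective; neither choice comes from the source.

    import numpy as np
    from scipy.optimize import minimize

    # Rosenbrock function, a standard test objective (illustrative choice).
    def rosenbrock(x):
        return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

    # method="L-BFGS-B" selects SciPy's limited-memory BFGS solver;
    # "maxcor" is the number of stored update pairs (the history size m).
    result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
                      method="L-BFGS-B", options={"maxcor": 10})
    print(result.x)  # converges to the minimizer near [1.0, 1.0]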



References

2012

  • http://en.wikipedia.org/wiki/L-BFGS
    • QUOTE: The limited-memory BFGS (L-BFGS or LM-BFGS) algorithm is a member of the broad family of quasi-Newton optimization methods that uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix (denoted by $H_k$). Unlike the original BFGS method, which stores a dense $n \times n$ approximation, L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its moderate memory requirement, the L-BFGS method is particularly well suited for optimization problems with a large number of variables. L-BFGS never explicitly forms or stores $H_k$. Instead, it maintains a history of the past $m$ updates of the position $x$ and gradient $\nabla f(x)$, where the history size $m$ can generally be small, often less than 10. These updates are used to implicitly perform operations requiring the $H_k$-vector product. Whereas a straightforward BFGS implementation at the $i$-th iteration would, strictly, represent the inverse Hessian approximation as informed by all updates on $0 \ldots i-1$, L-BFGS does quite well using updates from only the most recent iterations $i-m \ldots i-1$.
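
The scheme described in the quote is typically realized with the standard two-loop recursion (as presented by Nocedal and Wright), which applies the implicit inverse Hessian approximation to a vector using only the stored pairs $s_i = x_{i+1} - x_i$ and $y_i = \nabla f(x_{i+1}) - \nabla f(x_i)$. Below is a minimal NumPy sketch under that assumption; the function and variable names are illustrative, not from the source.

    import numpy as np

    def lbfgs_direction(grad, s_hist, y_hist):
        """Two-loop recursion: returns r = H_k @ grad using only the
        stored pairs (s_i, y_i), ordered oldest first. The quasi-Newton
        search direction is then -r."""
        rhos = [1.0 / np.dot(y, s) for s, y in zip(s_hist, y_hist)]
        q = grad.copy()
        alphas = []
        # First loop: walk the history from newest to oldest.
        for s, y, rho in reversed(list(zip(s_hist, y_hist, rhos))):
            alpha = rho * np.dot(s, q)
            q = q - alpha * y
            alphas.append(alpha)
        alphas.reverse()  # re-align with the oldest-first history order
        # Initial approximation H_k^0 = gamma * I, with the common scaling
        # gamma = s^T y / y^T y taken from the most recent pair.
        if s_hist:
            gamma = np.dot(s_hist[-1], y_hist[-1]) / np.dot(y_hist[-1], y_hist[-1])
        else:
            gamma = 1.0
        r = gamma * q
        # Second loop: walk the history from oldest to newest.
        for s, y, rho, alpha in zip(s_hist, y_hist, rhos, alphas):
            beta = rho * np.dot(y, r)
            r = r + (alpha - beta) * s
        return r

Because only the most recent $m$ pairs are kept, each application costs $O(mn)$ time and memory, versus the $O(n^2)$ of a dense BFGS inverse Hessian approximation.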
