Limited-Memory Quasi-Newton Algorithm


A Limited-Memory Quasi-Newton Algorithm is a quasi-Newton algorithm that uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) Update to approximate the inverse Hessian matrix.



    • QUOTE: The limited-memory BFGS (L-BFGS or LM-BFGS) algorithm is a member of the broad family of quasi-Newton optimization methods that uses a limited-memory variation of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update to approximate the inverse Hessian matrix (denoted by [math]H_k[/math]). Unlike the original BFGS method, which stores a dense [math] n \times n [/math] approximation, L-BFGS stores only a few vectors that represent the approximation implicitly. Due to its moderate memory requirement, the L-BFGS method is particularly well suited for optimization problems with a large number of variables. L-BFGS never explicitly forms or stores [math]H_k[/math]. Instead, it maintains a history of the past [math]m\,\![/math] updates of the position [math]x\,\![/math] and gradient [math]\nabla f(x)[/math], where generally the history size [math]m\,\![/math] can be small, often less than 10. These updates are used to implicitly perform operations requiring the [math]H_k[/math]-vector product. Strictly speaking, a straightforward BFGS implementation at the [math]i\,\![/math]-th iteration would represent the inverse Hessian approximation as informed by all updates on [math]0 \ldots i-1[/math], but L-BFGS does quite well using updates from only the most recent iterations [math]i-m \ldots i-1[/math].
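The implicit [math]H_k[/math]-vector product described in the quote is usually computed with the standard two-loop recursion over the stored [math](s_i, y_i)[/math] pairs. The following is a minimal NumPy sketch of that recursion; the function name, the fixed step size, and the toy test problem are illustrative assumptions, not part of the source (a real implementation would pair this with a Wolfe line search).

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: implicitly compute H_k @ grad from the m most
    recent curvature pairs, where s_i = x_{i+1} - x_i and
    y_i = grad f(x_{i+1}) - grad f(x_i).  The dense n-by-n matrix H_k is
    never formed or stored."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: walk the history from the newest pair to the oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q = q - alpha * y
    # Scale by gamma_k = s^T y / y^T y, a common choice of initial H_k^0.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: walk the history from the oldest pair to the newest.
    for s, y, rho, alpha in zip(s_list, y_list, rhos, reversed(alphas)):
        beta = rho * np.dot(y, r)
        r = r + (alpha - beta) * s
    return r  # approximates H_k @ grad; the search direction is -r

# Minimal driver on an illustrative problem: minimize f(x) = ||x - 3||^2,
# whose gradient is 2*(x - 3).  A fixed step of 0.1 stands in for the
# line search that a real implementation would use.
grad_f = lambda x: 2.0 * (x - 3.0)
x, m = np.zeros(4), 5
s_list, y_list = [], []
for _ in range(100):
    g = grad_f(x)
    d = -lbfgs_direction(g, s_list, y_list)
    x_new = x + 0.1 * d
    s_list.append(x_new - x)
    y_list.append(grad_f(x_new) - g)
    s_list, y_list = s_list[-m:], y_list[-m:]  # keep only the last m pairs
    x = x_new
print(x)  # close to [3, 3, 3, 3]
```

Because only the last [math]m[/math] pairs are kept, each direction computation costs [math]O(mn)[/math] time and memory instead of the [math]O(n^2)[/math] of dense BFGS.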