Bayesian Optimization Algorithm: Difference between revisions
Revision as of 01:24, 12 September 2019
A Bayesian Optimization Algorithm is an optimization algorithm for expensive-to-evaluate cost functions.
References
2010
- (Brochu et al., 2010) ⇒ Eric Brochu, Vlad M. Cora, and Nando De Freitas. (2010). “A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning." arXiv preprint arXiv:1012.2599
- ABSTRACT: We present a tutorial on Bayesian optimization, a method of finding the maximum of expensive cost functions. Bayesian optimization employs the Bayesian technique of setting a prior over the objective function and combining it with evidence to get a posterior function. This permits a utility-based selection of the next observation to make on the objective function, which must take into account both exploration (sampling from areas of high uncertainty) and exploitation (sampling areas likely to offer improvement over the current best observation). We also present two detailed extensions of Bayesian optimization, with experiments --- active user modelling with preferences, and hierarchical reinforcement learning --- and a discussion of the pros and cons of Bayesian optimization based on our experiences.
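The loop the abstract describes — place a prior over the objective, update it to a posterior after each evaluation, and pick the next point by a utility that balances exploration and exploitation — can be sketched in pure Python. The Gaussian-process surrogate with an RBF kernel, the expected-improvement utility, and the grid search over candidates below are illustrative choices for the sketch, not the paper's specific setup.

```python
import math
import random

def rbf(x1, x2, length=0.3):
    # Squared-exponential (RBF) kernel for 1-D inputs.
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, x_star, noise=1e-6):
    # GP posterior mean and variance at x_star given observations (X, y).
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    k_star = [rbf(a, x_star) for a in X]
    mean = sum(ks * al for ks, al in zip(k_star, solve(K, y)))
    v = solve(K, k_star)
    var = rbf(x_star, x_star) - sum(ks * vi for ks, vi in zip(k_star, v))
    return mean, max(var, 1e-12)

def expected_improvement(mean, var, best):
    # EI for maximization: rewards both high mean (exploitation)
    # and high uncertainty (exploration).
    s = math.sqrt(var)
    z = (mean - best) / s
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (mean - best) * cdf + s * pdf

def bayes_opt(f, bounds=(0.0, 1.0), n_init=3, n_iter=10, seed=0):
    # Sequential loop: fit surrogate, maximize acquisition, evaluate, repeat.
    rng = random.Random(seed)
    X = [rng.uniform(*bounds) for _ in range(n_init)]
    y = [f(x) for x in X]
    grid = [bounds[0] + (bounds[1] - bounds[0]) * i / 200 for i in range(201)]
    for _ in range(n_iter):
        best = max(y)
        x_next = max(grid, key=lambda x: expected_improvement(
            *gp_posterior(X, y, x), best))
        X.append(x_next)
        y.append(f(x_next))
    i = max(range(len(y)), key=lambda j: y[j])
    return X[i], y[i]

if __name__ == "__main__":
    # Toy objective with its maximum at x = 0.7.
    x_best, y_best = bayes_opt(lambda x: -(x - 0.7) ** 2)
    print(x_best, y_best)
```

Each iteration costs one evaluation of f plus a small linear solve, which is the intended trade when f itself is expensive.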
2019
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Bayesian_optimization Retrieved:2019-9-12.
- Bayesian optimization is a sequential design strategy for global optimization of black-box functions [1] that doesn't require derivatives.
2019
- (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Bayesian_optimization#Strategy Retrieved:2019-9-12.
- Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it.
The prior captures beliefs about the behaviour of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.
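The prior-to-posterior update and the acquisition function in the passage above can be made concrete for the most common choice of prior, a Gaussian process; note that the passage itself does not fix a particular prior or acquisition function, so these formulas are one standard instantiation.

```latex
% GP posterior at a candidate x_*, given observations X = \{x_i\},
% values y, kernel k, and noise variance \sigma_n^2:
\mu(x_*) = k_*^\top (K + \sigma_n^2 I)^{-1} y, \qquad
\sigma^2(x_*) = k(x_*, x_*) - k_*^\top (K + \sigma_n^2 I)^{-1} k_*,
\quad \text{where } K_{ij} = k(x_i, x_j),\; (k_*)_i = k(x_i, x_*).

% A typical acquisition function, expected improvement over the
% best observed value f^+ (for maximization):
\mathrm{EI}(x) = \big(\mu(x) - f^+\big)\,\Phi(z) + \sigma(x)\,\varphi(z),
\qquad z = \frac{\mu(x) - f^+}{\sigma(x)},
```

with \Phi and \varphi the standard normal CDF and PDF. Maximizing EI over the search space yields the next query point.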
- ↑ Jonas Mockus (2012). Bayesian approach to global optimization: theory and applications. Kluwer Academic.