Bayesian Optimization Algorithm

A Bayesian Optimization Algorithm is an optimization algorithm for expensive cost functions.
----
----
== References ==


=== 2019 ===
* (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Bayesian_optimization Retrieved:2019-9-12.
** '''Bayesian optimization''' is a [[sequential analysis|sequential design]] strategy for [[global optimization]] of [[Black box|black-box]] functions <ref> Jonas Mockus (2012). [https://books.google.com/books?id=VuKoCAAAQBAJ&printsec=frontcover#v=onepage&q=%22global%20optimization%22&f=false Bayesian approach to global optimization: theory and applications]. Kluwer Academic. </ref> that [[Derivative-free optimization|doesn't require derivatives]].
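For illustration only (not part of the quoted definition): the sketch below minimizes a made-up black-box objective with an off-the-shelf Bayesian optimization routine, <code>gp_minimize</code> from the scikit-optimize package. The library choice, the toy objective, and the search bounds are assumptions of the example; only function values are supplied, never derivatives.
<pre>
# Illustrative sketch only: assumes the scikit-optimize (skopt) package is
# installed and uses a made-up stand-in for an expensive black-box objective.
from skopt import gp_minimize

def expensive_black_box(x):
    # Stand-in for a costly evaluation (e.g., a long simulation run);
    # only its value is used -- no gradient is ever computed.
    return (x[0] - 0.3) ** 2 + 0.1 * x[0] ** 3

result = gp_minimize(
    expensive_black_box,       # objective, queried point by point
    dimensions=[(-2.0, 2.0)],  # search interval for the single input
    n_calls=20,                # total budget of expensive evaluations
    random_state=0,
)
print(result.x, result.fun)    # best input found and its objective value
</pre>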


=== 2019 ===
* (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Bayesian_optimization#Strategy Retrieved:2019-9-12.
** Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a [[Prior distribution|prior]] over it. <P> The prior captures beliefs about the behaviour of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the [[posterior distribution]] over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as infill sampling criteria) that determines the next query point.
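A minimal from-scratch sketch of this loop, assuming the common choices of a Gaussian-process prior with an RBF kernel and expected improvement as the acquisition function (the quoted passage prescribes neither); the toy objective, kernel length scale, and candidate-sampling scheme are further assumptions of the example.
<pre>
# Sketch of the prior -> posterior -> acquisition -> next-query loop,
# under the assumptions stated above (GP prior, expected improvement).
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length_scale=0.2):
    # Squared-exponential covariance between two sets of 1-D points.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    # Posterior mean and standard deviation at the query points,
    # conditioned on the function evaluations gathered so far.
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    K_s = rbf_kernel(x_obs, x_query)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_obs
    var = np.diag(rbf_kernel(x_query, x_query) - K_s.T @ K_inv @ K_s)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sigma, y_best):
    # Acquisition function: expected improvement over the best value
    # observed so far (written here for minimization).
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):
    # Hypothetical "expensive" black-box function, used only for the demo.
    return np.sin(3.0 * x) + x ** 2 - 0.7 * x

rng = np.random.default_rng(0)
x_obs = rng.uniform(-1.0, 2.0, size=3)   # small initial design
y_obs = objective(x_obs)

for _ in range(10):
    x_cand = rng.uniform(-1.0, 2.0, size=500)        # candidate query points
    mu, sigma = gp_posterior(x_obs, y_obs, x_cand)   # posterior from data so far
    ei = expected_improvement(mu, sigma, y_obs.min())
    x_next = x_cand[np.argmax(ei)]                   # next query point
    x_obs = np.append(x_obs, x_next)                 # evaluate it and update the data
    y_obs = np.append(y_obs, objective(x_next))

print("best x:", x_obs[np.argmin(y_obs)], "best f(x):", y_obs.min())
</pre>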
