Bayes Estimator

A Bayes Estimator is a point estimator based on the posterior distribution that minimizes the posterior expected loss (or, equivalently, maximizes the posterior expectation of a utility function).
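
Formally, in standard decision-theoretic notation (the action variable $a$ and the posterior density $\pi(\theta \mid x)$ are notational choices made here, not taken from the source), the Bayes estimator can be written as

\[
\delta_{\text{Bayes}}(x) \;=\; \arg\min_{a}\, E\big[\, L(\theta, a) \mid x \,\big] \;=\; \arg\min_{a} \int L(\theta, a)\, \pi(\theta \mid x)\, d\theta .
\]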



References

2011

  • http://en.wikipedia.org/wiki/Bayes_estimator
  • In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss). Equivalently, it maximizes the posterior expectation of a utility function. An alternative way of formulating an estimator within Bayesian statistics is maximum a posteriori (MAP) estimation.

      Suppose an unknown parameter $\theta$ is known to have a prior distribution $\pi$. Let $\delta = \delta(x)$ be an estimator of $\theta$ (based on some measurements $x$), and let $L(\theta, \delta)$ be a loss function, such as squared error. The Bayes risk of $\delta$ is defined as $E_\pi\{ L(\theta, \delta) \}$, where the expectation is taken over the probability distribution of $\theta$: this defines the risk function as a function of $\delta$. An estimator $\delta$ is said to be a Bayes estimator if it minimizes the Bayes risk among all estimators. Equivalently, the estimator which minimizes the posterior expected loss $E\{ L(\theta, \delta) \mid x \}$ for each $x$ also minimizes the Bayes risk and therefore is a Bayes estimator.[1] (A numerical sketch of this appears below.)

      If the prior is improper, then an estimator which minimizes the posterior expected loss for each $x$ is called a generalized Bayes estimator.[2] (A worked example follows the notes below.)
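
To make the quoted definitions concrete, here is a minimal numerical sketch in Python; the Beta-Bernoulli model, the prior hyperparameters, and all variable names are illustrative assumptions rather than anything from the source. It minimizes the posterior expected squared-error loss over a grid of candidate actions and compares the result to the posterior mean.

  import numpy as np

  # Illustrative Beta-Bernoulli setup (assumed for this sketch, not from the source):
  # theta ~ Beta(a, b) prior; x_1, ..., x_n ~ Bernoulli(theta) given theta.
  a, b = 2.0, 2.0                        # prior hyperparameters (assumed)
  x = np.array([1, 0, 1, 1, 0, 1])       # toy data (assumed)
  k, n = int(x.sum()), x.size

  # The posterior is Beta(a + k, b + n - k); evaluate its (unnormalized)
  # density on a grid and normalize numerically.
  theta = np.linspace(1e-6, 1 - 1e-6, 10_001)
  dtheta = theta[1] - theta[0]
  post = theta ** (a + k - 1) * (1 - theta) ** (b + n - k - 1)
  post /= post.sum() * dtheta

  # Posterior expected squared-error loss rho(d) = E[(theta - d)^2 | x];
  # the Bayes estimator is the action d that minimizes it.
  actions = np.linspace(0.0, 1.0, 1_001)
  risk = np.array([((theta - d) ** 2 * post).sum() * dtheta for d in actions])
  bayes_numeric = actions[risk.argmin()]

  # Closed form: under squared-error loss the Bayes estimator is the
  # posterior mean, (a + k) / (a + b + n) for this conjugate model.
  bayes_closed = (a + k) / (a + b + n)
  print(bayes_numeric, bayes_closed)     # agree to grid accuracy (about 0.6 here)

The grid minimizer and the closed-form answer agree because, for squared-error loss, the posterior expected loss is minimized at the posterior mean; this is the standard characterization of the Bayes estimator under that loss.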

  1. Lehmann, E. L., and Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. Theorem 4.1.1.
  2. Lehmann, E. L., and Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. Definition 4.2.9.
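
As a worked instance of the generalized case (a standard textbook example, included here as an illustration rather than drawn from the source): take the improper flat prior $\pi(\theta) \propto 1$ on the real line and a single observation $x \mid \theta \sim N(\theta, 1)$. The formal posterior is then $N(x, 1)$, and under squared-error loss the generalized Bayes estimator is the posterior mean:

\[
\delta(x) \;=\; E[\theta \mid x] \;=\; x .
\]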