# Estimator Bias

An Estimator Bias is a quantification of the difference between a Predictive Estimator's Expected Predicted Value and the True Value of the parameter being estimated.

**AKA:** Bias, Bias of an Estimator, Bias Function.

**Context:**
- It can be defined as [math]Bias(\hat{\theta}, \theta) = E(\hat{\theta})-\theta=E(\hat{\theta}-\theta)[/math], where [math]\hat{\theta}[/math] is the estimator of the parameter [math]\theta[/math], and [math]E[/math] is the Expected Value Function.
- …
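The definition above can be checked empirically: a minimal Monte Carlo sketch (hypothetical illustration, not from the source) that approximates [math]E(\hat{\theta})-\theta[/math] for the maximum-likelihood variance estimator, which is known to be biased because it divides by [math]n[/math] rather than [math]n-1[/math].

```python
import random

# Hypothetical illustration: approximate Bias(theta_hat, theta) = E[theta_hat] - theta
# by simulation, using the ML variance estimator (divides by n, not n - 1)
# on samples drawn from a distribution with known variance.

random.seed(0)
true_var = 4.0          # theta: the true population variance (std = 2.0)
n, trials = 10, 20000   # sample size and number of simulated samples

def var_mle(xs):
    """Maximum-likelihood variance estimator theta_hat (divides by n)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

estimates = [var_mle([random.gauss(0.0, 2.0) for _ in range(n)])
             for _ in range(trials)]
empirical_bias = sum(estimates) / trials - true_var

# Theory: E[theta_hat] = (n-1)/n * sigma^2, so the bias is -sigma^2/n = -0.4 here;
# the simulated value should be close to that.
print(empirical_bias)
```

The negative empirical bias shows the estimator systematically underestimates the true variance, matching the closed-form bias [math]-\sigma^2/n[/math].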

**Example(s):**

**Counter-Example(s):**

**See:** Risk, Sample Standard Deviation, Statistics, Estimator, Expected Value, Median, Consistent Estimator, Unbiased Estimation of Standard Deviation, Loss Function, Mean Squared Error, Shrinkage Estimator.

## References

### 2018

- (TF-ML Glossary, 2018) ⇒ (2018). "bias." In: Machine Learning Glossary (TensorFlow). Retrieved: 2018-05-13.
    - QUOTE: An intercept or offset from an origin. Bias (also known as the **bias term**) is referred to as [math]b[/math] or [math]w_0[/math] in machine learning models. For example, bias is the [math]b[/math] in the following formula: [math]y'=b+w_1x_1+w_2x_2+\cdots+w_nx_n[/math]

        Not to be confused with prediction bias.
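The glossary's formula can be sketched in a few lines of plain Python (an illustrative sketch, not TensorFlow code; the `predict` function name is hypothetical): the bias term [math]b[/math] shifts the model's output away from the origin independently of the inputs.

```python
# Minimal sketch of the bias term b in y' = b + w1*x1 + ... + wn*xn.
# With all inputs at zero, the prediction reduces to the bias term alone.

def predict(b, weights, features):
    """Linear model prediction: y' = b + sum_i w_i * x_i."""
    return b + sum(w * x for w, x in zip(weights, features))

print(predict(0.5, [2.0, -1.0], [0.0, 0.0]))  # 0.5 (bias term only)
print(predict(0.5, [2.0, -1.0], [3.0, 1.0]))  # 0.5 + 6.0 - 1.0 = 5.5
```

Note this "bias" (an intercept parameter) is distinct from the statistical estimator bias defined above, which is the glossary's point.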


### 2017

- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Bias_of_an_estimator Retrieved: 2017-07-16.
    - In statistics, the **bias** (or **bias function**) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called **unbiased**. Otherwise the estimator is said to be **biased**. In statistics, "bias" is an objective property of an estimator, and while not a desired property, it is not pejorative, unlike the ordinary English use of the term "bias".

        Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes *median*-unbiased from the usual *mean*-unbiasedness property. Bias is related to consistency in that consistent estimators are convergent and *asymptotically* unbiased (hence converge to the correct value), though individual estimators in a consistent sequence may be biased (so long as the bias converges to zero); see bias versus consistency.

        All else being equal, an unbiased estimator is preferable to a biased estimator, but in practice all else is not equal, and biased estimators are frequently used, generally with small bias. When a biased estimator is used, bounds of the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population or is difficult to compute (as in unbiased estimation of standard deviation); because an estimator is median-unbiased but not mean-unbiased (or the reverse); because a biased estimator reduces some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful. Further, mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see effect of transformations); for example, the sample variance is an unbiased estimator for the population variance, but its square root, the sample standard deviation, is a biased estimator for the population standard deviation. These are all illustrated below.
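The last point of the quote above, that mean-unbiasedness is not preserved under the non-linear square root, can be demonstrated with a small simulation (a hedged sketch under assumed parameters, not from the source): the sample variance with Bessel's correction is mean-unbiased for [math]\sigma^2[/math], yet its square root systematically underestimates [math]\sigma[/math].

```python
import random

# Sketch: sample variance (dividing by n - 1) is mean-unbiased for sigma^2,
# but the sample standard deviation sqrt(s^2) is biased for sigma, because
# unbiasedness is not preserved under the non-linear sqrt transformation.

random.seed(1)
sigma = 2.0
n, trials = 5, 50000  # small n makes the std bias easy to see

def sample_var(xs):
    """Sample variance with Bessel's correction (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

variances, stds = [], []
for _ in range(trials):
    xs = [random.gauss(0.0, sigma) for _ in range(n)]
    v = sample_var(xs)
    variances.append(v)
    stds.append(v ** 0.5)

bias_var = sum(variances) / trials - sigma ** 2  # close to zero
bias_std = sum(stds) / trials - sigma            # noticeably negative
print(bias_var, bias_std)
```

For normal samples the expected shortfall of the sample standard deviation is captured by the [math]c_4[/math] correction factor discussed under Unbiased Estimation of Standard Deviation.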

