# Point Estimate

A point estimate is a single-valued sample statistic used to estimate a population parameter.

**AKA:** point statistic, estimated value.

**Context:**

- It can be expressed as [math]\displaystyle{ \;\hat{\theta}=u(X)=u(x_1,x_2,...,x_n) }[/math], where [math]\displaystyle{ u() }[/math] is a sample statistic and [math]\displaystyle{ \hat{\theta} }[/math] represents the point estimate of the population parameter [math]\displaystyle{ \theta }[/math].
- It can (typically) be a Scalar Statistic (of a population parameter).
- It can be produced by a Point Estimation System (solving a point estimation task).
- It can range from being a Biased Point Estimate to being an Unbiased Point Estimate.
- It can range from being an Overestimation to being an Underestimation.

**Example(s):**

- A sample mean value ([math]\displaystyle{ \bar{x} }[/math]) is a point estimate of the population mean value ([math]\displaystyle{ \mu }[/math]):

- [math]\displaystyle{ \hat{\mu}=\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i }[/math].

- A sample variance value ([math]\displaystyle{ s^2 }[/math]) is a point estimate of the population variance value ([math]\displaystyle{ \sigma^2 }[/math]):

- [math]\displaystyle{ \hat{\sigma}^2=s^2=\frac{1}{n-1}\sum_{i=1}^n [x_i -\bar{x}]^2 }[/math].

- A sample correlation coefficient ([math]\displaystyle{ r }[/math]) is a point estimate of the population correlation coefficient ([math]\displaystyle{ \rho }[/math]):

- [math]\displaystyle{ \widehat{\rho}=\frac{1}{n-1}\sum_{i=1}^n ([x_i -\bar{x}]/s_x)([y_i -\bar{y}]/s_y) }[/math].

- a Deviation Value (such as a Standard Deviation Value or an Absolute Deviation Value).
- a Maximum Likelihood Estimate.
- a Maximum a Posteriori Estimate.
- an Estimated Wage.

- …
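The three example formulas above can be evaluated directly from a sample. The following is a minimal Python sketch, using hypothetical data values, that computes each point estimate with the same [math]\displaystyle{ n-1 }[/math] denominators as the formulas:

```python
import math

# Hypothetical paired sample data (illustrative values only).
x = [2.1, 3.4, 1.9, 4.0, 2.8]
y = [1.0, 2.2, 0.8, 2.9, 1.5]
n = len(x)

# Point estimate of the population mean mu: the sample mean x-bar.
x_bar = sum(x) / n
y_bar = sum(y) / n

# Point estimate of the population variance sigma^2: the sample
# variance s^2, with the n-1 (Bessel-corrected) denominator.
s2_x = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)
s2_y = sum((yi - y_bar) ** 2 for yi in y) / (n - 1)

# Point estimate of the population correlation coefficient rho:
# the sample correlation coefficient r.
s_x, s_y = math.sqrt(s2_x), math.sqrt(s2_y)
r = sum(((xi - x_bar) / s_x) * ((yi - y_bar) / s_y)
        for xi, yi in zip(x, y)) / (n - 1)
```

Each of `x_bar`, `s2_x`, and `r` is a single number computed from the sample, i.e., a point estimate of the corresponding population parameter.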

**Counter-Example(s):**

- an Extreme Value, such as a Minimum Value or a Maximum Value.
- an Interval Estimate.

## References

### 2017a

- (Wikipedia, 2017) ⇒ https://www.wikiwand.com/en/Point_estimation
- In statistics, **point estimation** involves the use of sample data to calculate a single value (known as a statistic) which is to serve as a "best guess" or "best estimate" of an unknown (fixed or random) population parameter. More formally, it is the application of a point estimator to the data.

In general, point estimation should be contrasted with interval estimation: such interval estimates are typically either confidence intervals in the case of frequentist inference, or credible intervals in the case of Bayesian inference.
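The contrast can be made concrete in a short Python sketch, assuming hypothetical sample data and a normal-approximation confidence interval: the point estimate is one number, while the interval estimate is a range built around it.

```python
import math
import statistics

# Hypothetical sample data (illustrative values only).
sample = [4.9, 5.3, 5.1, 4.7, 5.6, 5.0, 5.2, 4.8]
n = len(sample)

# Point estimate: a single "best guess" at the population mean.
point_estimate = statistics.mean(sample)

# Interval estimate: an approximate 95% confidence interval built
# around the point estimate (normal critical value 1.96; a t critical
# value would be more appropriate for a sample this small).
standard_error = statistics.stdev(sample) / math.sqrt(n)
interval_estimate = (point_estimate - 1.96 * standard_error,
                     point_estimate + 1.96 * standard_error)
```

The point estimate always lies inside its own interval estimate; the interval's width reflects the standard error of the estimate.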


### 2017b

- (Wikipedia, 2017) ⇒ https://www.wikiwand.com/en/Estimator
- An "estimator" or “point estimate” is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. The parameter being estimated is sometimes called the *estimand*. It can be either finite-dimensional (in parametric and semi-parametric models) or infinite-dimensional (in semi-parametric and non-parametric models).^{[1]} If the parameter is denoted [math]\displaystyle{ \theta \ }[/math] then the estimator is traditionally written by adding a circumflex over the symbol: [math]\displaystyle{ \widehat{\theta} }[/math]. Being a function of the data, the estimator is itself a random variable; a particular realization of this random variable is called the "estimate". Sometimes the words "estimator" and "estimate" are used interchangeably. The definition places virtually no restrictions on which functions of the data can be called "estimators". The attractiveness of different estimators can be judged by looking at their properties, such as unbiasedness, mean square error, consistency, asymptotic distribution, etc. The construction and comparison of estimators are the subjects of estimation theory. In the context of decision theory, an estimator is a type of decision rule, and its performance may be evaluated through the use of loss functions.

When the word "estimator" is used without a qualifier, it usually refers to point estimation. The estimate in this case is a single point in the parameter space. There also exists another type of estimator: interval estimators, where the estimates are subsets of the parameter space.

The problem of density estimation arises in two applications. Firstly, in estimating the probability density functions of random variables and secondly in estimating the spectral density function of a time series. In these problems the estimates are functions that can be thought of as point estimates in an infinite dimensional space, and there are corresponding interval estimation problems.


### 2017c

- (Stat 414, 2017) ⇒ Probability Theory and Mathematical Statistics, The Pennsylvania State University, Lesson 29: Point Estimation. https://onlinecourses.science.psu.edu/stat414/node/190
- We'll start the lesson with some formal definitions. In doing so, recall that we denote the [math]\displaystyle{ n }[/math] random variables arising from a random sample as subscripted uppercase letters: [math]\displaystyle{ X_1, X_2, \cdots , X_n }[/math]

- The corresponding observed values of a specific random sample are then denoted as subscripted lowercase letters: [math]\displaystyle{ x_1, x_2, \cdots, x_n }[/math]

**Definition**. The range of possible values of the parameter [math]\displaystyle{ \theta }[/math] is called the parameter space [math]\displaystyle{ \Omega }[/math] (the Greek letter "omega").

*For example, if [math]\displaystyle{ \mu }[/math] denotes the mean grade point average of all college students, then the parameter space (assuming a 4-point grading scale) is: [math]\displaystyle{ \Omega = \{\mu : 0 \leq \mu \leq 4\} }[/math]*

*And, if [math]\displaystyle{ p }[/math] denotes the proportion of students who smoke cigarettes, then the parameter space is: [math]\displaystyle{ \Omega = \{p: 0 \leq p \leq 1\} }[/math]*

**Definition**. The function of [math]\displaystyle{ X_1, X_2, ..., X_n }[/math], that is, the statistic [math]\displaystyle{ u(X_1, X_2, ..., X_n) }[/math], used to estimate [math]\displaystyle{ \theta }[/math] is called a point estimator of [math]\displaystyle{ \theta }[/math].

*For example, the function: [math]\displaystyle{ \bar{X}=\frac{1}{n}\sum_{i=1}^n X_i }[/math] is a point estimator of the population mean [math]\displaystyle{ \mu }[/math].*

*The function: [math]\displaystyle{ \hat{p}=\frac{1}{n}\sum_{i=1}^n X_i }[/math] (where [math]\displaystyle{ X_i = 0 }[/math] or 1) is a point estimator of the population proportion [math]\displaystyle{ p }[/math].*

*And, the function: [math]\displaystyle{ S^2=\frac{1}{n-1}\sum_{i=1}^n [X_i-\bar{X}]^2 }[/math] is a point estimator of the population variance [math]\displaystyle{ \sigma^2 }[/math].*

**Definition**. The function [math]\displaystyle{ u(x_1, x_2, ..., x_n) }[/math] computed from a set of data is an observed point estimate of [math]\displaystyle{ \theta }[/math].

*For example, if [math]\displaystyle{ x_i }[/math] are the observed grade point averages of a sample of 88 students, then: [math]\displaystyle{ \bar{x}=\frac{1}{88}\sum_{i=1}^{88}x_i =3.12 }[/math] is a point estimate of [math]\displaystyle{ \mu }[/math], the mean grade point average of all the students in the population.*

*And, if [math]\displaystyle{ x_i = 0 }[/math] if a student has no tattoo, and [math]\displaystyle{ x_i = 1 }[/math] if a student has a tattoo, then: [math]\displaystyle{ \hat{p}=0.11 }[/math] is a point estimate of [math]\displaystyle{ p }[/math], the proportion of all students in the population who have a tattoo.*
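The distinction between a point estimator (a function of the sample) and an observed point estimate (a number computed from observed data) can be mirrored in code. The following Python sketch uses hypothetical stand-in data for the GPA and tattoo examples:

```python
# A point estimator is a function u(x_1, ..., x_n); applying it to
# observed data yields the observed point estimate.

def sample_mean(xs):
    """Point estimator of the population mean mu."""
    return sum(xs) / len(xs)

def sample_proportion(xs):
    """Point estimator of a population proportion p (0/1 data)."""
    return sum(xs) / len(xs)

# Hypothetical observed data (illustrative values only).
gpas = [3.2, 2.8, 3.6, 3.0]           # observed grade point averages
tattoos = [0, 1, 0, 0, 0, 1, 0, 0]    # 1 = has a tattoo, 0 = does not

mu_hat = sample_mean(gpas)            # observed point estimate of mu
p_hat = sample_proportion(tattoos)    # observed point estimate of p
```

The functions `sample_mean` and `sample_proportion` play the role of the estimators; the numbers `mu_hat` and `p_hat` are the observed point estimates.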

### 2006

- (Dubnicka, 2006l) ⇒ Suzanne R. Dubnicka. (2006). “Point Estimation - Handout 12." Kansas State University, Introduction to Probability and Statistics I, STAT 510 - Fall 2006.
- QUOTE: … Estimation and hypothesis testing are the two common forms of statistical inference. … In estimation, we are trying to answer the question, “What is the value of the population parameter?” An estimate is our “best guess” of the value of the population parameter and is based on the sample. Therefore, an estimate is a statistic. Two types of estimates are considered: point estimates and interval estimates. A point estimate is a single value (point) which represents our best guess of a parameter value. As our point estimate is not likely to be exactly the same value as the parameter, we often give a measure of variability associated with our point estimate. This value is called the standard error of the estimate and gives us an idea of how far off our estimate can potentially be. An interval estimate, commonly called a confidence interval, is a range of values within which we “strongly” believe the parameter value lies. A confidence interval incorporates the point estimate and standard error. … There may be more than one sensible point estimate of a parameter, depending on the criteria used.
… A point estimate X of a parameter θ is said to be unbiased if the expectation (mean) of X equals the value of the parameter: E(X) = θ. An unbiased estimator can be thought of as an accurate estimator.
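Unbiasedness can be checked by simulation: averaging the estimator over many repeated samples approximates its expectation, which for an unbiased estimator should recover the true parameter. A minimal sketch, assuming a hypothetical normal population with known parameters:

```python
import random

random.seed(0)  # reproducible simulation

mu, sigma = 10.0, 2.0   # assumed "true" population parameters (hypothetical)
n, reps = 20, 20_000    # sample size and number of repeated samples

# Draw many samples and record the sample mean (the estimator) of each.
means = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)

# The average of the estimates approximates E[X-bar], which equals
# mu for the (unbiased) sample mean.
avg_of_estimates = sum(means) / reps
```

With these settings, `avg_of_estimates` lands very close to the true mean of 10.0, consistent with the sample mean being unbiased.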


### 1999

- (Hollander & Wolfe, 1999) ⇒ Myles Hollander, Douglas A. Wolfe. (1999). “Nonparametric Statistical Methods, 2nd Edition." Wiley. ISBN:0471190454
- QUOTE: An estimator is a decision rule (strategy, recipe) which, on the basis of the sample observations, estimates the value of a parameter. The specific value (on the basis of a particular set of data) which the estimator assigns is called the estimate.

- ↑ Kosorok (2008), Section 3.1, pp. 35–39.