Point Estimate


A point estimate is a single value of a sample statistic used to estimate a population parameter. Common examples are the sample mean, the sample variance, and the sample correlation coefficient:

[math]\displaystyle{ \hat{\mu}=\bar{x}=\frac{1}{n}\sum_{i=1}^n x_i }[/math].
[math]\displaystyle{ \hat{\sigma}^2=s^2=\frac{1}{n-1}\sum_{i=1}^n [x_i -\bar{x}]^2 }[/math].
[math]\displaystyle{ \widehat{\rho}=\frac{1}{n-1}\sum_{i=1}^n ([x_i -\bar{x}]/s_x)([y_i -\bar{y}]/s_y) }[/math].
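
As a concrete illustration, here is a minimal Python sketch (not from the source; the sample data are invented) that computes these three point estimates directly from their definitions:

    import math

    # Invented paired sample data for illustration.
    x = [2.1, 3.4, 2.9, 3.8, 3.1]
    y = [1.9, 3.6, 2.7, 3.5, 3.3]
    n = len(x)

    # Point estimate of the population mean: the sample mean.
    x_bar = sum(x) / n
    y_bar = sum(y) / n

    # Point estimate of the population variance: the sample variance
    # (dividing by n - 1 rather than n makes it unbiased).
    s2_x = sum((xi - x_bar) ** 2 for xi in x) / (n - 1)
    s_x = math.sqrt(s2_x)
    s_y = math.sqrt(sum((yi - y_bar) ** 2 for yi in y) / (n - 1))

    # Point estimate of the population correlation: the sample correlation.
    rho_hat = sum(((xi - x_bar) / s_x) * ((yi - y_bar) / s_y)
                  for xi, yi in zip(x, y)) / (n - 1)
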
  • Counter-Example(s):
    • an Interval Estimate, such as a Confidence Interval, where the estimate is a subset of the parameter space rather than a single point.


References

2017a

  • (Wikipedia, 2017) ⇒ https://www.wikiwand.com/en/Estimator
    • An "estimator" or “point estimate” is a statistic (that is, a function of the data) that is used to infer the value of an unknown parameter in a statistical model. The parameter being estimated is sometimes called the estimand. It can be either finite-dimensional (in parametric and semi-parametric models), or infinite-dimensional (semi-parametric and non-parametric models).[1] If the parameter is denoted [math]\displaystyle{ \theta \ }[/math]then the estimator is traditionally written by adding a circumflex over the symbol: [math]\displaystyle{ \widehat{\theta} }[/math]. Being a function of the data, the estimator is itself a random variable; a particular realization of this random variable is called the "estimate". Sometimes the words "estimator" and "estimate" are used interchangeably.

      The definition places virtually no restrictions on which functions of the data can be called the "estimators". The attractiveness of different estimators can be judged by looking at their properties, such as unbiasedness, mean square error, consistency, asymptotic distribution, etc. The construction and comparison of estimators are the subjects of estimation theory. In the context of decision theory, an estimator is a type of decision rule, and its performance may be evaluated through the use of loss functions.
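
      Unbiasedness, for instance, can be checked by simulation. The following sketch (not part of the quoted article; it assumes only the Python standard library) compares the variance estimator that divides by n - 1 with the one that divides by n:

        import random

        random.seed(0)
        n, trials = 5, 100_000
        true_var = 4.0  # variance of a Normal(0, 2) population

        total_unbiased = total_biased = 0.0
        for _ in range(trials):
            sample = [random.gauss(0.0, 2.0) for _ in range(n)]
            x_bar = sum(sample) / n
            ss = sum((xi - x_bar) ** 2 for xi in sample)
            total_unbiased += ss / (n - 1)
            total_biased += ss / n

        print(total_unbiased / trials)  # close to 4.0: unbiased
        print(total_biased / trials)    # close to 3.2 = (n-1)/n * 4.0: biased low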

      When the word "estimator" is used without a qualifier, it usually refers to point estimation. The estimate in this case is a single point in the parameter space. There also exists another type of estimator: interval estimators, where the estimates are subsets of the parameter space.

      The problem of density estimation arises in two applications: first, in estimating the probability density functions of random variables, and second, in estimating the spectral density function of a time series. In these problems the estimates are functions that can be thought of as point estimates in an infinite-dimensional space, and there are corresponding interval estimation problems.
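
      As one small illustration of a density estimate being a "point" in function space, the following sketch (not from the quoted article; it assumes NumPy is available) builds a histogram estimate of a probability density function:

        import numpy as np

        rng = np.random.default_rng(0)
        sample = rng.normal(loc=0.0, scale=1.0, size=1_000)

        # density=True rescales the bin counts so the histogram integrates to 1,
        # yielding a piecewise-constant point estimate of the underlying pdf.
        heights, edges = np.histogram(sample, bins=20, density=True)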

2017b

  • (Stat 414, 2017) ⇒ Probability Theory and Mathematical Statistics, The Pennsylvania State University, Lesson 29: Point Estimation. https://onlinecourses.science.psu.edu/stat414/node/190
    • We'll start the lesson with some formal definitions. In doing so, recall that we denote the [math]\displaystyle{ n }[/math] random variables arising from a random sample as subscripted uppercase letters: [math]\displaystyle{ X_1, X_2, \cdots , X_n }[/math]
The corresponding observed values of a specific random sample are then denoted as subscripted lowercase letters: [math]\displaystyle{ x_1, x_2, \cdots, x_n }[/math]
Definition. The range of possible values of the parameter [math]\displaystyle{ \theta }[/math] is called the parameter space [math]\displaystyle{ \Omega }[/math] (the Greek letter "omega").
For example, if [math]\displaystyle{ \mu }[/math] denotes the mean grade point average of all college students, then the parameter space (assuming a 4-point grading scale) is: [math]\displaystyle{ \Omega = \{\mu : 0 \leq \mu \leq 4\} }[/math]
And, if [math]\displaystyle{ p }[/math] denotes the proportion of students who smoke cigarettes, then the parameter space is: [math]\displaystyle{ \Omega = \{p : 0 \leq p \leq 1\} }[/math]
Definition. The function of [math]\displaystyle{ X_1, X_2, ..., X_n }[/math], that is, the statistic [math]\displaystyle{ u(X_1, X_2, ..., X_n) }[/math], used to estimate [math]\displaystyle{ \theta }[/math] is called a point estimator of [math]\displaystyle{ \theta }[/math].
For example, the function: [math]\displaystyle{ \bar{X}=\frac{1}{n}\sum_{i=1}^n X_i }[/math] is a point estimator of the population mean [math]\displaystyle{ \mu }[/math].
The function: [math]\displaystyle{ \hat{p}=\frac{1}{n}\sum_{i=1}^n X_i }[/math] (where [math]\displaystyle{ X_i = 0 }[/math] or 1) is a point estimator of the population proportion [math]\displaystyle{ p }[/math].
And, the function: [math]\displaystyle{ S^2=\frac{1}{n-1}\sum_{i=1}^n [X_i-\bar{X}]^2 }[/math] is a point estimator of the population variance [math]\displaystyle{ \sigma^2 }[/math].
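
Because a point estimator is a function of the random variables [math]\displaystyle{ X_1, X_2, ..., X_n }[/math], it is itself a random variable: its realized value changes from sample to sample. A brief sketch (not part of the quoted lesson; the population is simulated) makes this visible:

    import random

    random.seed(1)
    # The same estimator, the sample mean, applied to three random samples
    # from the same population gives three different realized values.
    for _ in range(3):
        sample = [random.gauss(3.0, 0.5) for _ in range(10)]
        print(sum(sample) / len(sample))
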
Definition. The function [math]\displaystyle{ u(x_1, x_2, ..., x_n) }[/math] computed from a set of data is an observed point estimate of [math]\displaystyle{ \theta }[/math].
For example, if [math]\displaystyle{ x_i }[/math] are the observed grade point averages of a sample of 88 students, then: [math]\displaystyle{ \bar{x}=\frac{1}{88}\sum_{i=1}^{88}x_i = 3.12 }[/math] is a point estimate of [math]\displaystyle{ \mu }[/math], the mean grade point average of all the students in the population.
And, if [math]\displaystyle{ x_i = 0 }[/math] if a student has no tattoo, and [math]\displaystyle{ x_i = 1 }[/math] if a student has a tattoo, then: [math]\displaystyle{ \hat{p}=0.11 }[/math] is a point estimate of [math]\displaystyle{ p }[/math], the proportion of all students in the population who have a tattoo.
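
In Python terms (a hypothetical sketch; the data below are invented stand-ins, not the lesson's actual 88 GPAs), an observed point estimate is just the estimator's formula applied to fixed data:

    # Invented observed data standing in for the lesson's examples.
    gpas = [3.4, 2.8, 3.9, 3.0, 3.5]        # observed grade point averages
    tattoos = [0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = has a tattoo, 0 = does not

    x_bar = sum(gpas) / len(gpas)           # observed point estimate of mu
    p_hat = sum(tattoos) / len(tattoos)     # observed point estimate of p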


  1. Kosorok (2008), Section 3.1, pp. 35–39.