Statistical Estimation Function

A Statistical Estimation Function is a predictive function that maps observed sample data to an estimated value (an estimate) of an estimand.



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/estimator Retrieved:2015-6-28.
    • In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.

      There are point and interval estimators. The point estimators yield single-valued results, although this includes the possibility of single vector-valued results and results that can be expressed as a single function. This is in contrast to an interval estimator, where the result would be a range of plausible values (or vectors or functions).

       Estimation theory is concerned with the properties of estimators; that is, with defining properties that can be used to compare different estimators (different rules for creating estimates) for the same quantity, based on the same data. Such properties can be used to determine the best rules to use under given circumstances. However, in robust statistics, statistical theory goes on to consider the balance between having good properties, if tightly defined assumptions hold, and having less good properties that hold under wider conditions.
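
      To make the point-versus-interval distinction above concrete, here is a minimal Python sketch (our own illustration, not from the quoted article), assuming i.i.d. data and using the sample mean with a normal-approximation 95% confidence interval:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)  # one observed sample

# Point estimator: a rule mapping the data to a single value.
point_estimate = data.mean()

# Interval estimator: a rule mapping the data to a range of plausible values.
# Here, an approximate 95% confidence interval for the mean (1.96 standard
# errors on either side, using the normal approximation).
se = data.std(ddof=1) / np.sqrt(len(data))
interval_estimate = (point_estimate - 1.96 * se, point_estimate + 1.96 * se)

print(f"point estimate:    {point_estimate:.3f}")
print(f"interval estimate: ({interval_estimate[0]:.3f}, {interval_estimate[1]:.3f})")
```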

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/estimator#Definition Retrieved:2015-6-28.
    • Suppose there is a fixed parameter [math]\displaystyle{ \theta \ }[/math] that needs to be estimated. Then an "estimator" is a function that maps the sample space to a set of sample estimates. An estimator of [math]\displaystyle{ \theta \ }[/math] is usually denoted by the symbol [math]\displaystyle{ \widehat{\theta} }[/math]. It is often convenient to express the theory using the algebra of random variables: thus if X is used to denote a random variable corresponding to the observed data, the estimator (itself treated as a random variable) is symbolised as a function of that random variable, [math]\displaystyle{ \widehat{\theta}(X) }[/math]. The estimate for a particular observed dataset (i.e. for X=x) is then [math]\displaystyle{ \widehat{\theta}(x) }[/math], which is a fixed value. Often an abbreviated notation is used in which [math]\displaystyle{ \widehat{\theta} }[/math] is interpreted directly as a random variable, but this can cause confusion.
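
The distinction between [math]\displaystyle{ \widehat{\theta}(X) }[/math] and [math]\displaystyle{ \widehat{\theta}(x) }[/math] can be made concrete with a small Python sketch (the sample mean stands in for a generic estimator; all names are illustrative):

```python
import numpy as np

# The estimator theta_hat is the rule itself: a function from a sample to a value.
def theta_hat(x: np.ndarray) -> float:
    """Sample mean, standing in for an estimator of a population mean theta."""
    return float(np.mean(x))

rng = np.random.default_rng(1)

# For one observed dataset x, theta_hat(x) is a fixed number: the estimate.
x = rng.normal(loc=3.0, scale=1.0, size=50)
print("estimate from one dataset:", round(theta_hat(x), 3))

# Viewed as a function of the random data X, theta_hat(X) is itself a random
# variable: its value changes from sample to sample.
estimates = [theta_hat(rng.normal(3.0, 1.0, size=50)) for _ in range(5)]
print("estimates from five datasets:", np.round(estimates, 3))
```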

2009

  • (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Estimator
    • In Statistics, an estimator is a Statistic (a function of the observable sample data) that is used to estimate an unknown population Parameter (which is called the estimand); an estimate is the result from the actual application of the function to a particular sample of data. Many different estimators are possible for any given parameter. Some criterion is used to choose between the estimators, although it is often the case that a criterion cannot be used to clearly pick one estimator over another.
    • To estimate a parameter of interest (e.g., a population mean, a binomial proportion, a difference between two population means, or a ratio of two population standard deviations), the usual procedure is to select a random sample from the population, compute the estimate from that sample, and report a measure of the estimate's variability, often a confidence interval.
    • There are many types of estimators, including point estimators, interval estimators, density estimators, and function estimators.
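
The quoted passage notes that some criterion must be used to choose among competing estimators of the same parameter. The following numpy sketch (our construction, with arbitrary sample sizes) compares the 1/n and 1/(n-1) estimators of a normal population's variance by empirical bias and mean squared error:

```python
import numpy as np

# Two competing estimators of the population variance sigma^2 = 4:
# the MLE (divide by n) and the unbiased estimator (divide by n - 1).
rng = np.random.default_rng(2)
true_var, n, reps = 4.0, 10, 100_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
mle = samples.var(axis=1, ddof=0)       # divide by n
unbiased = samples.var(axis=1, ddof=1)  # divide by n - 1

for name, est in [("MLE (1/n)", mle), ("unbiased (1/(n-1))", unbiased)]:
    bias = est.mean() - true_var
    mse = ((est - true_var) ** 2).mean()
    print(f"{name:>20}: bias = {bias:+.3f}, MSE = {mse:.3f}")
```

Typically the unbiased estimator wins on bias while the MLE wins on mean squared error, which illustrates why a criterion often cannot clearly pick one estimator over another.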

2007

  • Sargur N. Srihari. (2007). “Introduction to Pattern Recognition." Course Notes
    • QUOTE:
    • An estimator is a random variable Y used to estimate some parameter p of an underlying population.
    • The estimation bias of Y as an estimator for p is the quantity (E[Y]-p). An unbiased estimator is one for which the bias is zero.
    • An N% confidence interval estimate for parameter p is an interval that includes p with probability N%.
    • Definition: The estimation bias of an estimator Y for an arbitrary parameter p is the quantity (E[Y]-p).
    • If the estimation bias is zero, then Y is an unbiased estimator for p (see the simulation sketch below).
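
A quick simulation sketch of the two definitions above (our own check, with arbitrary parameter choices): it estimates the bias of the sample mean and the coverage probability of a 95% confidence interval.

```python
import numpy as np

# Empirical check: the sample mean should be an unbiased estimator of the
# population mean, and a 95% confidence interval should cover the true
# parameter roughly 95% of the time.
rng = np.random.default_rng(3)
true_mean, n, reps = 10.0, 40, 20_000

means, covered = [], 0
for _ in range(reps):
    x = rng.normal(true_mean, 3.0, size=n)
    means.append(x.mean())
    se = x.std(ddof=1) / np.sqrt(n)
    covered += (x.mean() - 1.96 * se <= true_mean <= x.mean() + 1.96 * se)

print(f"estimated bias E[Y] - p: {np.mean(means) - true_mean:+.4f}")  # ~ 0
print(f"empirical CI coverage:   {covered / reps:.3f}")               # ~ 0.95
```
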
  • Charles B. Moss. (2007). “Definition of Estimator and Choosing among Estimators: Lecture XVII." Course Notes: Statistics in Food and Resource Economics - AEB 6933. University of Florida.
    • In general, an estimator is a function of the sample, not based on population parameters. First, the estimator is a known function of the sample's random variables: [math]\displaystyle{ \widehat{\theta} = f(X_1, X_2, \dots, X_n) }[/math].
    • The value of an estimator is then a random variable.
      • As with any other random variable, it is possible to define the distribution of the estimator based on the distribution of the random variables in the sample. These distributions will be used in the next section to define confidence intervals.
      • Any function of the sample is referred to as a statistic.
      • Most of the time in econometrics, we focus on moments as sample statistics. Specifically, we may be interested in the sample means, or may use the sample covariances together with the sample variances to define least-squares estimators.
      • We may be interested in the probability of a given die roll (for example, the probability of a three). If we define a new set of variables, Y, such that Y=1 if X=3 and Y=0 otherwise, the probability of a three becomes [math]\displaystyle{ P(X=3) = E[Y] }[/math], which is estimated by the sample mean of Y.
      • Amemiya demonstrates that this probability could also be derived from the moments of the distribution. Assume that you have a sample of 50 die rolls. Compute the sample moment [math]\displaystyle{ \hat{m}_k = \frac{1}{50}\sum_{j=1}^{50} x_j^k }[/math] for each k = 0, 1, 2, 3, 4, 5.
      • The method of moments estimate of each probability [math]\displaystyle{ p_i }[/math] is then defined by the solution of the six-equation system [math]\displaystyle{ \sum_{i=1}^{6} i^k p_i = \hat{m}_k, \quad k = 0, 1, \dots, 5 }[/math] (a numerical sketch follows this list).
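
A minimal numerical sketch of the die example, under our own assumptions (a fair die, 50 simulated rolls, numpy): it computes the indicator-based estimate of P(X=3) and solves the moment system for all six face probabilities.

```python
import numpy as np

rng = np.random.default_rng(4)
rolls = rng.integers(1, 7, size=50)  # 50 simulated rolls of a fair die

# Indicator estimate of P(X = 3): the sample mean of Y = 1{X = 3}.
p3_indicator = (rolls == 3).mean()

# Method of moments: match the sample moments m_k = (1/n) sum_j x_j^k to the
# model moments sum_{i=1}^{6} i^k p_i for k = 0..5, a 6x6 linear system.
k = np.arange(6)
m_hat = np.array([(rolls.astype(float) ** kk).mean() for kk in k])
A = np.array([[i ** kk for i in range(1, 7)] for kk in k], dtype=float)
p_hat = np.linalg.solve(A, m_hat)

print("indicator estimate of P(X=3):", round(float(p3_indicator), 3))
print("method-of-moments p_hat:     ", np.round(p_hat, 3))
```

Because the moment map is an invertible (Vandermonde-type) system, the solved p_hat reproduces the empirical face frequencies exactly, which is the sense in which the probability "could also be derived from the moments."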
