# Standard Deviation Measure


A Standard Deviation Measure is a point estimation function that quantifies the statistical dispersion of a random variable around its mean value (in the form of a standard deviation value).

## References

### 2015

• (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/standard_deviation Retrieved:2015-5-3.
• In statistics, the standard deviation (SD) (represented by the Greek letter sigma, σ) is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A standard deviation close to 0 indicates that the data points tend to be very close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.

The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. Note, however, that for measurements with percentage as the unit, the standard deviation will have percentage points as the unit. There are also other measures of deviation from the norm, including mean absolute deviation, which provide different mathematical properties from standard deviation.

In addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. The reported margin of error is typically about twice the standard deviation — the half-width of a 95 percent confidence interval. In science, researchers commonly report the standard deviation of experimental data, and only effects that fall much farther than two standard deviations away from what would have been expected are considered statistically significant — normal random error or variation in the measurements is in this way distinguished from causal variation. The standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the volatility of the investment.
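The polling margin-of-error calculation described above can be sketched as follows. This is a minimal illustration with made-up poll numbers; it assumes a simple random sample, so the standard deviation (standard error) of the sample proportion is $\sqrt{p(1-p)/n}$, and it uses 1.96 as the usual multiplier for a 95 percent confidence interval.

```python
import math

# Hypothetical poll: 520 of 1000 respondents favor a candidate.
p_hat = 520 / 1000
n = 1000

# Standard deviation (standard error) of the sample proportion,
# assuming a simple random sample: sqrt(p(1-p)/n).
std_error = math.sqrt(p_hat * (1 - p_hat) / n)

# Half-width of a 95% confidence interval: about 1.96 standard deviations,
# i.e. "about twice the standard deviation".
margin_of_error = 1.96 * std_error
print(f"±{margin_of_error:.3f}")  # ±0.031, about 3.1 percentage points
```

This is why a poll of roughly 1,000 people is commonly reported with a margin of error of about ±3 percentage points.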

When only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above-mentioned quantity as applied to those data or to a modified quantity that is a better estimate of the population standard deviation (the standard deviation of the entire population).
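The distinction between the two sample quantities mentioned above can be shown directly: the uncorrected version divides the sum of squared deviations by $N$, while the corrected estimator of the population standard deviation divides by $N-1$ (Bessel's correction). A minimal sketch, using an illustrative data set:

```python
import math

def population_std(data):
    """Uncorrected standard deviation: divisor N (the population formula)."""
    mu = sum(data) / len(data)
    return math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

def sample_std(data):
    """Corrected sample standard deviation: divisor N - 1 (Bessel's correction)."""
    mu = sum(data) / len(data)
    return math.sqrt(sum((x - mu) ** 2 for x in data) / (len(data) - 1))

data = [2, 4, 4, 4, 5, 5, 7, 9]  # mean is 5
print(population_std(data))  # 2.0
print(sample_std(data))      # ~2.138
```

The corrected estimate is always slightly larger, and the gap shrinks as the sample grows; for large $N$ the two are nearly identical.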

### 2009

• (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Standard_deviation
• In probability theory and statistics, the standard deviation of a statistical population, a data set, or a probability distribution is the square root of its variance. Standard deviation is a widely used measure of the variability or dispersion, being algebraically more tractable though practically less robust than the expected deviation or average absolute deviation.

It shows how much variation there is from the "average" (mean). A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data are spread out over a large range of values.

• Let $X$ be a random variable with mean value $\mu$:
• $\operatorname{E}[X] = \mu$.
• Here $\operatorname{E}$ denotes the average or expected value of $X$. Then the standard deviation of $X$ is the quantity
• $\sigma = \sqrt{\operatorname{E}[(X - \mu)^2]}$.
• That is, the standard deviation $\sigma$ is the square root of the average value of $(X - \mu)^2$.
• In the case where $X$ takes random values from a finite data set $x_1, x_2, \ldots, x_N$, with each value having the same probability, the standard deviation is $\sigma = \sqrt{\frac{(x_1-\mu)^2 + (x_2-\mu)^2 + \cdots + (x_N-\mu)^2}{N}}$,
• or, using summation notation, $\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2}$.
• The standard deviation of a (univariate) probability distribution is the same as that of a random variable having that distribution. Not all random variables have a standard deviation, since these expected values need not exist. For example, the standard deviation of a random variable which follows a Cauchy distribution is undefined because its expected value $\operatorname{E}[X]$ is undefined.
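The definition above, $\sigma = \sqrt{\operatorname{E}[(X - \mu)^2]}$, can be applied directly to a simple discrete distribution. A minimal sketch for a fair six-sided die, where each outcome has probability 1/6:

```python
import math

# Standard deviation of a fair six-sided die, from the definition
# sigma = sqrt(E[(X - mu)^2]) with equal probabilities 1/6.
outcomes = [1, 2, 3, 4, 5, 6]

mu = sum(outcomes) / len(outcomes)  # E[X] = 3.5
variance = sum((x - mu) ** 2 for x in outcomes) / len(outcomes)  # E[(X - mu)^2]
sigma = math.sqrt(variance)

print(sigma)  # ~1.708, i.e. sqrt(35/12)
```

For a distribution like the Cauchy, no analogous computation is possible: the expected value does not exist, so neither the variance nor the standard deviation is defined.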