Univariate Gaussian Density Function
A univariate normal density function, [math]\displaystyle{ \mathcal{N}(x | \mu, \sigma) }[/math], is a Gaussian density function, [math]\displaystyle{ a\,e^{-(x-b)^2/c} }[/math], that is a univariate probability density function (where [math]\displaystyle{ a = \tfrac{1}{\sqrt{2\pi\sigma^2}} }[/math], [math]\displaystyle{ b = \mu }[/math], and [math]\displaystyle{ c = 2\sigma^2 }[/math]).
- AKA: Normal Probability Function.
- Example(s):
- [math]\displaystyle{ \mathcal{N}(x | 1.1, 0.2) }[/math]
- Counter-Example(s):
- a Non-Normal Distribution, such as a Uniform Density Function, a Gaussian Mixture Function, or an Exponential Density Function.
- See: Central Limit Theorem, Student's t-Distribution, Uniform Probability Distribution, Balanced Probability Distribution.
References
2012
- (Wikipedia, 2012) ⇒ http://en.wikipedia.org/wiki/Normal_distribution
- In probability theory, the normal (or Gaussian) distribution is a continuous probability distribution that has a bell-shaped probability density function, known as the Gaussian function or informally the bell curve:[1] [math]\displaystyle{ f(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2 } }[/math]
The parameter μ is the mean or expectation (location of the peak) and σ² is the variance; σ is known as the standard deviation. The distribution with μ = 0 and σ² = 1 is called the standard normal distribution or the unit normal distribution. A normal distribution is often used as a first approximation to describe real-valued random variables that cluster around a single mean value.
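The density can be evaluated directly from this formula. The following is a minimal sketch in Python (the language choice and the function name normal_pdf are illustrative assumptions, not part of the quoted source):

```python
# Minimal illustrative sketch: evaluating the univariate normal density
# N(x | mu, sigma) directly from the formula above (standard library only).
import math

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Density of a normal distribution with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# The standard normal (mu = 0, sigma = 1) peaks at 1/sqrt(2*pi) ≈ 0.3989.
print(normal_pdf(0.0))
# The N(x | 1.1, 0.2) example above, evaluated at its peak x = 1.1 (≈ 1.9947).
print(normal_pdf(1.1, mu=1.1, sigma=0.2))
```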
The normal distribution is considered the most prominent probability distribution in statistics. There are several reasons for this:[2] First, the normal distribution arises from the central limit theorem, which states that under mild conditions the mean of a large number of random variables drawn from the same distribution is distributed approximately normally, irrespective of the form of the original distribution. This gives it exceptionally wide application in, for example, sampling. Secondly, the normal distribution is very tractable analytically, that is, a large number of results involving this distribution can be derived in explicit form.
For these reasons, the normal distribution is commonly encountered in practice, and is used throughout statistics, natural sciences, and social sciences[3] as a simple model for complex phenomena. For example, the observational error in an experiment is usually assumed to follow a normal distribution, and the propagation of uncertainty is computed using this assumption. Note that a normally distributed variable has a symmetric distribution about its mean. Quantities that grow exponentially, such as prices, incomes or populations, are often skewed to the right, and hence may be better described by other distributions, such as the log-normal distribution or Pareto distribution. In addition, the probability of seeing a normally distributed value that is far (i.e. more than a few standard deviations) from the mean drops off extremely rapidly. As a result, statistical inference using a normal distribution is not robust to the presence of outliers (data that is unexpectedly far from the mean, due to exceptional circumstances, observational error, etc.). When outliers are expected, data may be better described using a heavy-tailed distribution such as the Student's t-distribution.
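As a concrete illustration of the central limit theorem mentioned above, the following sketch (an assumed Python simulation using only the standard library; not part of the quoted source) averages draws from a decidedly non-bell-shaped uniform distribution and checks that the sample means behave as the normal approximation predicts:

```python
# Illustrative sketch: means of uniform(0, 1) draws are approximately normal,
# even though the uniform distribution itself is not bell-shaped.
import random
import statistics

random.seed(0)
n, trials = 50, 10_000

# Each sample mean averages n uniform(0, 1) draws.
sample_means = [statistics.fmean(random.random() for _ in range(n))
                for _ in range(trials)]

# CLT prediction: mean ≈ 0.5 and standard deviation ≈ sqrt(1/12) / sqrt(n) ≈ 0.0408.
print(statistics.fmean(sample_means))
print(statistics.stdev(sample_means))
```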
From a technical perspective, alternative characterizations are possible, for example:
- The normal distribution is the only absolutely continuous distribution all of whose cumulants beyond the first two (i.e. other than the mean and variance) are zero.
- For a given mean and variance, the corresponding normal distribution is the continuous distribution with the maximum entropy.[4][5] (A numerical check of this entropy is sketched after this list.)
- The normal distributions are a sub-class of the elliptical distributions.
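Regarding the maximum-entropy characterization: the differential entropy of [math]\displaystyle{ \mathcal{N}(\mu, \sigma^2) }[/math] has the closed form [math]\displaystyle{ \tfrac{1}{2}\ln(2\pi e \sigma^2) }[/math]. The sketch below (an assumed Python calculation with the standard library; not from the quoted source) compares a simple Riemann-sum estimate of the entropy against that closed form:

```python
# Illustrative sketch: numerically approximate the differential entropy
# H = -∫ f(x) ln f(x) dx of N(0, sigma^2) and compare with 0.5 * ln(2*pi*e*sigma^2).
import math

sigma = 1.5  # illustrative value

def pdf(x: float) -> float:
    # Density of N(0, sigma^2).
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Riemann-sum approximation over a wide interval around the mean.
dx = 1e-3
xs = [i * dx for i in range(int(-10 * sigma / dx), int(10 * sigma / dx) + 1)]
numeric_entropy = -sum(pdf(x) * math.log(pdf(x)) * dx for x in xs)

closed_form = 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)
print(numeric_entropy, closed_form)  # the two values agree to several decimal places
```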
- ↑ The designation "bell curve" is ambiguous: there are many other distributions which are "bell"-shaped: the Cauchy distribution, Student's t-distribution, generalized normal, logistic, etc.
- ↑ For the proof see Gaussian integral.
- ↑ Gale Encyclopedia of Psychology – Normal Distribution
- ↑ Cover, T. M.; Thomas, Joy A (2006). Elements of information theory. John Wiley and Sons. p. 254.
- ↑ Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics (Elsevier): 219–230. http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf. Retrieved 2011-06-02.
2011
- (Zhang, 2011c) ⇒ Xinhua Zhang. (2011). “Gaussian Distribution.” In: (Sammut & Webb, 2011) p.425
2006
- (Dubnicka, 2006h) ⇒ Suzanne R. Dubnicka. (2006). “The Normal Distribution and Related Distributions - Handout 8.” Kansas State University, Introduction to Probability and Statistics I, STAT 510 - Fall 2006.
- TERMINOLOGY: A random variable X is said to have a normal distribution if its pdf is given by [math]\displaystyle{ f_X(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} } }[/math] for [math]\displaystyle{ -\infty \lt x \lt \infty }[/math], and 0 otherwise.
- Shorthand notation is [math]\displaystyle{ X \sim \mathcal{N}(\mu, \sigma^2) }[/math]. There are two parameters in the normal distribution: the mean μ and the variance σ².
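As a quick sanity check on this definition, the density can be integrated numerically to confirm that it is a proper pdf, i.e. that it integrates to 1. The following is an assumed Python sketch (not from the handout), using an illustrative choice of parameters:

```python
# Illustrative sketch: Riemann-sum check that the N(mu, sigma^2) density
# integrates to (approximately) 1 over a wide interval around the mean.
import math

mu, sigma = 2.0, 0.5  # illustrative parameter values

def pdf(x: float) -> float:
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Sum over [mu - 10*sigma, mu + 10*sigma]; the printed value should be ≈ 1.0.
dx = 1e-3
xs = [mu + i * dx for i in range(int(-10 * sigma / dx), int(10 * sigma / dx) + 1)]
print(sum(pdf(x) * dx for x in xs))
```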