# Continuous Probability Distribution Family

A Continuous Probability Distribution Family is a probability distribution family that defines a set of continuous probability functions.

**Context:**
- It can range from being a Bounded Interval Continuous Probability Distribution Family, to being a Semi-Infinite Interval Support Continuous Probability Distribution Family, to being a Whole Real Line Continuous Probability Distribution Family, to being a Continuous Probability Distribution Family with Variable Support.
- It can range from being a Symmetric Continuous Probability Distribution Family to being a Skewed Continuous Probability Distribution Family.
- …

**Example(s):**
- a Uniform Continuous Probability Distribution.
- an Exponential Probability Distribution.
- a Gaussian Probability Distribution, Gaussian(*x*; *μ*, *σ*).
- a Chi Probability Distribution Family.
- a Gamma Probability Distribution.
- a Weibull Probability Distribution.
- a Beta Probability Distribution.
- a Laplace Probability Distribution.
- a Power Law Probability Distribution.
- a Log-Normal Distribution.
- a K Probability Distribution.
- …
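Several of the families above can be instantiated concretely. A minimal sketch using `scipy.stats` (the names follow scipy's conventions, and the parameter values shown are arbitrary illustrations, not canonical choices); each member's density integrates to 1 over its support:

```python
# Sketch: instantiating members of several continuous distribution families
# via scipy.stats and checking that each density integrates to 1.
from scipy import stats
from scipy.integrate import quad

# Arbitrary illustrative parameter values for one member of each family.
families = {
    "uniform": stats.uniform(loc=0, scale=1),
    "exponential": stats.expon(scale=1.0),   # rate parameter = 1
    "gamma": stats.gamma(a=2.0),             # shape parameter a
    "weibull": stats.weibull_min(c=1.5),     # shape parameter c
    "beta": stats.beta(a=2.0, b=3.0),
    "laplace": stats.laplace(loc=0, scale=1),
    "log-normal": stats.lognorm(s=0.5),
}

# A probability density function integrates to 1 over the member's support.
for name, dist in families.items():
    total, _ = quad(dist.pdf, *dist.support())
    print(f"{name}: integral of pdf = {total:.4f}")
```

Choosing different parameter values selects a different member of the same family; the functional form of the density is what the family shares.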

**Counter-Example(s):**
- a Discrete Probability Distribution Family, such as a Binomial Probability Distribution Family.

**See:** Conditional Probability Distribution, Statistical Model Family, Continuous Probability Model Fitting, Mixture Probability Distribution.

## References

### 2015

- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/probability_density_function#Families_of_densities Retrieved:2015-6-24.
- It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by [math]\displaystyle{ \mu }[/math] and [math]\displaystyle{ \sigma^2 }[/math] respectively, giving the family of densities : [math]\displaystyle{ f(x;\mu,\sigma^2) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2 }. }[/math] It is important to keep in mind the difference between the domain of a family of densities and the parameters of the family. Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of *something* in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution. Since the parameters are constants, reparametrizing a density in terms of different parameters, to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones. Changing the domain of a probability density, however, is trickier and requires more work: see the section below on change of variables.
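The parametrized normal family above can be sketched directly: different (μ, σ²) pairs pick out different members of the family, all defined on the same domain (the whole real line), and reparametrizing is just substituting new parameter values into the same formula.

```python
# Sketch: the parametrized family of normal densities f(x; mu, sigma^2).
import math

def normal_pdf(x, mu, sigma2):
    """Density of N(mu, sigma2) evaluated at x."""
    # The normalization factor contains only parameters, not the variable x.
    norm_const = 1.0 / math.sqrt(2 * math.pi * sigma2)
    # The kernel carries the dependence on the variable x.
    return norm_const * math.exp(-0.5 * (x - mu) ** 2 / sigma2)

# Reparametrizing = substituting new parameter values into the same formula:
standard = normal_pdf(0.0, mu=0.0, sigma2=1.0)  # 1/sqrt(2*pi), about 0.3989
shifted  = normal_pdf(0.0, mu=1.0, sigma2=4.0)  # a different family member
```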


- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/probability_distribution#Continuous_probability_distribution Retrieved:2015-6-2.
- A **continuous probability distribution** is a *probability distribution* that has a probability density function. Mathematicians also call such a distribution *absolutely continuous*, since its cumulative distribution function is absolutely continuous with respect to the Lebesgue measure *λ*. If the distribution of *X* is continuous, then *X* is called a **continuous random variable**. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others. Intuitively, a continuous random variable is one which can take a continuous range of values, as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g., rolling 3½ on a standard die is impossible, and has probability zero), this is not so in the case of a continuous random variable. For example, if one measures the width of an oak leaf, the result of 3½ cm is possible; however, it has probability zero because uncountably many other potential values exist even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome will fall into the interval is nonzero. This apparent paradox is resolved by the fact that the probability that *X* attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities for individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero.

Formally, if *X* is a continuous random variable, then it has a probability density function *ƒ*(*x*), and therefore its probability of falling into a given interval, say [*a*, *b*], is given by the integral : [math]\displaystyle{ \Pr[a\le X\le b] = \int_a^b f(x) \, dx }[/math] In particular, the probability for *X* to take any single value *a* (that is, *a* ≤ *X* ≤ *a*) is zero, because an integral with coinciding upper and lower limits is always equal to zero.

The definition states that a continuous probability distribution must possess a density, or equivalently, its cumulative distribution function must be absolutely continuous. This requirement is stronger than simple continuity of the cumulative distribution function, and there is a special class of distributions, singular distributions, which are neither continuous nor discrete nor a mixture of those. An example is given by the Cantor distribution. Such singular distributions, however, are never encountered in practice.

Note on terminology: some authors use the term "continuous distribution" to denote the distribution with continuous cumulative distribution function. Thus, their definition includes both the (absolutely) continuous and singular distributions.

By one convention, a probability distribution [math]\displaystyle{ \,\mu }[/math] is called *continuous* if its cumulative distribution function [math]\displaystyle{ F(x)=\mu(-\infty,x] }[/math] is continuous and, therefore, the probability measure of singletons [math]\displaystyle{ \mu\{x\}\,=\,0 }[/math] for all [math]\displaystyle{ \,x }[/math].

Another convention reserves the term *continuous probability distribution* for absolutely continuous distributions. These distributions can be characterized by a probability density function: a non-negative Lebesgue integrable function [math]\displaystyle{ \,f }[/math] defined on the real numbers such that : [math]\displaystyle{ F(x) = \mu(-\infty,x] = \int_{-\infty}^x f(t)\,dt. }[/math] Discrete distributions and some continuous distributions (like the Cantor distribution) do not admit such a density.
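The interval-probability integral above can be checked numerically. A minimal sketch, assuming a standard normal variable and an arbitrary illustrative interval [-1, 1]; the point-probability case follows because an integral with coinciding limits is zero:

```python
# Sketch: Pr[a <= X <= b] as an integral of the density; a single point has
# probability zero because the integral with coinciding limits vanishes.
from scipy import stats
from scipy.integrate import quad

X = stats.norm(loc=0, scale=1)  # a continuous random variable (standard normal)

a, b = -1.0, 1.0
p_interval, _ = quad(X.pdf, a, b)  # Pr[-1 <= X <= 1], about 0.6827
p_point, _ = quad(X.pdf, a, a)     # Pr[X = a]: coinciding limits give 0

print(p_interval, p_point)
```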
