# Binomial Probability Function

## References

### 2014

• (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Bernoulli_distribution Retrieved:2014-10-28.
• In probability theory and statistics, the Bernoulli distribution, named after Swiss scientist Jacob Bernoulli, is the probability distribution of a random variable which takes value 1 with success probability $\displaystyle{ p }$ and value 0 with failure probability $\displaystyle{ q=1-p }$. It can be used, for example, to represent the toss of a coin, where "1" is defined to mean "heads" and "0" is defined to mean "tails" (or vice versa).
• parameters = $\displaystyle{ 0\lt p\lt 1,\; p\in\mathbb{R} }$
• support = $\displaystyle{ k \in \{0,1\}\, }$
• pdf =$\displaystyle{ \begin{cases} q=(1-p) & \text{for }k=0 \\ p & \text{for }k=1 \end{cases} }$
• cdf =$\displaystyle{ \begin{cases} 0 & \text{for }k\lt 0 \\ q & \text{for }0\leq k\lt 1 \\ 1 & \text{for }k\geq 1 \end{cases} }$
• mean = $\displaystyle{ p\, }$
• median = $\displaystyle{ \begin{cases} 0 & \text{if } q \gt p\\ 0.5 & \text{if } q=p\\ 1 & \text{if } q\lt p \end{cases} }$
• mode = $\displaystyle{ \begin{cases} 0 & \text{if } q \gt p\\ 0, 1 & \text{if } q=p\\ 1 & \text{if } q \lt p \end{cases} }$
• variance = $\displaystyle{ p(1-p)\, }$
• skewness = $\displaystyle{ \frac{q-p}{\sqrt{pq}} }$
• kurtosis = $\displaystyle{ \frac{1-6pq}{pq} }$
• entropy = $\displaystyle{ -q\ln(q)-p\ln(p)\, }$
• mgf = $\displaystyle{ q+pe^t\, }$
• char = $\displaystyle{ q+pe^{it}\, }$
• pgf = $\displaystyle{ q+pz\, }$
• fisher = $\displaystyle{ \frac{1}{p(1-p)} }$
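As an illustrative sketch (not part of the source reference), the Bernoulli pmf, cdf, and moments listed above translate directly into Python using only the standard library; the function names are hypothetical:

```python
import math
import random

def bernoulli_pmf(k, p):
    """PMF of a Bernoulli(p) variable: p for k = 1, q = 1 - p for k = 0."""
    if k == 1:
        return p
    if k == 0:
        return 1.0 - p
    return 0.0  # zero outside the support {0, 1}

def bernoulli_cdf(k, p):
    """CDF: 0 below the support, q on [0, 1), 1 at or above 1."""
    if k < 0:
        return 0.0
    if k < 1:
        return 1.0 - p
    return 1.0

p = 0.3
mean = p                                                  # E[X] = p
variance = p * (1 - p)                                    # Var[X] = pq
entropy = -(1 - p) * math.log(1 - p) - p * math.log(p)    # -q ln q - p ln p

# Simulating one coin toss with success probability p:
sample = 1 if random.random() < p else 0
```

For example, with $p = 0.3$ the variance is $0.3 \times 0.7 = 0.21$, matching $p(1-p)$ from the table.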

### 2011

• (Wikipedia, 2011) ⇒ http://en.wikipedia.org/wiki/Binomial_distribution
• In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of $\displaystyle{ n }$ independent yes/no experiments, each of which yields success with probability $\displaystyle{ p }$. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; in fact, when $\displaystyle{ n = 1 }$, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance, and should not be confused with a bimodal distribution. The binomial distribution is frequently used to model the number of successes in a sample of size $\displaystyle{ n }$ drawn with replacement from a population of size $\displaystyle{ N }$. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for $\displaystyle{ N }$ much larger than $\displaystyle{ n }$, the binomial distribution is a good approximation, and is widely used. In general, if the random variable $\displaystyle{ K }$ follows the binomial distribution with parameters $\displaystyle{ n }$ and $\displaystyle{ p }$, we write $\displaystyle{ K \sim B(n, p) }$. The probability of getting exactly $\displaystyle{ k }$ successes in $\displaystyle{ n }$ trials is given by the probability mass function: $\displaystyle{ f(k;n,p) = \Pr(K = k) = {n\choose k}p^k(1-p)^{n-k} }$ for $\displaystyle{ k = 0, 1, 2, \ldots, n }$, where $\displaystyle{ {n\choose k}=\frac{n!}{k!(n-k)!} }$ is the binomial coefficient (hence the name of the distribution), read “n choose k” and also denoted $\displaystyle{ C(n, k) }$, $\displaystyle{ ^nC_k }$, or $\displaystyle{ C^n_k }$. The formula can be understood as follows: we want $\displaystyle{ k }$ successes (probability $\displaystyle{ p^k }$) and $\displaystyle{ n-k }$ failures (probability $\displaystyle{ (1-p)^{n-k} }$); the $\displaystyle{ k }$ successes can occur anywhere among the $\displaystyle{ n }$ trials, and there are $\displaystyle{ C(n, k) }$ different ways of distributing $\displaystyle{ k }$ successes in a sequence of $\displaystyle{ n }$ trials.
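The probability mass function above can be sketched in a few lines of Python, using the standard library's `math.comb` for the binomial coefficient (the function name `binomial_pmf` is illustrative, not from the source):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Pr(K = k) for K ~ B(n, p): C(n, k) * p^k * (1 - p)^(n - k)."""
    if k < 0 or k > n:
        return 0.0  # no mass outside {0, 1, ..., n}
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Example: probability of exactly 2 heads in 4 fair coin tosses,
# C(4, 2) * 0.5^2 * 0.5^2 = 6 / 16 = 0.375.
prob = binomial_pmf(2, 4, 0.5)

# With n = 1 the binomial reduces to the Bernoulli distribution:
# binomial_pmf(1, 1, p) = p and binomial_pmf(0, 1, p) = 1 - p.
```

Summing the pmf over $k = 0, \ldots, n$ gives 1, since the terms are exactly the binomial expansion of $(p + (1-p))^n$.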