Binomial Probability Function
A Binomial Probability Function is a finite-support categorical probability function for a binomial random variable.
- Context:
- input:
- p: the Success Probability for each Binomial Trial.
- [math]\displaystyle{ n }[/math]: the Random Experiment Count.
- [math]\displaystyle{ k }[/math]: the Success Count.
- [math]\displaystyle{ \mid\mid\Omega\mid\mid }[/math]: the Sample Space Size.
- range:
- It can be a member of a Binomial Probability Distribution Family.
- It can be expressed as [math]\displaystyle{ f(k;n,p) = \binom{n}{k}p^k(1-p)^{n-k} }[/math], which for [math]\displaystyle{ n=1 }[/math] reduces to the Bernoulli case [math]\displaystyle{ f(k;p) = \begin{cases} p & \text{if }k=1, \\ 1-p & \text{if }k=0.\end{cases} }[/math]
- It can (typically) be a member of a Binomial Probability Function Family.
- It has an Arithmetic Mean of [math]\displaystyle{ E(Z) = Np }[/math].
- It has an Arithmetic Variance of [math]\displaystyle{ Var(Z) = Np(1-p) }[/math].
- It can be instantiated as a Binomial Probability Distribution Structure, or a Binomial Probability Software Function.
- It can range from being a Negative Binomial Mass Function to being a Positive Binomial Mass Function.
- …
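The quantities above can be sketched in a few lines of Python (a minimal illustration; the helper name `binomial_pmf` is ours, and the mean and variance are checked by direct summation over the support):

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(K = k) for K ~ B(n, p): C(n, k) * p^k * (1 - p)^(n - k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# The Bernoulli case is the n = 1 member of the family.
assert abs(binomial_pmf(1, 1, 0.51) - 0.51) < 1e-12
assert abs(binomial_pmf(0, 1, 0.51) - 0.49) < 1e-12

# Mean Np and variance Np(1 - p), checked by direct summation.
n, p = 10, 0.3
mean = sum(k * binomial_pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean) ** 2 * binomial_pmf(k, n, p) for k in range(n + 1))
assert abs(mean - n * p) < 1e-9
assert abs(var - n * p * (1 - p)) < 1e-9
```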
- Example(s):
- [math]\displaystyle{ f(k;p=0.51) = \begin{cases} 0.51 & \text{if }k=1, \\ 0.49 & \text {if }k=0.\end{cases} }[/math]
- [math]\displaystyle{ P(\mid\mid\Omega\mid\mid=2, n=1, k=1) = \frac{1}{2} }[/math]
- [math]\displaystyle{ P(\mid\mid\Omega\mid\mid=2, n=2, k=1) = \frac{2}{4} }[/math]
- Counter-Example(s):
- a Multinomial Mass Function, such as [math]\displaystyle{ P(\mid\mid\Omega\mid\mid=3, n=2, k=1) = \frac{4}{9} }[/math]
- a Gaussian Probability Function.
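These example values can be reproduced numerically, taking p = 1/||Ω|| (a uniform per-trial chance of the target outcome, an interpretation we supply) and reading each example as an (n, k) pair; a sketch:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Chance of exactly k successes in n trials with per-trial probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# ||Omega|| = 2, n = 1, k = 1: a single fair-coin toss -> 1/2
assert abs(binomial_pmf(1, 1, 1 / 2) - 1 / 2) < 1e-12

# ||Omega|| = 2, n = 2, k = 1: one head in two tosses -> 2/4
assert abs(binomial_pmf(1, 2, 1 / 2) - 2 / 4) < 1e-12

# ||Omega|| = 3, n = 2, k = 1: one occurrence of a chosen face of a
# fair three-outcome experiment in two trials -> C(2,1)*(1/3)*(2/3) = 4/9
assert abs(binomial_pmf(1, 2, 1 / 3) - 4 / 9) < 1e-12
```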
- See: Binomial Trial, Cumulative Distribution Function, Bimodal Distribution.
References
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Bernoulli_distribution Retrieved:2014-10-28.
- In probability theory and statistics, the Bernoulli distribution, named after Swiss scientist Jacob Bernoulli, is the probability distribution of a random variable which takes value 1 with success probability [math]\displaystyle{ p }[/math] and value 0 with failure probability [math]\displaystyle{ q=1-p }[/math]. It can be used, for example, to represent the toss of a coin, where "1" is defined to mean "heads" and "0" is defined to mean "tails" (or vice versa).
- parameters = [math]\displaystyle{ 0\lt p\lt 1, p\in\R }[/math]
- support = [math]\displaystyle{ k \in \{0,1\}\, }[/math]
- pmf = [math]\displaystyle{ \begin{cases} q=(1-p) & \text{for }k=0 \\ p & \text{for }k=1 \end{cases} }[/math]
- cdf = [math]\displaystyle{ \begin{cases} 0 & \text{for }k\lt 0 \\ q & \text{for }0\leq k\lt 1 \\ 1 & \text{for }k\geq 1 \end{cases} }[/math]
- mean = [math]\displaystyle{ p\, }[/math]
- median = [math]\displaystyle{ \begin{cases} 0 & \text{if } q \gt p\\ 0.5 & \text{if } q=p\\ 1 & \text{if } q\lt p \end{cases} }[/math]
- mode = [math]\displaystyle{ \begin{cases} 0 & \text{if } q \gt p\\ 0, 1 & \text{if } q=p\\ 1 & \text{if } q \lt p \end{cases} }[/math]
- variance = [math]\displaystyle{ p(1-p)\, }[/math]
- skewness = [math]\displaystyle{ \frac{q-p}{\sqrt{pq}} }[/math]
- kurtosis = [math]\displaystyle{ \frac{1-6pq}{pq} }[/math]
- entropy = [math]\displaystyle{ -q\ln(q)-p\ln(p)\, }[/math]
- mgf = [math]\displaystyle{ q+pe^t\, }[/math]
- char = [math]\displaystyle{ q+pe^{it}\, }[/math]
- pgf = [math]\displaystyle{ q+pz\, }[/math]
- fisher = [math]\displaystyle{ \frac{1}{p(1-p)} }[/math]
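Several of the tabulated Bernoulli quantities can be cross-checked by direct summation over the support {0, 1} (a sketch; note the kurtosis row is the excess kurtosis):

```python
from math import sqrt, log

p = 0.3
q = 1 - p
pmf = {0: q, 1: p}  # the Bernoulli probability mass function

mean = sum(k * pmf[k] for k in pmf)
var = sum((k - mean) ** 2 * pmf[k] for k in pmf)
skew = sum(((k - mean) / sqrt(var)) ** 3 * pmf[k] for k in pmf)
ex_kurt = sum(((k - mean) / sqrt(var)) ** 4 * pmf[k] for k in pmf) - 3
entropy = -q * log(q) - p * log(p)

assert abs(mean - p) < 1e-12                          # mean = p
assert abs(var - p * q) < 1e-12                       # variance = pq
assert abs(skew - (q - p) / sqrt(p * q)) < 1e-9       # skewness = (q-p)/sqrt(pq)
assert abs(ex_kurt - (1 - 6 * p * q) / (p * q)) < 1e-9  # excess kurtosis
```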
2011
- (Wikipedia, 2011) ⇒ http://en.wikipedia.org/wiki/Binomial_distribution
- In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of [math]\displaystyle{ n }[/math] independent yes/no experiments, each of which yields success with probability [math]\displaystyle{ p }[/math]. Such a success/failure experiment is also called a Bernoulli experiment or Bernoulli trial. In fact, when [math]\displaystyle{ n }[/math] = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance. A binomial distribution should not be confused with a bimodal distribution. The binomial distribution is frequently used to model the number of successes in a sample of size [math]\displaystyle{ n }[/math] drawn with replacement from a population of size [math]\displaystyle{ N }[/math]. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for [math]\displaystyle{ N }[/math] much larger than [math]\displaystyle{ n }[/math], the binomial distribution is a good approximation, and widely used. In general, if the random variable [math]\displaystyle{ K }[/math] follows the binomial distribution with parameters [math]\displaystyle{ n }[/math] and [math]\displaystyle{ p }[/math], we write K ~ B(n, p). The probability of getting exactly [math]\displaystyle{ k }[/math] successes in [math]\displaystyle{ n }[/math] trials is given by the probability mass function: [math]\displaystyle{ f(k;n,p) = \Pr(K = k) = {n\choose k}p^k(1-p)^{n-k} }[/math] for k = 0, 1, 2, ..., n, where [math]\displaystyle{ {n\choose k}=\frac{n!}{k!(n-k)!} }[/math] is the binomial coefficient (hence the name of the distribution), "n choose k", also denoted C(n, k) or nCk. The formula can be understood as follows: we want [math]\displaystyle{ k }[/math] successes ([math]\displaystyle{ p^k }[/math]) and [math]\displaystyle{ n-k }[/math] failures ([math]\displaystyle{ (1-p)^{n-k} }[/math]). However, the [math]\displaystyle{ k }[/math] successes can occur anywhere among the [math]\displaystyle{ n }[/math] trials, and there are C(n, k) different ways of distributing [math]\displaystyle{ k }[/math] successes in a sequence of [math]\displaystyle{ n }[/math] trials.
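The quote's remark that the binomial approximates sampling without replacement once N is much larger than n can be illustrated directly (a sketch; the helper names and the particular population sizes below are our choices):

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(K = k) for K ~ B(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def hypergeom_pmf(k, N, K, n):
    """P(k successes in n draws without replacement from N items, K of them successes)."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Fix n = 10 draws, k = 3 successes, p = 0.3, and grow the population size N.
n, k, p = 10, 3, 0.3
b = binomial_pmf(k, n, p)
gaps = [abs(hypergeom_pmf(k, N, int(p * N), n) - b) for N in (50, 500, 5000)]
assert gaps[0] > gaps[1] > gaps[2]  # the approximation improves as N grows
```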
2002
- QuickCalcs Online Calculator: http://www.graphpad.com/quickcalcs/probability1.cfm
- QUOTE: The binomial distribution applies when there are two possible outcomes. You know the probability of obtaining either outcome (traditionally called "success" and "failure") and want to know the chance of obtaining a certain number of successes in a certain number of trials.
- How many trials (or subjects) per experiment?
- What is the probability of "success" in each trial or subject?
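The calculator's two questions map directly onto the parameters n and p; a minimal sketch of the same computation (the helper names are ours, and the at-least-k tail is one common output of such calculators):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Chance of exactly k successes in n trials with per-trial probability p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def binomial_at_least(k, n, p):
    """Chance of k or more successes in n trials."""
    return sum(binomial_pmf(j, n, p) for j in range(k, n + 1))

# 10 trials at p = 0.5: "at least 0 successes" is certain,
# i.e. the exact-k probabilities sum to 1 over the support.
assert abs(binomial_at_least(0, 10, 0.5) - 1.0) < 1e-12
```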
1997
- (Borwein, Watters, & Borowski, 1997).
- QUOTE: [A Binomial Mass Function is] a statistical distribution giving the probability of obtaining a specific number of successes in a binomial experiment.