# Negative Binomial Probability Distribution

A Negative Binomial Probability Distribution is a discrete probability distribution that can be an instantiation of a Negative Binomial Mass Function.

## References

### 2012

• http://en.wikipedia.org/wiki/Negative_binomial_distribution
• QUOTE: In probability theory and statistics, the negative binomial distribution is a discrete probability distribution of the number of successes in a sequence of Bernoulli trials before a specified (non-random) number of failures (denoted r) occurs. For example, if one throws a die repeatedly until the third time “1” appears, then the probability distribution of the number of non-“1”s that had appeared will be negative binomial.

The Pascal distribution (after Blaise Pascal) and Pólya distribution (after George Pólya) are special cases of the negative binomial. There is a convention among engineers, climatologists, and others to reserve “negative binomial” in a strict sense or “Pascal” for the case of an integer-valued stopping-time parameter r, and to use “Pólya” for the real-valued case. The Pólya distribution more accurately models occurrences of “contagious” discrete events, like tornado outbreaks, than the Poisson distribution does.

• http://en.wikipedia.org/wiki/Negative_binomial_distribution#Definition
• QUOTE: Suppose there is a sequence of independent Bernoulli trials, each trial having two potential outcomes called “success” and “failure”. In each trial the probability of success is p and of failure is (1 − p). We are observing this sequence until a predefined number r of failures has occurred. Then the random number of successes we have seen, X, will have the negative binomial (or Pascal) distribution:

$\displaystyle{ X\ \sim\ \text{NB}(r,\, p) }$

When applied to real-world situations, the words success and failure need not be associated with outcomes we see as good or bad. In one case we may use the negative binomial distribution to model the number of days a certain machine works before it breaks down; there, “success” would mean the machine was working properly, whereas “failure” would mean it broke down. In another case we may use it to model the number of attempts a sportsman needs to score a goal; then “failure” would be scoring the goal, whereas “successes” are misses.
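The trial-by-trial process described above can be sketched directly. The following is a minimal simulation, not an official implementation; the function name and parameters are illustrative, and the machine example assumes a hypothetical daily working probability of p = 0.9:

```python
import random

def sample_neg_binomial(r, p, rng):
    """Run Bernoulli(p) trials until the r-th failure occurs;
    return the number of successes observed along the way."""
    successes = failures = 0
    while failures < r:
        if rng.random() < p:   # success with probability p
            successes += 1
        else:                  # failure with probability 1 - p
            failures += 1
    return successes

# Example: days a machine works before its 3rd breakdown,
# assuming (hypothetically) it works on a given day with p = 0.9.
rng = random.Random(42)
samples = [sample_neg_binomial(3, 0.9, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # theoretical mean: r*p/(1-p) = 27
```

Averaging many such samples should land near the theoretical mean r·p/(1 − p).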

The probability mass function of the negative binomial distribution is

$\displaystyle{ f(k) \equiv \Pr(X = k) = {k+r-1 \choose k} (1-p)^r p^k \quad\text{for }k = 0, 1, 2, \dots }$

Here the quantity in parentheses is the binomial coefficient, and is equal to

$\displaystyle{ {k+r-1 \choose k} = \frac{(k+r-1)!}{k!\,(r-1)!} = \frac{(k+r-1)(k+r-2)\cdots(r)}{k!}. }$
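The mass function above translates line-for-line into code. This is a sketch under the quoted parameterization (k successes before the r-th failure, success probability p); the function name is our own:

```python
from math import comb

def nb_pmf(k, r, p):
    """Pr(X = k) = C(k+r-1, k) * (1-p)**r * p**k."""
    return comb(k + r - 1, k) * (1 - p) ** r * p ** k

# k = 0 means the first r trials are all failures:
p0 = nb_pmf(0, 3, 0.5)  # equals (1 - 0.5)**3 = 0.125
```

Summing `nb_pmf(k, r, p)` over k = 0, 1, 2, … converges to 1, as any probability mass function must.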

This quantity can alternatively be written in the following manner, explaining the name “negative binomial”:

$\displaystyle{ \frac{(k+r-1)\cdots(r)}{k!} = (-1)^k \frac{(-r)(-r-1)(-r-2)\cdots(-r-k+1)}{k!} = (-1)^k{-r \choose k}. \qquad (*) }$
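Identity (*) can be checked numerically. The sketch below defines the generalized binomial coefficient (the falling-factorial form used in (*), valid for negative upper arguments) with exact rational arithmetic; the helper name is our own:

```python
from fractions import Fraction
from math import comb

def gen_binom(x, k):
    """Generalized binomial coefficient 'x choose k' for any x:
    x (x-1) ... (x-k+1) / k!"""
    out = Fraction(1)
    for i in range(k):
        out *= Fraction(x - i, i + 1)
    return out

# Identity (*): C(k+r-1, k) == (-1)**k * C(-r, k)
ok = all(comb(k + r - 1, k) == (-1) ** k * gen_binom(-r, k)
         for r in range(1, 6) for k in range(0, 8))
```

The alternating sign in (*) exactly cancels the sign of the product (−r)(−r−1)⋯(−r−k+1), leaving the positive integer coefficient.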

To understand the above definition of the probability mass function, note that the probability of every specific sequence of k successes and r failures is $(1-p)^r p^k$, because the outcomes of the k + r trials are supposed to happen independently. Since the rth failure comes last, it remains to choose the k trials with successes out of the remaining k + r − 1 trials. The above binomial coefficient, due to its combinatorial interpretation, gives precisely the number of all these sequences of length k + r − 1.
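The counting argument above can be verified by brute force: enumerate every success/failure string of length k + r and keep those with exactly k successes whose final trial is the r-th failure. This is an illustrative check, feasible only for small k and r:

```python
from itertools import product
from math import comb

def count_sequences(k, r):
    """Count length-(k+r) success/failure strings with exactly k
    successes that end in a failure (the r-th one)."""
    total = 0
    for seq in product('SF', repeat=k + r):
        if seq[-1] == 'F' and seq.count('S') == k:
            total += 1
    return total
```

For every small k and r, the count agrees with the binomial coefficient C(k + r − 1, k) from the mass function.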