# Statistical Independence Relationship

A Statistical Independence Relationship is a binary relationship that holds when the Probability Distribution of one Random Variable is unaffected by the realization of the other Random Variable.

**AKA:** Independence Relation.

**Context:**
- It can range from being an N-Event Statistical Independence Relationship to being an N-Random Variable Statistical Independence Relationship.
- It can range from being a Stochastic Process Independence Relationship to being an Independent $\sigma$-Algebras Relation.

**Example(s):**
- In a Binomial Process, such as a sequence of Coin Toss Experiments, the Random Variables representing the Outcomes of the individual Experiments are in a Statistical Independence Relation with each other.
- an Event Self-Independence Relation: $\mathrm {P} (A)=\mathrm {P} (A\cap A)=\mathrm {P} (A)\cdot \mathrm {P} (A)$.
- a 2-Event Statistical Independence Relation: $\mathrm{P}(A \cap B) = \mathrm{P}(A)\mathrm{P}(B)$.
- a 3-Event Mutual Independence Relation: $\mathrm{P}(A \cap B \cap C) = \mathrm{P}(A)\mathrm{P}(B)\mathrm{P}(C)$.
- an N-Event Pairwise Independence Relation: $\mathrm {P} (A_{m}\cap A_{k})=\mathrm {P} (A_{m})\mathrm {P} (A_{k})$ for all distinct pairs of indices $m$,$k$ of the finite set of events $\{A_{i}\}_{i=1}^{n}$.
- an N-Event Mutual Independence Relation: $\mathrm {P} \left(\bigcap _{i=1}^{k}B_{i}\right)=\prod _{i=1}^{k}\mathrm {P} (B_{i})$ for every $k\leq n$ and for every $k$-element subset $\{B_{i}\}_{i=1}^{k}$ of the finite set of events $\{A_{i}\}_{i=1}^{n}$.
- a 2-Random Variable Independence Relation: $f_{X,Y}(x,y)=f_{X}(x)f_{Y}(y)$ for all $x$,$y$.
- a Random Variables Pairwise Independence Relation.
- …

**Counter-Example(s):**
- …

**See:** IID Random Variable Set, Independence Assumption, Mutual Independence Relation, Distinct Set Relation.

## References

### 2020

- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Independence_(probability_theory) Retrieved:2020-2-1.
- QUOTE: Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are **independent**, **statistically independent**, or **stochastically independent** ^{[1]} if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.
- When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are **mutually independent** (or **collectively independent**) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables.
- The name "mutual independence" (same as "collective independence") seems to be the outcome of a pedagogical choice, merely to distinguish the stronger notion from "pairwise independence", which is a weaker notion. In the advanced literature of probability theory, statistics and stochastic processes, the stronger notion is simply named **independence** with no modifier. It is stronger since independence implies pairwise independence, but not the other way around.

- ↑ Russell, Stuart; Norvig, Peter (2002). Artificial Intelligence: A Modern Approach. Prentice Hall. p. 478. ISBN 0-13-790395-2.

### 2009

- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Statistical_independence
- In probability theory, to say that two events are independent intuitively means that the occurrence of one event makes it neither more nor less probable that the other occurs. For example:
- The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent.
- By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trials is 8 are dependent.
- If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent.
- By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are dependent.
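The dice examples above can be verified by exact enumeration over the 36 equally likely outcomes of two rolls; a minimal sketch (variable names are illustrative):

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair six-sided dice, each of the 36 outcomes equally likely.
omega = list(product(range(1, 7), repeat=2))
p = Fraction(1, len(omega))

def prob(event):
    """Probability of an event (a set of outcomes) under the uniform measure."""
    return p * len(event)

six_first = {w for w in omega if w[0] == 6}    # 6 on the first roll
six_second = {w for w in omega if w[1] == 6}   # 6 on the second roll
sum_eight = {w for w in omega if sum(w) == 8}  # the two rolls sum to 8

# Independent: the product rule holds exactly.
assert prob(six_first & six_second) == prob(six_first) * prob(six_second)

# Dependent: the product rule fails.
print(prob(six_first & sum_eight))        # 1/36
print(prob(six_first) * prob(sum_eight))  # 5/216
```

Getting a 6 twice satisfies $P(A \cap B) = P(A)P(B) = 1/36$, while "first roll is 6" and "sum is 8" do not, matching the dependent case described above.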

- Similarly, two random variables are independent if the conditional probability distribution of either given the observed value of the other is the same as if the other's value had not been observed. The concept of independence extends to dealing with collections of more than two events or random variables.


### 2006a

- (Dubnicka, 2006a) ⇒ Suzanne R. Dubnicka. (2006). “STAT 510: Handout 1 - Probability Terminology." Kansas State University.
- QUOTE: We call two events A and B mutually exclusive, or disjoint, if $A \cap B = \emptyset$, so that they have no outcomes in common. Thus, if A occurs then B cannot occur. Extending this definition to a finite or countable collection of sets is obvious.

### 2006b

- (Dubnicka, 2006b) ⇒ Suzanne R. Dubnicka. (2006). “STAT 510: Handout 2 - Counting Techniques and More Probability." Kansas State University.
- QUOTE: When the occurrence or non-occurrence of A has no effect on whether or not B occurs, and vice-versa, we say that the events A and B are independent. Mathematically, we define A and B to be independent iff (if and only if) $P(A \cap B) = P(A)P(B)$. Otherwise, A and B are called dependent events. Note that if A and B are independent, $P(A \mid B) = P(A \cap B)/P(B) = P(A)P(B)/P(B) = P(A)$ and $P(B \mid A) = P(B \cap A)/P(A) = P(B)P(A)/P(A) = P(B)$.
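Dubnicka's observation that $P(A \mid B) = P(A)$ under independence can be illustrated with a small Monte Carlo sketch of the with-replacement card-drawing example; the sample size and seed are arbitrary choices:

```python
import random

random.seed(0)

# Two cards drawn *with replacement* from a standard deck: "red on the first
# draw" and "red on the second draw" are independent, so the conditional
# relative frequency P(A | B) should be close to P(A) = 1/2.
deck = ["red"] * 26 + ["black"] * 26
n = 100_000

count_a = count_b = count_ab = 0
for _ in range(n):
    first, second = random.choice(deck), random.choice(deck)
    a, b = first == "red", second == "red"
    count_a += a
    count_b += b
    count_ab += a and b

p_a = count_a / n
p_a_given_b = count_ab / count_b  # empirical P(A | B) = P(A ∩ B) / P(B)
print(round(p_a, 2), round(p_a_given_b, 2))  # both close to 0.5
```

Conditioning on the second draw leaves the empirical distribution of the first draw essentially unchanged, which is exactly the "neither more nor less probable" intuition quoted above.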