Statistical Independence Relationship


A Statistical Independence Relationship is a binary relationship between two Random Variables in which the Probability Distribution of one Random Variable is unaffected by the realized value of the other Random Variable.



References

2020

  • (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Independence_(probability_theory) Retrieved:2020-2-1.
    • Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes.

      Two events are independent, statistically independent, or stochastically independent[1] if the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds). Similarly, two random variables are independent if the realization of one does not affect the probability distribution of the other.

      When dealing with collections of more than two events, a weak and a strong notion of independence need to be distinguished. The events are called pairwise independent if any two events in the collection are independent of each other, while saying that the events are mutually independent (or collectively independent) intuitively means that each event is independent of any combination of other events in the collection. A similar notion exists for collections of random variables.

      The name "mutual independence" (same as "collective independence") seems the outcome of a pedagogical choice, merely to distinguish the stronger notion from "pairwise independence" which is a weaker notion. In the advanced literature of probability theory, statistics and stochastic processes, the stronger notion is simply named independence with no modifier. It is stronger since independence implies pairwise independence, but not the other way around.

  1. Russell, Stuart; Norvig, Peter (2002). Artificial Intelligence: A Modern Approach. Prentice Hall. p. 478. ISBN 0-13-790395-2.
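
The pairwise/mutual distinction above can be checked exactly on a small sample space. The following Python sketch is an editorial illustration (not part of the quoted Wikipedia text), using the classic example of two fair coin flips with events A = "first coin is heads", B = "second coin is heads", and C = "both coins show the same face": the three events are pairwise independent but not mutually independent.

<syntaxhighlight lang="python">
# Pairwise vs. mutual independence on two fair coin flips
# (editorial sketch; events A, B, C are illustrative choices).
from itertools import product
from fractions import Fraction

outcomes = list(product("HT", repeat=2))  # 4 equally likely outcomes

def prob(event):
    """Exact probability of an event (a predicate over outcomes)."""
    return Fraction(sum(1 for w in outcomes if event(w)), len(outcomes))

A = lambda w: w[0] == "H"   # first coin is heads
B = lambda w: w[1] == "H"   # second coin is heads
C = lambda w: w[0] == w[1]  # both coins show the same face

# Pairwise independence: P(X ∩ Y) == P(X)P(Y) for every pair.
for X, Y in [(A, B), (A, C), (B, C)]:
    assert prob(lambda w: X(w) and Y(w)) == prob(X) * prob(Y)

# But not mutual independence: P(A ∩ B ∩ C) != P(A)P(B)P(C).
p_abc = prob(lambda w: A(w) and B(w) and C(w))
print(p_abc, prob(A) * prob(B) * prob(C))  # 1/4 vs. 1/8
</syntaxhighlight>

Here [math]\displaystyle{ P(A \cap B \cap C) = 1/4 }[/math] while [math]\displaystyle{ P(A)P(B)P(C) = 1/8 }[/math], so checking only pairs misses the dependence among all three events.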

2009

  • (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Statistical_independence
    • In probability theory, to say that two events are independent intuitively means that the occurrence of one event makes it neither more nor less probable that the other occurs. For example:
      • The event of getting a 6 the first time a die is rolled and the event of getting a 6 the second time are independent.
      • By contrast, the event of getting a 6 the first time a die is rolled and the event that the sum of the numbers seen on the first and second trials is 8 are dependent.
      • If two cards are drawn with replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are independent.
      • By contrast, if two cards are drawn without replacement from a deck of cards, the event of drawing a red card on the first trial and that of drawing a red card on the second trial are dependent.
    • Similarly, two random variables are independent if the conditional probability distribution of either given the observed value of the other is the same as if the other's value had not been observed. The concept of independence extends to dealing with collections of more than two events or random variables.
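
The with/without-replacement contrast in the card examples above can be verified with exact arithmetic. The Python sketch below is an editorial illustration (the counts of 26 red cards in a 52-card deck are the standard deck composition, not stated in the quote): with replacement, the conditional probability of a red second draw equals the unconditional one; without replacement, it does not.

<syntaxhighlight lang="python">
# Exact check of the card-drawing examples (editorial sketch).
from fractions import Fraction

RED, TOTAL = 26, 52  # red cards / total cards in a standard deck

p_red1 = Fraction(RED, TOTAL)  # P(red on first draw) = 1/2

# With replacement: the deck is restored, so the second draw
# is unaffected by the first -- P(B|A) = P(B): independent.
p_red2_given_red1_with = Fraction(RED, TOTAL)
assert p_red2_given_red1_with == p_red1

# Without replacement: one red card is gone before the second draw,
# while unconditionally P(red on second draw) is still 1/2 by symmetry.
p_red2_given_red1_without = Fraction(RED - 1, TOTAL - 1)  # 25/51
print(p_red1, p_red2_given_red1_without)  # 1/2 vs. 25/51: dependent
</syntaxhighlight>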

2006a

  • (Dubnicka, 2006a) ⇒ Suzanne R. Dubnicka. (2006). “STAT 510: Handout 1 - Probability Terminology.” Kansas State University.
    • QUOTE: We call two events A and B mutually exclusive, or disjoint, if [math]\displaystyle{ A \cap B = \emptyset }[/math], so that they have no outcomes in common. Thus, if A occurs then B cannot occur. Extending this definition to a finite or countable collection of sets is obvious.
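
An editorial note connecting this definition to the next entry: mutually exclusive is not the same as independent. In fact, two disjoint events with positive probability are always dependent, since [math]\displaystyle{ P(A \cap B) = P(\emptyset) = 0 \neq P(A)P(B). }[/math]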

2006b

  • (Dubnicka, 2006b) ⇒ Suzanne R. Dubnicka. (2006). “STAT 510: Handout 2 - Counting Techniques and More Probability.” Kansas State University.
    • QUOTE: When the occurrence or non-occurrence of A has no effect on whether or not B occurs, and vice-versa, we say that the events A and B are independent. Mathematically, we define A and B to be independent iff (if and only if) [math]\displaystyle{ P(A \cap B) = P(A)P(B). }[/math] Otherwise, A and B are called dependent events. Note that if A and B are independent, [math]\displaystyle{ P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)P(B)}{P(B)} = P(A) }[/math] and [math]\displaystyle{ P(B|A) = \frac{P(B \cap A)}{P(A)} = \frac{P(B)P(A)}{P(A)} = P(B). }[/math]
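
The defining equation [math]\displaystyle{ P(A \cap B) = P(A)P(B) }[/math] can be verified exactly on the two-dice examples from the 2009 quote above. The Python sketch below is an editorial illustration (event names A, B, C are illustrative choices, not from the handout).

<syntaxhighlight lang="python">
# Checking P(A ∩ B) = P(A)P(B) on the two-dice examples (editorial sketch).
from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

def prob(event):
    """Exact probability of an event (a predicate over outcomes)."""
    return Fraction(sum(1 for w in rolls if event(w)), len(rolls))

A = lambda w: w[0] == 6         # first roll is a 6
B = lambda w: w[1] == 6         # second roll is a 6
C = lambda w: w[0] + w[1] == 8  # the two rolls sum to 8

# Independent pair: P(A ∩ B) == P(A)P(B) = 1/36.
assert prob(lambda w: A(w) and B(w)) == prob(A) * prob(B)

# Dependent pair: P(A|C) != P(A), since only (6, 2) lies in A ∩ C.
p_a_given_c = prob(lambda w: A(w) and C(w)) / prob(C)  # (1/36)/(5/36) = 1/5
print(p_a_given_c, prob(A))  # 1/5 vs. 1/6: dependent
</syntaxhighlight>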