Conditional Probability Function

From GM-RKB

A conditional probability function is a probability function, [math]\displaystyle{ P(X|Y) }[/math], that reports the probability that an event [math]\displaystyle{ x \in X }[/math] occurs given that an event [math]\displaystyle{ y \in Y }[/math] has occurred.
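When the conditioning event has positive probability, this is given by the standard ratio definition:

[math]\displaystyle{ P(x \mid y) = \frac{P(x, y)}{P(y)}, \qquad P(y) \gt 0 }[/math]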



References

2015

  • (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/conditional_probability Retrieved:2015-6-2.
    • In probability theory, a conditional probability measures the probability of an event given that (by assumption, presumption, assertion or evidence) another event has occurred.

      For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person has a cold, then they are much more likely to be coughing. The conditional probability of coughing given that you have a cold might be a much higher 75%.
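The 75% figure in this example can be reproduced from the ratio definition P(A|B) = P(A and B) / P(B). A minimal sketch, with joint and marginal probabilities assumed purely for illustration (they are not given in the article):

```python
# Conditional probability from the ratio definition:
# P(cough | cold) = P(cough and cold) / P(cold).
# The numbers below are assumed for illustration only.

p_cold = 0.02             # P(cold): assumed probability of having a cold
p_cough_and_cold = 0.015  # P(cough and cold): assumed joint probability

p_cough_given_cold = p_cough_and_cold / p_cold
print(p_cough_given_cold)  # ≈ 0.75, matching the 75% in the example
```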

      If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A|B), or sometimes P_B(A).

      The concept of conditional probability is one of the most fundamental and one of the most important concepts in probability theory.[1]

      But conditional probabilities can be quite slippery and require careful interpretation.[2] For example, there need not be a causal or temporal relationship between A and B.

      In general P(A|B) is not equal to P(B|A). For example, if you have cancer you might have a 90% chance of testing positive for cancer, but if you test positive you might have only a 10% chance of actually having cancer, because cancer is very rare. Falsely equating the two probabilities causes various errors of reasoning, such as the base rate fallacy. Conditional probabilities can be correctly reversed using Bayes' theorem.
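The reversal described here can be sketched with Bayes' theorem: P(cancer|positive) = P(positive|cancer) · P(cancer) / P(positive), where the denominator comes from the law of total probability. The base rate and false-positive rate below are assumed for illustration; only the 90% sensitivity comes from the text:

```python
# Reversing a conditional probability with Bayes' theorem.
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive),
# with P(positive) expanded by the law of total probability.

p_cancer = 0.008              # assumed base rate of cancer
p_pos_given_cancer = 0.90     # sensitivity, matching the 90% in the text
p_pos_given_no_cancer = 0.07  # assumed false-positive rate

p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_no_cancer * (1 - p_cancer))
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
print(round(p_cancer_given_pos, 3))  # roughly 0.09: about a 10% chance
```

Even with a highly sensitive test, P(cancer|positive) stays small because the base rate P(cancer) is small — which is exactly the base rate fallacy the passage warns against.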

      P(A|B) (the conditional probability of A given B) may or may not be equal to P(A) (the unconditional probability of A). If P(A|B) = P(A), A and B are said to be independent.

  1. Sheldon Ross, A First Course in Probability, 8th Edition (2010), Pearson Prentice Hall, ISBN 978-0-13-603313-4
  2. George Casella and Roger L. Berger, Statistical Inference (2002), Duxbury Press, ISBN 978-0-534-24312-8
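The independence condition quoted above, P(A|B) = P(A), can be checked numerically; equivalently, A and B are independent exactly when P(A and B) = P(A) · P(B). A minimal sketch using two fair coin flips (an assumed setup, not from the article):

```python
# Independence check: A and B are independent iff P(A|B) == P(A),
# equivalently P(A and B) == P(A) * P(B).
# Example: heads on each of two fair coin flips (assumed setup).

p_a = 0.5         # P(first flip is heads)
p_b = 0.5         # P(second flip is heads)
p_a_and_b = 0.25  # joint probability for independent fair flips

p_a_given_b = p_a_and_b / p_b
print(p_a_given_b == p_a)  # True: conditioning on B does not change P(A)
```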
