Law of Total Expectation

From GM-RKB

A Law of Total Expectation is a theorem that states [math]\displaystyle{ \operatorname{E} (X) = \operatorname{E} (\operatorname{E} (X \mid Y)) }[/math].



References

2018a

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Law_of_total_expectation Retrieved:2018-9-9.
    • The proposition in probability theory known as the law of total expectation, the law of iterated expectations, the tower rule, Adam's law, and the smoothing theorem, among other names, states that if [math]\displaystyle{ X }[/math] is a random variable whose expected value [math]\displaystyle{ \operatorname{E}(X) }[/math] is defined, and [math]\displaystyle{ Y }[/math] is any random variable on the same probability space, then: [math]\displaystyle{ \operatorname{E} (X) = \operatorname{E} (\operatorname{E} (X \mid Y)), }[/math] i.e., the expected value of the conditional expected value of [math]\displaystyle{ X }[/math] given [math]\displaystyle{ Y }[/math] is the same as the expected value of [math]\displaystyle{ X }[/math].

      One special case states that if [math]\displaystyle{ {\left\{A_i\right\}}_i }[/math] is a finite or countable partition of the sample space, then: [math]\displaystyle{ \operatorname{E} (X) = \sum_i{\operatorname{E}(X \mid A_i) \operatorname{P}(A_i)}. }[/math]
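
      The partition special case above can be checked exactly on a small example. The sketch below (the die-parity setup is illustrative, not from the source) takes [math]\displaystyle{ X }[/math] to be a fair six-sided die and partitions the sample space into the events "even" and "odd", then compares [math]\displaystyle{ \operatorname{E}(X) }[/math] computed directly against [math]\displaystyle{ \sum_i \operatorname{E}(X \mid A_i)\operatorname{P}(A_i) }[/math]:

```python
from fractions import Fraction

# A fair six-sided die: each face has probability 1/6.
outcomes = [1, 2, 3, 4, 5, 6]
p = Fraction(1, 6)

# Direct expectation E(X).
e_x = sum(x * p for x in outcomes)

# Partition of the sample space into A_1 = "even", A_2 = "odd".
partition = {
    "even": [x for x in outcomes if x % 2 == 0],
    "odd":  [x for x in outcomes if x % 2 == 1],
}

# Sum of E(X | A_i) * P(A_i) over the partition.
total = Fraction(0)
for faces in partition.values():
    p_a = len(faces) * p                                   # P(A_i)
    e_given = sum(Fraction(x, len(faces)) for x in faces)  # E(X | A_i)
    total += e_given * p_a

print(e_x, total)  # both 7/2
```

      Using exact `Fraction` arithmetic rather than floats makes the equality `e_x == total` hold exactly, not merely up to rounding.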

2018b

  • (Proof Wiki, 2018) ⇒ https://proofwiki.org/wiki/Total_Expectation_Theorem Retrieved:2018-9-9.
    • QUOTE: Let [math]\displaystyle{ \mathcal E = \left({\Omega, \Sigma, \Pr}\right) }[/math] be a probability space.

      Let [math]\displaystyle{ X }[/math] be a discrete random variable on [math]\displaystyle{ \mathcal E }[/math].

      Let [math]\displaystyle{ \left\{B_1, B_2, \ldots\right\} }[/math] be a partition of [math]\displaystyle{ \Omega }[/math] such that [math]\displaystyle{ \Pr \left({B_i}\right) \gt 0 }[/math] for each [math]\displaystyle{ i }[/math].

      Then:

      [math]\displaystyle{ \displaystyle E \left({X}\right) = \sum_i E \left({X \mid B_i}\right) \Pr \left({B_i}\right) }[/math]

      whenever this sum converges absolutely.

      In the above:

      [math]\displaystyle{ E \left({X}\right) }[/math] denotes the expectation of [math]\displaystyle{ X }[/math];

      [math]\displaystyle{ E \left({X \mid B_i}\right) }[/math] denotes the conditional expectation of [math]\displaystyle{ X }[/math] given [math]\displaystyle{ B_i }[/math].
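
      The general tower rule [math]\displaystyle{ \operatorname{E}(X) = \operatorname{E}(\operatorname{E}(X \mid Y)) }[/math] can likewise be verified exactly for a two-stage experiment. In the sketch below (the coin-flip setup is illustrative, not from the source), [math]\displaystyle{ Y }[/math] is uniform on [math]\displaystyle{ \{1, \ldots, 6\} }[/math] and gives the number of fair-coin flips, and [math]\displaystyle{ X }[/math] is the number of heads; since [math]\displaystyle{ \operatorname{E}(X \mid Y = y) = y/2 }[/math], both sides equal [math]\displaystyle{ 7/4 }[/math]:

```python
from fractions import Fraction
from math import comb

# Y ~ uniform on {1, ..., 6}: the number of fair-coin flips.
p_y = Fraction(1, 6)

# E(E(X | Y)): for a fair coin, E(X | Y = y) = y/2.
tower = sum(Fraction(y, 2) * p_y for y in range(1, 7))

# E(X) directly from the joint distribution:
# P(X = k, Y = y) = C(y, k) / 2^y * P(Y = y).
e_x = sum(k * comb(y, k) * Fraction(1, 2**y) * p_y
          for y in range(1, 7)
          for k in range(y + 1))

print(tower, e_x)  # both 7/4
```

      Iterating the inner expectation first (over [math]\displaystyle{ X }[/math] given [math]\displaystyle{ Y }[/math]) and then averaging over [math]\displaystyle{ Y }[/math] reproduces the direct expectation over the joint distribution, which is exactly what the theorem asserts.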