Weak Law of Large Numbers

A Weak Law of Large Numbers is a Probability Theory Theorem stating that the sample average of a sequence of i.i.d. random variables with a finite expected value converges in probability to that expected value.



References

2016

  • (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Law_of_large_numbers#Weak_law Retrieved:2016-6-3.
    • The weak law of large numbers (also called Khintchine's law) states that the sample average converges in probability towards the expected value: [math]\displaystyle{ \overline{X}_n\ \xrightarrow{P}\ \mu \qquad\textrm{when}\ n \to \infty. }[/math] That is to say that for any positive number ε, [math]\displaystyle{ \lim_{n\to\infty}\Pr\!\left(\,|\overline{X}_n-\mu| > \varepsilon\,\right) = 0. }[/math]

      Interpreting this result, the weak law states that for any specified nonzero margin, no matter how small, a sufficiently large sample makes it highly probable that the average of the observations lies within that margin of the expected value.
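      To see the limit numerically, the following minimal sketch (the Uniform(0,1) distribution, so μ = 0.5, along with the margin ε and the trial counts, is an illustrative choice, not part of the law) estimates [math]\displaystyle{ \Pr\!\left(\,|\overline{X}_n-\mu| > \varepsilon\,\right) }[/math] by simulation and shows it shrinking as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps = 0.5, 0.05     # mean of Uniform(0, 1) and an illustrative margin
trials = 1000           # independent sample averages drawn per sample size

for n in [10, 100, 1000, 10000]:
    # Draw `trials` independent samples of size n and average each one.
    means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
    # Empirical estimate of P(|mean - mu| > eps); it should shrink toward 0.
    p = np.mean(np.abs(means - mu) > eps)
    print(f"n={n:6d}  P(|mean - mu| > {eps}) ~ {p:.3f}")
```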

      Convergence in probability is also called weak convergence of random variables. This version is called the weak law because random variables may converge weakly (in probability) as above without converging strongly (almost surely) as the strong law requires.
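      For contrast (standard definitions, restated here): the weak law asserts [math]\displaystyle{ \lim_{n\to\infty}\Pr\!\left(\,|\overline{X}_n-\mu| > \varepsilon\,\right) = 0 }[/math] for every ε > 0, with the limit outside the probability, while the strong law asserts [math]\displaystyle{ \Pr\!\left(\lim_{n\to\infty}\overline{X}_n = \mu\right) = 1 }[/math], with the limit inside.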

      As mentioned earlier, the weak law applies in the case of independent identically distributed random variables having an expected value. But it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshov as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshov's proof works so long as the variance of the average of the first n values goes to zero as n goes to infinity.

      As an example, assume that each random variable in the series follows a Gaussian distribution with mean zero, but with variance equal to [math]\displaystyle{ 2n/\log(n+1) }[/math]. At each stage, the average will be normally distributed (since it is the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to [math]\displaystyle{ n^2/\log n }[/math]. The variance of the average is therefore asymptotic to [math]\displaystyle{ 1/\log n }[/math] and goes to zero.
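      A short sketch (illustrative; it assumes the variables are independent, as Chebyshov's argument does) computes the exact variance of the average for this Gaussian example and compares it with 1/log n:

```python
import numpy as np

# Var(X_k) = 2k / log(k + 1) as in the example above; for independent X_k,
# the variance of the average of X_1..X_n is (sum of variances) / n^2.
for n in [10**2, 10**4, 10**6]:
    k = np.arange(1, n + 1)
    var_avg = np.sum(2 * k / np.log(k + 1)) / n**2
    print(f"n={n:8d}  Var(average)={var_avg:.4f}  1/log(n)={1/np.log(n):.4f}")
```

      Both columns shrink toward zero at the same slow 1/log n rate, which is all that Chebyshov's proof needs.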

      An example where the law of large numbers does not apply is the Cauchy distribution. Let the random numbers equal the tangent of an angle uniformly distributed between −90° and +90°. The median is zero, but the expected value does not exist, and indeed the average of n such variables has the same distribution as one such variable. It does not tend toward zero as n goes to infinity.
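      Numerically this failure is easy to see (a sketch; the seed and sample sizes are arbitrary): averages of the tangent of a uniform angle jump erratically rather than settling, and a different seed gives entirely different values.

```python
import numpy as np

rng = np.random.default_rng(1)  # arbitrary seed; results vary wildly with it

# Cauchy samples: the tangent of an angle uniform on (-90 deg, +90 deg).
for n in [10, 1000, 100_000]:
    angles = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    print(f"n={n:7d}  average = {np.tan(angles).mean():12.3f}")
```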

      There are also examples of the weak law applying even though the expected value does not exist. See the section "Differences between the weak law and the strong law" in the source Wikipedia article.