Central Limit Theorem

A Central Limit Theorem is a probability theorem which states that, for a sample [math]\displaystyle{ X_1, X_2, \dots, X_n }[/math] of [math]\displaystyle{ n }[/math] independent and identically distributed random variables (each with expected value [math]\displaystyle{ \mu }[/math] and finite variance [math]\displaystyle{ \sigma^2 }[/math]), when the sample size [math]\displaystyle{ n }[/math] is sufficiently large, the probability distribution of the sample mean [math]\displaystyle{ \bar X }[/math] is approximately normal (with mean [math]\displaystyle{ \mu }[/math] and variance [math]\displaystyle{ \frac{1}{n} \sigma^2 }[/math]), and the distribution of the sample total is approximately normal with mean [math]\displaystyle{ n\mu }[/math] and variance [math]\displaystyle{ n\sigma^2 }[/math].
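
Stated more formally (one standard formulation of the classical i.i.d. CLT, consistent with the definition above): if [math]\displaystyle{ X_1, X_2, \dots, X_n }[/math] are independent and identically distributed with mean [math]\displaystyle{ \mu }[/math] and finite variance [math]\displaystyle{ \sigma^2 }[/math], then the standardized sample mean converges in distribution to a standard normal random variable:

  [math]\displaystyle{ Z_n = \frac{\bar X_n - \mu}{\sigma / \sqrt{n}} \;\xrightarrow{d}\; N(0, 1) \quad \text{as } n \to \infty. }[/math]

Equivalently, for large [math]\displaystyle{ n }[/math], [math]\displaystyle{ \bar X_n }[/math] is approximately [math]\displaystyle{ N\!\left(\mu, \tfrac{\sigma^2}{n}\right) }[/math] and the sample total [math]\displaystyle{ \sum_{i=1}^{n} X_i }[/math] is approximately [math]\displaystyle{ N(n\mu, n\sigma^2) }[/math].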

  • AKA: CLT.
  • Context:
  • Example(s):
    • In an experiment of throwing a die, let us say 1000 samples [math]\displaystyle{ S_1, S_2, S_3,\dots, S_{1000} }[/math] have been taken, where each sample [math]\displaystyle{ S_i }[/math] has sample size [math]\displaystyle{ n=5 }[/math]. That is, if a die is thrown five times and the outcomes are 1, 3, 3, 4, 2, then [math]\displaystyle{ S_1=[1,3,3,4,2] }[/math]. Similarly, if the next five throws show 1, 1, 2, 6, 6, then [math]\displaystyle{ S_2=[1,1,2,6,6] }[/math], and so on. Writing the samples and their respective means, we get

      [math]\displaystyle{ S_1=[1,3,3,4,2]; \mu_1=2.6 }[/math] (mean of [math]\displaystyle{ S_1 }[/math])

      [math]\displaystyle{ S_2=[1,1,2,6,6]; \mu_2=3.2 }[/math] (mean of [math]\displaystyle{ S_2 }[/math])

      [math]\displaystyle{ S_3=[1,6,5,2,4]; \mu_3=3.6 }[/math] (mean of [math]\displaystyle{ S_3 }[/math])

      [math]\displaystyle{ \vdots }[/math]

      [math]\displaystyle{ S_{1000}=[1,1,4,6,6]; \mu_{1000}=3.6 }[/math] (mean of [math]\displaystyle{ S_{1000} }[/math])

      By plotting all the sample means, with the mean values ([math]\displaystyle{ \mu_i }[/math]) along the x-axis and their frequencies along the y-axis, it can be observed that the distribution of the sample means looks somewhat normal. If the size of the samples is increased from [math]\displaystyle{ n=5 }[/math] to [math]\displaystyle{ n=20 }[/math], the distribution becomes closer to a normal distribution. If the sample size is increased to [math]\displaystyle{ n=100 }[/math], the distribution of the sample means becomes even closer to a normal distribution than in the previous two cases. As the sample size [math]\displaystyle{ n\to\infty }[/math], the sampling distribution approaches a perfect normal distribution (mean, median, and mode all coincide). This is what is called the Central Limit Theorem (a minimal simulation sketch of this experiment is given after the list below).

  • Counter-Example(s):
  • See: Statistical Independence, Random Variate, Probability Distribution, Identically Distributed, Weak Convergence of Measures, Independent And Identically Distributed Random Variables, Attractor.
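
The die-throwing example above can be checked with a short simulation. The following is a minimal sketch (the NumPy library, the fixed random seed, and the variable names are illustrative choices, not part of the original description); it draws 1000 samples for each sample size and compares the empirical mean and variance of the sample means with the values [math]\displaystyle{ \mu = 3.5 }[/math] and [math]\displaystyle{ \frac{1}{n}\sigma^2 = \frac{35}{12n} }[/math] predicted by the Central Limit Theorem for a fair die:

  import numpy as np

  rng = np.random.default_rng(0)   # fixed seed so the run is reproducible (arbitrary choice)
  num_samples = 1000               # number of samples S_1, ..., S_1000, as in the example
  die_mu = 3.5                     # population mean of a fair die: (1 + 2 + ... + 6) / 6
  die_var = 35.0 / 12.0            # population variance of a fair die

  for n in (5, 20, 100):           # sample sizes discussed in the example
      rolls = rng.integers(1, 7, size=(num_samples, n))  # each row is one sample S_i of n throws
      sample_means = rolls.mean(axis=1)                   # the observed means mu_1, ..., mu_1000
      print(f"n={n:3d}: mean of sample means = {sample_means.mean():.3f} (CLT predicts {die_mu}), "
            f"variance of sample means = {sample_means.var():.3f} (CLT predicts {die_var / n:.3f})")

As [math]\displaystyle{ n }[/math] grows, a histogram of the sample means concentrates around 3.5 and its shape approaches a normal density, as described in the example.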

