# Pearson's Chi-Squared Test

A Pearson's Chi-Squared Test is a statistical hypothesis test based on a chi-squared statistic.

**Context:**
- It can be used as a Goodness-of-Fit Test, a Homogeneity Test, or an Independence Test.

**See:** Fisher's Exact Test.

## References

### 2011

- http://en.wikipedia.org/wiki/Pearson's_chi-squared_test
**Pearson's chi-squared test** ([math]\displaystyle{ \chi^2 }[/math]) is the best-known of several chi-squared tests – statistical procedures whose results are evaluated by reference to the chi-squared distribution. Its properties were first investigated by Karl Pearson in 1900.^{[1]} In contexts where it is important to make a distinction between the test statistic and its distribution, names similar to **Pearson Χ-squared** test or statistic are used. It tests a null hypothesis stating that the frequency distribution of certain events observed in a sample is consistent with a particular theoretical distribution. The events considered must be mutually exclusive and have total probability 1. A common case for this is where the events each cover an outcome of a categorical variable. A simple example is the hypothesis that an ordinary six-sided die is "fair", i.e., all six outcomes are equally likely to occur.
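The fair-die hypothesis above can be sketched numerically. The sketch below uses `scipy.stats.chisquare`; the roll counts are invented purely for illustration.

```python
# Hypothetical goodness-of-fit check of the "fair die" null hypothesis.
# The counts below are invented illustration data.
from scipy.stats import chisquare

observed = [22, 24, 38, 30, 46, 40]   # counts of faces 1..6 over 200 rolls

# Under the null hypothesis each face is expected 200/6 times;
# chisquare assumes equal expected frequencies by default.
result = chisquare(observed)

print(f"chi2 = {result.statistic:.1f}, p = {result.pvalue:.4f}")
# For these counts chi2 = 13.6 with 5 degrees of freedom, and the
# fairness hypothesis would be rejected at the 5% significance level.
```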

Pearson's chi-squared is used to assess two types of comparison: tests of goodness of fit and tests of independence.

- A test of **goodness of fit** establishes whether or not an observed frequency distribution differs from a theoretical distribution.
- A **test of independence** assesses whether paired observations on two variables, expressed in a contingency table, are independent of each other — for example, whether people from different regions differ in the frequency with which they report that they support a political candidate.

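The test of independence can be sketched on a small contingency table. The example below uses `scipy.stats.chi2_contingency` with invented region-by-support counts; Yates' continuity correction is disabled so the statistic matches the plain Pearson formula.

```python
# Hypothetical 2x2 contingency table: rows = two regions,
# columns = support / do not support a candidate (invented data).
from scipy.stats import chi2_contingency

table = [[30, 10],   # region A: 30 support, 10 do not
         [20, 40]]   # region B: 20 support, 40 do not

# correction=False disables Yates' continuity correction, so the
# statistic is exactly sum((O - E)^2 / E) over the four cells.
chi2, p, dof, expected = chi2_contingency(table, correction=False)

print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4g}")
# dof = (rows - 1) * (cols - 1) = 1; a large chi2 here suggests that
# support is not independent of region for these invented counts.
```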
- The first step in the chi-squared test is to calculate the chi-squared statistic. In order to avoid ambiguity, the value of the test statistic is denoted by [math]\displaystyle{ \Chi^2 }[/math] rather than [math]\displaystyle{ \chi^2 }[/math] (which is either an uppercase chi instead of lowercase, or an upper case roman *X*); this also serves as a reminder that the distribution of the test statistic is not exactly that of a chi-squared random variable. However, some authors do use the [math]\displaystyle{ \chi^2 }[/math] notation for the test statistic. An exact test which does not rely on using the approximate [math]\displaystyle{ \chi^2 }[/math] distribution is Fisher's exact test: this is substantially more accurate in evaluating the significance level of the test, especially with small numbers of observations. The chi-squared statistic is calculated by finding the difference between each observed and theoretical frequency for each possible outcome, squaring them, dividing each by the theoretical frequency, and taking the sum of the results. A second important part of determining the test statistic is to define the degrees of freedom of the test: this is essentially the number of observed frequencies adjusted for the effect of using some of those observations to define the theoretical frequencies.
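The calculation steps just described (difference, square, divide by the theoretical frequency, sum; then determine the degrees of freedom) can be sketched directly, here in plain Python with hypothetical die-roll counts invented for illustration:

```python
# Manual computation of the Pearson chi-squared statistic:
# sum over outcomes of (observed - expected)^2 / expected.
# The counts are hypothetical illustration data.
observed = [22, 24, 38, 30, 46, 40]    # die faces 1..6 over 200 rolls
expected = [sum(observed) / 6] * 6     # fair-die theoretical frequencies

statistic = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom: the number of outcome categories minus the number
# of constraints used to build the expected frequencies (here only the
# total count), i.e. 6 - 1 = 5.
dof = len(observed) - 1

print(f"chi2 = {statistic:.1f}, dof = {dof}")
```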

- ↑ Karl Pearson (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". *Philosophical Magazine, Series 5* **50** (302): 157–175. doi:10.1080/14786440009463897.

### 1992

- (Pearson, 1992) ⇒ Pearson, Karl. "On the Criterion that a Given System of Deviations from the Probable in the Case of a Correlated System of Variables is Such that it Can be Reasonably Supposed to have Arisen from Random Sampling." In: Breakthroughs in Statistics. Springer New York, 1992. pp. 11-28. doi:10.1080/14786440009463897
- Let [math]\displaystyle{ x_1, x_2 … x_n }[/math] be a system of deviations from the means of [math]\displaystyle{ n }[/math] variables with standard deviations [math]\displaystyle{ \sigma_1, \sigma_2 … \sigma_n }[/math] and with correlations [math]\displaystyle{ r_{12}, r_{13}, r_{23} … r_{n-1,n} }[/math] (...)