# Chi-Squared Goodness-Of-Fit Testing Task

A Chi-Squared Goodness-Of-Fit Testing Task is a goodness-of-fit testing task for categorical random variables whose test statistic is compared against a Chi-Squared Distribution (a difference-in-proportions test).

**AKA:** χ², CHI, Pearson's Chi-Square Measure.

**Context:**
- It can (typically) assume that the sampling distribution of its test statistic follows the Chi-Square Distribution.
- It produces a Chi-Square Statistic.
- It can (typically) assume that every cell has an expected frequency of at least five observations.
- It can be a McNemar's Chi-Square Test (for correlated observations).

**Example(s):**
- a Pearson's Chi-Squared Test.
- a Yates' Chi-Square Test, also known as Yates' correction for continuity.
- a Cochran–Mantel–Haenszel Chi-Square Test.
- a Linear-by-linear association Chi-Square Test.
- a Portmanteau Test in time-series analysis (that tests for autocorrelation).

**Counter-Example(s):**
- a Fisher's Exact Test.
- a Likelihood Ratio Test (?)
- an Odds-Ratio Test.
- a Chi-Square Independence Test.
- a t-Test, such as a Paired t-Test or an Independent Two-Sample t-Test.
- a (Normal) Z-Test.
- an ANOVA Goodness-Of-Fit Test (such as a One-Way ANOVA), which tests means (rather than proportions) with an F-Test.

**See:** CHAID Algorithm, Mutual Information Measure, Lack-of-Fit Sum of Squares, Sample Variance Distribution.

## References

### 2016

- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Chi-squared_test Retrieved:2016-5-17.
    - QUOTE: A **chi-squared test**, also referred to as a **[math]\chi^2[/math] test** (or **chi-square test**), is any statistical hypothesis test in which the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true. Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem. A chi-squared test can then be used to reject the null hypothesis that the data are independent. Also considered a chi-square test is a test in which this is *asymptotically* true, meaning that the sampling distribution (if the null hypothesis is true) can be made to approximate a chi-square distribution as closely as desired by making the sample size large enough. The chi-squared test is used to determine whether there is a significant difference between the expected frequencies and the observed frequencies in one or more categories. Does the number of individuals or objects that fall in each category differ significantly from the number you would expect? Is this difference between the expected and observed due to sampling variation, or is it a real difference?

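The observed-versus-expected comparison described in the quote above can be sketched in a few lines of Python. The die-roll counts below are hypothetical, and the critical value is the standard table value for 5 degrees of freedom at the 0.05 level:

```python
# Hypothetical data: counts from 120 rolls of a die we suspect is unfair.
observed = [18, 22, 16, 25, 20, 19]
n = sum(observed)

# Under the null hypothesis (a fair die), each face is equally likely.
expected = [n / 6.0] * 6

# Pearson's chi-squared statistic: sum over cells of (O - E)^2 / E.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value of the chi-square distribution with df = 6 - 1 = 5
# at the 0.05 significance level (standard table value).
CRITICAL_VALUE = 11.070

print(chi2)                   # about 2.5, well below the critical value
print(chi2 > CRITICAL_VALUE)  # False, so fairness is not rejected
```

Here the observed counts are close to the expected count of 20 per face, so the statistic stays small and the null hypothesis of a fair die is not rejected.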

### 2012

- (Wikipedia, 2012) ⇒ http://en.wikipedia.org/wiki/Chi-square_test
    - QUOTE: … When mentioned without any modifiers or without other precluding context, this test is usually taken to be Pearson's chi-squared test (for an exact test used in place of [math]\chi^2[/math], see Fisher's exact test).
- Yates' chi-square test, also known as Yates' correction for continuity.
- Cochran–Mantel–Haenszel chi-square test.
- Linear-by-linear association chi-square test.
- The portmanteau test in time-series analysis, testing for the presence of autocorrelation.
- Likelihood-ratio tests in general statistical modelling, for testing whether there is evidence of the need to move from a simple model to a more complicated one (where the simple model is nested within the complicated one).

- One case where the distribution of the test statistic is an exact chi-square distribution is the test that the variance of a normally distributed population has a given value, based on a sample variance. Such a test is uncommon in practice because values of variances to test against are seldom known exactly.
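A minimal sketch of that exact variance test, with made-up sample values and a made-up hypothesised variance; under the null hypothesis, the statistic [math](n-1)s^2/\sigma_0^2[/math] follows a chi-square distribution with [math]n-1[/math] degrees of freedom:

```python
# Hypothetical sample assumed drawn from a normal population.
sample = [5.1, 4.7, 6.2, 5.5, 3.9, 5.8, 4.4, 5.0]
n = len(sample)

mean = sum(sample) / n
s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance

sigma0_sq = 4.0  # hypothesised population variance under H0

# Under H0, this statistic follows an exact chi-square distribution
# with n - 1 = 7 degrees of freedom.
statistic = (n - 1) * s2 / sigma0_sq

print(statistic)  # about 0.989, to be compared to chi-square(7) quantiles
```

The resulting value would then be compared against lower and upper quantiles of the chi-square distribution with 7 degrees of freedom at the chosen significance level.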

### 2008

- (Upton & Cook, 2008) ⇒ Graham Upton, and Ian Cook. (2008). “A Dictionary of Statistics, 2nd edition revised.” Oxford University Press. ISBN:0199541450

### 2004

- http://www.quality-control-plan.com/StatGuide/sg_glos.htm
- QUOTE: The chi-square test for goodness of fit tests the hypothesis that the distribution of the population from which nominal data are drawn agrees with a posited distribution. The chi-square goodness-of-fit test compares observed and expected frequencies (counts). The chi-square test statistic is basically the sum of the squares of the differences between the observed and expected frequencies, with each squared difference divided by the corresponding expected frequency.
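The statistic described verbally in this quote can be written compactly in the document's [math] notation, with [math]O_i[/math] the observed and [math]E_i[/math] the expected frequency in cell [math]i[/math] of [math]k[/math] cells:

[math]\chi^2 = \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}[/math]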