# Cramér–von Mises Test

A Cramér–von Mises Test is a hypothesis test that compares the goodness-of-fit of a cumulative distribution function to an empirical distribution function.

**AKA:** Cramér–von Mises Criterion. **See:** Hypothesis Test, Kolmogorov–Smirnov Test, Anderson–Darling Test, Shapiro–Wilk Test.

## References

### 2016

- (Wikipedia, 2016) ⇒ https://www.wikiwand.com/en/Cram%C3%A9r%E2%80%93von_Mises_criterion Retrieved 2016-07-30
- In statistics the **Cramér–von Mises criterion** is a criterion used for judging the goodness of fit of a cumulative distribution function [math]F^*[/math] compared to a given empirical distribution function [math]F_n[/math], or for comparing two empirical distributions. It is also used as a part of other algorithms, such as minimum distance estimation. It is defined as

- [math]\omega^2 = \int_{-\infty}^{\infty} [F_n(x)-F^*(x)]^2\,\mathrm{d}F^*(x)[/math]
- In one-sample applications [math]F^*[/math] is the theoretical distribution and [math]F_n[/math] is the empirically observed distribution. Alternatively the two distributions can both be empirically estimated ones; this is called the two-sample case. The criterion is named after Harald Cramér and Richard Edler von Mises who first proposed it in 1928–1930.
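The one-sample criterion above can be illustrated with a short sketch (not from the source): for a sorted sample the scaled statistic [math]T = N\omega^2[/math] has the standard computational form [math]T = \tfrac{1}{12N} + \sum_{i=1}^{N}\left(F^*(x_{(i)}) - \tfrac{2i-1}{2N}\right)^2[/math]. The function name `cramer_von_mises_t` and the standard-normal example are illustrative choices, assuming `numpy` and `scipy` are available.

```python
import numpy as np
from scipy.stats import norm


def cramer_von_mises_t(sample, cdf):
    """One-sample Cramer-von Mises statistic T = N * omega^2.

    Uses the standard computational form over the order statistics:
    T = 1/(12N) + sum_i (F*(x_(i)) - (2i - 1)/(2N))^2
    """
    x = np.sort(np.asarray(sample))
    n = len(x)
    u = cdf(x)                      # theoretical CDF at the order statistics
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)


# illustrative example: test a normal sample against the standard normal CDF
rng = np.random.default_rng(0)
sample = rng.standard_normal(100)
t = cramer_von_mises_t(sample, norm.cdf)
```

Under the null hypothesis the statistic stays small; large values of `t` indicate a poor fit between the empirical and theoretical distribution functions.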

### 1962

- (Anderson, 1962) ⇒ Anderson, T. W. (1962). On the distribution of the two-sample Cramer-von Mises criterion. The Annals of Mathematical Statistics, 1148-1159. http://projecteuclid.org/euclid.aoms/1177704477
- The Cramer-von Mises [math]\omega^2[/math] criterion for testing that a sample, [math]x_1, \cdots,x_N[/math], has been drawn from a specified continuous distribution [math]F(x)[/math] is

- [math]\omega^2=\int_{-\infty}^{\infty}[F_N(x)-F(x)]^2\,dF(x) \quad(1) [/math]
- where [math]F_N(x)[/math] is the empirical distribution function of the sample; that is, [math]F_N(x)=k/N[/math] if exactly [math]k[/math] observations are less than or equal to [math]x[/math] [math](k=0,1,\cdots,N)[/math]. If there is a second sample, [math]y_1,\cdots,y_M[/math], a test of the hypothesis that the two samples come from the same (unspecified) continuous distribution can be based on the analogue of [math]N\omega^2[/math], namely

- [math]T=[NM/(N+M)]\int^\infty_{-\infty}[F_N(x)-G_M(x)]^2\,dH_{N+M}(x), \quad (2) [/math]
- where [math]G_M(x)[/math] is the empirical distribution function of the second sample and [math]H_{N+M}(x)[/math] is the empirical distribution function of the two samples together [that is, [math](N+M)H_{N+M}(x)=NF_N(x)+MG_M(x)][/math]. The limiting distribution of [math]N\omega^2[/math] as [math]N\rightarrow \infty[/math] has been tabulated [2], and it has been shown ([3], [4a], and [7]) that [math]T[/math] has the same limiting distribution as [math]N\rightarrow \infty[/math], [math]M\rightarrow \infty[/math], and [math]N/M\rightarrow \lambda[/math], where [math]\lambda[/math] is any finite positive constant. In this note we consider the distribution of [math]T[/math] for small values of [math]N[/math] and [math]M[/math] and present tables to permit use of the criterion at some conventional significance levels for small values of [math]N[/math] and [math]M[/math]. The limiting distribution seems a surprisingly good approximation to the exact distribution for moderate sample sizes (corresponding to the same feature for [math]N\omega^2[/math] [6]). The accuracy of approximation is better than in the case of the two-sample Kolmogorov-Smirnov statistic studied by Hodges [4].
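Anderson's two-sample statistic [math]T[/math] in equation (2) can be computed directly from the empirical distribution functions: since [math]H_{N+M}[/math] places mass [math]1/(N+M)[/math] at each pooled observation, the integral reduces to a sum over the pooled sample. A minimal numpy sketch (the function name `cvm_two_sample` and the example data are illustrative, assuming samples with no ties across groups):

```python
import numpy as np


def cvm_two_sample(x, y):
    """Two-sample Cramer-von Mises statistic T (Anderson 1962, eq. 2).

    T = [NM/(N+M)] * integral of [F_N - G_M]^2 dH_{N+M};
    H_{N+M} jumps by 1/(N+M) at each pooled observation, so the
    integral becomes an average of [F_N(z) - G_M(z)]^2 over pooled z.
    """
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    n, m = len(x), len(y)
    z = np.concatenate([x, y])                       # pooled sample
    fn = np.searchsorted(x, z, side="right") / n     # F_N at each pooled point
    gm = np.searchsorted(y, z, side="right") / m     # G_M at each pooled point
    return n * m / (n + m) ** 2 * np.sum((fn - gm) ** 2)


# illustrative example: two samples from shifted normal distributions
rng = np.random.default_rng(1)
a = rng.standard_normal(30)
b = rng.standard_normal(40) + 0.5
t2 = cvm_two_sample(a, b)
```

With no ties this direct form agrees with the rank-based expression Anderson tabulates; a large value of `t2` is evidence against the hypothesis that the two samples share a common continuous distribution.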

### 1928

- (Cramer, 1928) ⇒ Cramér, H. (1928). On the composition of elementary errors. Almqvist & Wiksells. DOI: http://dx.doi.org/10.1080/03461238.1928.10416862