Non-Parametric Statistical Test

From GM-RKB

A Non-Parametric Statistical Test is a statistical hypothesis test whose test statistic is not based on hypothesized population parameters.
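As a minimal illustration of this definition, the sign test sketched below (in Python, with made-up before/after measurements) uses a test statistic, the count of positive paired differences, whose null distribution is Binomial(n, 0.5) and so involves no hypothesized population parameter.

```python
# A minimal sketch of a sign test: the test statistic is just the number of
# positive paired differences, and its null distribution (Binomial(n, 0.5))
# does not depend on any hypothesized population parameter.
# The before/after data below are made-up illustration values.
from scipy.stats import binomtest

before = [72, 75, 68, 80, 77, 74, 70, 79]
after  = [70, 76, 65, 78, 75, 71, 69, 77]

diffs = [a - b for a, b in zip(after, before)]
nonzero = [d for d in diffs if d != 0]        # ties are dropped
n_positive = sum(d > 0 for d in nonzero)

# Under H0 (no shift), n_positive ~ Binomial(len(nonzero), 0.5).
result = binomtest(n_positive, n=len(nonzero), p=0.5, alternative="two-sided")
print("positive differences:", n_positive, "of", len(nonzero))
print("two-sided p-value:", result.pvalue)
```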



References

2017a

  • (Changing Works, 2017) ⇒ http://changingminds.org/explanations/research/analysis/parametric_non-parametric.htm Retrieved: 2017-05-07. Copyright: Changing Works 2002-2016.
    • There are two types of test data and consequently different types of analysis. As the table below shows, parametric data has an underlying normal distribution which allows for more conclusions to be drawn as the shape can be mathematically described. Anything else is non-parametric.
                          Parametric Statistical Tests    Non-Parametric Statistical Tests
Assumed distribution      Normally distributed            Any
Assumed variance          Homogeneous                     Any
Typical data              Ratio or Interval               Ordinal or Nominal
Data set relationships    Independent                     Any
Usual central measure     Mean                            Median
Benefits                  Can draw more conclusions       Simplicity; less affected by outliers
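A small sketch of the "usual central measure" row above, with invented numbers: a single outlier shifts the mean substantially while leaving the median almost unchanged, which is part of why median-based non-parametric procedures are less affected by outliers.

```python
# Mean vs. median under one extreme value (made-up measurements).
import statistics

sample = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.1]
with_outlier = sample + [25.0]                  # one extreme value appended

print("mean without / with outlier:  ",
      round(statistics.mean(sample), 2), "/", round(statistics.mean(with_outlier), 2))
print("median without / with outlier:",
      round(statistics.median(sample), 2), "/", round(statistics.median(with_outlier), 2))
```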

2017b

Parametric tests (means)                                    Nonparametric tests (medians)
1-sample t test                                             1-sample Sign, 1-sample Wilcoxon
2-sample t test                                             Mann-Whitney test
One-Way ANOVA                                               Kruskal-Wallis, Mood’s median test
Factorial DOE with one factor and one blocking variable     Friedman test
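As a hedged illustration of the second row of this table, the sketch below (with fabricated group measurements) runs the parametric 2-sample t test and its non-parametric counterpart, the Mann-Whitney test, side by side using SciPy.

```python
# 2-sample t test (compares means, assumes normality) vs. Mann-Whitney test
# (rank-based, compares distributions/medians) on made-up data.
from scipy.stats import ttest_ind, mannwhitneyu

group_a = [12.1, 11.8, 12.6, 13.0, 12.4, 11.9, 12.2]
group_b = [13.4, 12.9, 13.8, 14.1, 13.2, 13.6, 12.8]

t_stat, t_p = ttest_ind(group_a, group_b)
u_stat, u_p = mannwhitneyu(group_a, group_b)

print("2-sample t test:   t = %.3f, p = %.4f" % (t_stat, t_p))
print("Mann-Whitney test: U = %.1f, p = %.4f" % (u_stat, u_p))
```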

2017c

Parametric Test                                       Non-Parametric Test
Independent samples t test                            Mann-Whitney test
Paired samples t test                                 Wilcoxon signed-rank test
One-way Analysis of Variance (ANOVA)                  Kruskal-Wallis test
One-way repeated measures Analysis of Variance        Friedman's ANOVA
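The following sketch, with invented data for three groups, pairs the ANOVA row above with its non-parametric counterpart, the Kruskal-Wallis test, using SciPy.

```python
# One-way ANOVA (compares group means, assumes normality) vs. Kruskal-Wallis
# test (rank-based) on invented measurements for three groups.
from scipy.stats import f_oneway, kruskal

g1 = [5.2, 4.9, 5.5, 5.1, 5.0]
g2 = [6.1, 5.8, 6.4, 6.0, 5.9]
g3 = [5.4, 5.6, 5.3, 5.7, 5.5]

f_stat, f_p = f_oneway(g1, g2, g3)
h_stat, h_p = kruskal(g1, g2, g3)

print("One-way ANOVA:  F = %.3f, p = %.4f" % (f_stat, f_p))
print("Kruskal-Wallis: H = %.3f, p = %.4f" % (h_stat, h_p))
```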


2016b

  • (Encyclopedia of Math, 2016) ⇒ https://www.encyclopediaofmath.org/index.php/Non-parametric_test Retrieved: 2016-09-11.
    • A statistical test of a hypothesis [math]\displaystyle{ H_0:\; \theta\in\Theta_0\subset\Theta }[/math] against the alternative [math]\displaystyle{ H_1:\; \theta\in\Theta_1=\Theta\setminus\Theta_0 }[/math] when at least one of the two parameter sets [math]\displaystyle{ \Theta_0 }[/math] and [math]\displaystyle{ \Theta_1 }[/math] is not topologically equivalent to a subset of a Euclidean space. Apart from this definition there is also another, wider one, according to which a statistical test is called non-parametric if the statistical inferences obtained using it do not depend on the particular null-hypothesis probability distribution of the observable random variables on the basis of which one wants to test [math]\displaystyle{ H_0 }[/math] against [math]\displaystyle{ H_1 }[/math]. In this case, instead of the term "non-parametric test" one speaks frequently of a "distribution-free statistical test". The Kolmogorov test is a classic example of a non-parametric test.
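The Kolmogorov test mentioned in the quote can be sketched with SciPy's one-sample Kolmogorov-Smirnov routine; the data below are simulated, and the call simply compares the empirical distribution function against a fully specified hypothesized distribution.

```python
# One-sample Kolmogorov-Smirnov test: the statistic is the largest gap between
# the empirical CDF and the hypothesized CDF, so no population parameters are
# estimated from the data. Sample values here are simulated.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=200)    # simulated data

# Test H0: the sample is drawn from the standard uniform distribution.
stat, p_value = kstest(sample, "uniform")
print("KS statistic: %.4f, p-value: %.4f" % (stat, p_value))
```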

2016c

  • (Quality Control Plan (At-PQC), 2016) ⇒ http://www.quality-control-plan.com/StatGuide/sg_glos.htm Retrieved: 2016-10-12.
    • QUOTE: Nonparametric tests are tests that do not make distributional assumptions, particularly the usual distributional assumptions of the normal-theory based tests. These include tests that do not involve population parameters at all (truly nonparametric tests such as the chi-square goodness of fit test), and distribution-free tests, whose validity does not depend on the population distribution(s) from which the data have been sampled. In particular, nonparametric tests usually drop the assumption that the data come from normally distributed populations. However, distribution-free tests generally do make some assumptions, such as equality of population variances.
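A minimal sketch of the chi-square goodness-of-fit test cited above as a "truly nonparametric" test, using hypothetical counts from 120 rolls of a die:

```python
# Chi-square goodness-of-fit test: observed category counts (made up here)
# are compared against the counts expected under H0 (a fair die).
from scipy.stats import chisquare

observed = [18, 22, 19, 21, 17, 23]    # hypothetical counts per face (120 rolls)
expected = [20] * 6                    # equal counts expected under fairness

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print("chi-square = %.3f, p-value = %.4f" % (stat, p_value))
```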

2008

  • (Shasha & Wilson, 2008) ⇒ Dennis Shasha, and Manda Wilson. (2008). “Statistics is Easy!” Synthesis Lectures on Mathematics and Statistics. doi:10.2200/S00142ED1V01Y200807MAS001
    • Abstract: Statistics is the activity of inferring results about a population given a sample. Historically, statistics books assume an underlying distribution to the data (typically, the normal distribution) and derive results under that assumption. Unfortunately, in real life, one cannot normally be sure of the underlying distribution. For that reason, this book presents a distribution-independent approach to statistics based on a simple computational counting idea called resampling.

      This book explains the basic concepts of resampling, then systematically presents the standard statistical measures along with programs (in the language Python) to calculate them using resampling, and finally illustrates the use of the measures and programs in a case study. The text uses junior high school algebra and many examples to explain the concepts. The ideal reader has mastered at least elementary mathematics, likes to think procedurally, and is comfortable with computers.
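A short sketch (not code from the book) of the resampling idea described in this abstract: a permutation test for the difference in means of two made-up samples, relying only on shuffling and counting rather than a distributional formula.

```python
# Permutation test of the difference in means between two made-up samples:
# repeatedly shuffle the pooled data and count how often the resampled
# difference is at least as large as the observed one.
import random

group_a = [88, 92, 85, 91, 87, 90]
group_b = [82, 84, 80, 86, 83, 81]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)
pooled = group_a + group_b

random.seed(0)
count = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    resampled_a = pooled[:len(group_a)]
    resampled_b = pooled[len(group_a):]
    if mean(resampled_a) - mean(resampled_b) >= observed:
        count += 1

print("observed difference:", round(observed, 2))
print("one-sided p-value estimate:", count / trials)
```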