Parametric Hypothesis Test


A Parametric Hypothesis Test is a statistical hypothesis test that assumes the sampling distribution is of a known form determined by a fixed set of parameters (such as the hypothesized population parameter and the standard deviation).
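
The sketch below is an illustration of ours, not from the source; the simulated data and the hypothesized mean of 5.0 are assumptions. It shows a typical parametric test, the one-sample t test, where the test statistic's sampling distribution (Student's t) follows from assuming a normal population described by a fixed mean and standard deviation.

```python
# Hedged sketch: a one-sample t test as an example of a parametric hypothesis test.
# The normality assumption (fixed parameters: mean, standard deviation) is what
# makes the Student's t sampling distribution of the test statistic known.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.2, scale=1.0, size=30)  # assumed normal population

# H0: population mean == 5.0 (the hypothesized population parameter).
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```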



References

2017a

  • (Changing Works, 2017) ⇒ http://changingminds.org/explanations/research/analysis/parametric_non-parametric.htm Retrieved: 2017-05-07.
    • There are two types of test data and, consequently, different types of analysis. As the table below shows, parametric data have an underlying normal distribution, which allows more conclusions to be drawn because the shape can be described mathematically. Anything else is non-parametric.
                       | Parametric Statistical Tests | Non-Parametric Statistical Tests
Assumed distribution   | Normally distributed         | Any
Assumed variance       | Homogeneous                  | Any
Typical data           | Ratio or Interval            | Ordinal or Nominal
Data set relationships | Independent                  | Any
Usual central measure  | Mean                         | Median
Benefits               | Can draw more conclusions    | Simplicity; less affected by outliers
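
As a hedged sketch of how the assumptions in the table above (normal distribution, homogeneous variance) might be checked in practice, the example below applies scipy's Shapiro-Wilk and Levene tests to simulated data; the data and group parameters are assumptions of ours, not part of the cited page.

```python
# Checking parametric assumptions on two simulated groups (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, size=40)
group_b = rng.normal(11.0, 2.0, size=40)

# Normality of each group (the "assumed distribution" row).
for name, g in (("A", group_a), ("B", group_b)):
    w, p = stats.shapiro(g)
    print(f"Shapiro-Wilk, group {name}: W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance across groups (the "assumed variance" row).
lev_stat, lev_p = stats.levene(group_a, group_b)
print(f"Levene: W = {lev_stat:.3f}, p = {lev_p:.3f}")
```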

2017b

Parametric tests (means)                                | Nonparametric tests (medians)
1-sample t test                                         | 1-sample Sign test, 1-sample Wilcoxon test
2-sample t test                                         | Mann-Whitney test
One-Way ANOVA                                           | Kruskal-Wallis test, Mood’s median test
Factorial DOE with one factor and one blocking variable | Friedman test
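
The following sketch is illustrative only; the simulated samples are our assumption, not from the cited table. It runs one row of the pairing above on the same data: a parametric 2-sample t test of means next to its nonparametric counterpart, the Mann-Whitney test.

```python
# One table row in code: 2-sample t test vs. Mann-Whitney test (assumed data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment = rng.normal(52.0, 8.0, size=25)
control = rng.normal(48.0, 8.0, size=25)

t_stat, t_p = stats.ttest_ind(treatment, control)     # parametric: compares means
u_stat, u_p = stats.mannwhitneyu(treatment, control)  # nonparametric: rank-based
print(f"2-sample t test: t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"Mann-Whitney U:  U = {u_stat:.1f}, p = {u_p:.3f}")
```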

2017c

Parametric test                                | Non-parametric test
Independent samples t test                     | Mann-Whitney test
Paired samples t test                          | Wilcoxon signed-rank test
One-way analysis of variance (ANOVA)           | Kruskal-Wallis test
One-way repeated measures analysis of variance | Friedman's ANOVA
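
Below is a similar hedged sketch for two more rows of the table, on simulated data of our own: paired samples t test vs. Wilcoxon signed-rank test, and one-way ANOVA vs. Kruskal-Wallis test.

```python
# Paired and multi-group pairings from the table (assumed data, illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
before = rng.normal(100.0, 10.0, size=20)
after = before + rng.normal(3.0, 5.0, size=20)   # paired measurements

print(stats.ttest_rel(before, after))            # parametric: paired samples t test
print(stats.wilcoxon(before, after))             # nonparametric: Wilcoxon signed-rank

g1, g2, g3 = (rng.normal(m, 4.0, size=15) for m in (20.0, 22.0, 25.0))
print(stats.f_oneway(g1, g2, g3))                # parametric: one-way ANOVA
print(stats.kruskal(g1, g2, g3))                 # nonparametric: Kruskal-Wallis
```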

2016

  • (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Parametric_statistics Retrieved: 2016-05-24.
    • Parametric statistics is a branch of statistics which assumes that sample data come from a population that follows a probability distribution based on a fixed set of parameters. Most well-known elementary statistical methods are parametric. Conversely, a non-parametric model differs precisely in that the parameter set (or feature set in machine learning) is not fixed and can increase, or even decrease, if new relevant information is collected. Because it relies on a fixed parameter set, a parametric model assumes more about a given population than non-parametric methods do. When the assumptions are correct, parametric methods will produce more accurate and precise estimates than non-parametric methods, i.e. have more statistical power. Because more is assumed, they have a greater chance of failing when the assumptions are not correct, and for this reason they are not robust statistical methods. On the other hand, parametric formulae are often simpler to write down and faster to compute. For this reason their simplicity can make up for their lack of robustness, especially if care is taken to examine diagnostic statistics.
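
As a rough Monte Carlo sketch of the power claim above (our own illustration, not from the Wikipedia article; sample sizes, effect size, and trial count are arbitrary assumptions), the code below estimates how often a 2-sample t test and a Mann-Whitney test each reject a false null hypothesis when the normality assumption actually holds.

```python
# Monte Carlo sketch: under correct normality assumptions, the parametric t test
# tends to reject a false null at least as often as the nonparametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
alpha, n, trials = 0.05, 30, 2000
t_rejects = mw_rejects = 0
for _ in range(trials):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(0.5, 1.0, size=n)   # true mean shift, normal data
    t_rejects += stats.ttest_ind(a, b).pvalue < alpha
    mw_rejects += stats.mannwhitneyu(a, b).pvalue < alpha
print(f"t test power ~ {t_rejects / trials:.2f}")
print(f"Mann-Whitney power ~ {mw_rejects / trials:.2f}")
```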