Observational Study


An Observational Study is a designed study in which the researcher does not control which treatment each subject receives and in which data are collected passively.



References

2013

  • (Wikipedia, 2013) ⇒ http://en.wikipedia.org/wiki/Experiment#Contrast_with_observational_study
    • An observational study is used when it is impractical, unethical, cost-prohibitive (or otherwise inefficient) to fit a physical or social system into a laboratory setting, to completely control confounding factors, or to apply random assignment. It can also be used when confounding factors are either limited or known well enough to analyze the data in light of them (though this may be rare when social phenomena are under examination). In order for an observational science to be valid, confounding factors must be known and accounted for. In these situations, observational studies have value because they often suggest hypotheses that can be tested with randomized experiments or by collecting fresh data.

      Fundamentally, however, observational studies are not experiments. By definition, observational studies lack the manipulation required for Baconian experiments. In addition, observational studies (e.g., in biological or social systems) often involve variables that are difficult to quantify or control. Observational studies are limited because they lack the statistical properties of randomized experiments. In a randomized experiment, the method of randomization specified in the experimental protocol guides the statistical analysis, which is usually also specified by the experimental protocol.[1] Without a statistical model that reflects an objective randomization, the statistical analysis relies on a subjective model.[1] Inferences from subjective models are unreliable in theory and practice.[2] In fact, there are several cases where carefully conducted observational studies consistently give wrong results, that is, where the results of the observational studies agree with one another but differ from the results of experiments. For example, epidemiological studies of colon cancer consistently show beneficial correlations with broccoli consumption, while experiments find no benefit.[3]

      A particular problem with observational studies involving human subjects is the great difficulty of attaining fair comparisons between treatments (or exposures), because such studies are prone to selection bias, and groups receiving different treatments (exposures) may differ greatly according to their covariates (age, height, weight, medications, exercise, nutritional status, ethnicity, family medical history, etc.). In contrast, randomization implies that for each covariate, the mean for each group is expected to be the same; the sketch following the references below illustrates this contrast. For any randomized trial, some variation from the mean is expected, of course, but the randomization ensures that the experimental groups have mean values that are close, due to the central limit theorem and Markov's inequality. With inadequate randomization or a low sample size, the systematic variation in covariates between the treatment groups (or exposure groups) makes it difficult to separate the effect of the treatment (exposure) from the effects of the other covariates, most of which have not been measured. The mathematical models used to analyze such data must consider each differing covariate (if measured), and the results will not be meaningful if a covariate is neither randomized nor included in the model.

  1. Hinkelmann, Klaus, and Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (2nd ed.). Wiley. ISBN 978-0-471-72756-9. http://books.google.com/books?id=T3wWj2kVYZgC&printsec=frontcover&cad=4_0
  2. Freedman, David A., Pisani, R., and Purves, R. A. (2007). Statistics (4th ed.). W. W. Norton & Company. ISBN 978-0-393-92972-0
  3. Freedman, David A. (2009). Statistical Models: Theory and Practice (2nd ed.). Cambridge University Press. ISBN 978-0-521-74385-3
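
The excerpt above argues that randomization equalizes covariate means across groups, while observational assignment lets covariates differ systematically. As a minimal illustration (not part of the quoted sources; the single covariate "age" and the age-driven treatment-uptake rule are assumptions invented for the example), the following Python sketch simulates both settings and compares group means:

    import random
    from statistics import mean

    random.seed(0)
    N = 10_000

    # Hypothetical population: one covariate, age, that could confound
    # any comparison of treated vs. control outcomes.
    ages = [random.gauss(50, 12) for _ in range(N)]

    # Observational setting (assumed uptake rule): the probability of
    # taking the treatment rises with age, so the treated group ends up
    # systematically older than the control group.
    obs = [random.random() < age / 100 for age in ages]
    obs_treated = [a for a, t in zip(ages, obs) if t]
    obs_control = [a for a, t in zip(ages, obs) if not t]

    # Randomized setting: a fair coin flip breaks any link between the
    # covariate and the assignment.
    rnd = [random.random() < 0.5 for _ in ages]
    rnd_treated = [a for a, t in zip(ages, rnd) if t]
    rnd_control = [a for a, t in zip(ages, rnd) if not t]

    print(f"observational: treated mean age {mean(obs_treated):.1f}, "
          f"control mean age {mean(obs_control):.1f}")
    print(f"randomized:    treated mean age {mean(rnd_treated):.1f}, "
          f"control mean age {mean(rnd_control):.1f}")

Running this shows the observational treated group several years older than its control group, while the two randomized groups agree to within sampling noise, which is exactly the balance property the excerpt attributes to randomization.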


  • (Wikipedia, 2013) ⇒ http://en.wikipedia.org/wiki/Post-hoc_analysis
    • In the design and analysis of experiments, post-hoc analysis (from Latin post hoc, “after this”) consists of looking at the data, after the experiment has concluded, for patterns that were not specified a priori. Critics sometimes call it data dredging, to evoke the sense that the more one looks, the more likely something will be found. More subtly, each time a pattern in the data is considered, a statistical test is effectively performed. This greatly inflates the total number of statistical tests and necessitates the use of multiple testing procedures to compensate (the sketch below simulates this inflation). However, compensating precisely is difficult, and in practice most results of post-hoc analyses are reported with unadjusted p-values. These p-values must be interpreted in light of the fact that they are a small and selected subset of a potentially large group of p-values. Results of post-hoc analyses should be explicitly labeled as such in reports and publications to avoid misleading readers.

      In practice, post-hoc analyses are usually concerned with finding patterns and/or relationships between subgroups of sampled populations that would otherwise remain undetected and undiscovered were a scientific community to rely strictly upon a priori statistical methods. Post-hoc tests (also known as a posteriori tests) greatly expand the range and capability of methods that can be applied in exploratory research. Post-hoc examination strengthens induction by limiting the probability that significant effects will seem to have been discovered between subgroups of a population when none actually exist. As it is, many scientific papers are published without adequate, preventive post-hoc control of the Type I error rate.[1]

      Post-hoc analysis is an important procedure without which multivariate hypothesis testing would greatly suffer, rendering the chances of discovering false positives unacceptably high. Ultimately, post-hoc testing creates better informed scientists who can therefore formulate better, more efficient a priori hypotheses and research designs.
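
The inflation described above can be made concrete with a small simulation (an illustration, not from the quoted source; the choice of m = 20 comparisons is an arbitrary assumption). Under a true null hypothesis a valid p-value is uniformly distributed on (0, 1), so drawing m uniforms stands in for m null-true post-hoc tests; the Python sketch below compares the unadjusted chance of at least one false positive with a Bonferroni-adjusted one:

    import random

    random.seed(0)
    m = 20          # number of subgroup comparisons inspected post hoc
    alpha = 0.05    # nominal per-test significance level
    trials = 10_000

    naive_hits = 0
    bonferroni_hits = 0
    for _ in range(trials):
        # Under the null, each p-value is Uniform(0, 1).
        pvals = [random.random() for _ in range(m)]
        if min(pvals) < alpha:
            naive_hits += 1        # at least one spurious "discovery"
        if min(pvals) < alpha / m:
            bonferroni_hits += 1   # Bonferroni-adjusted threshold

    print(f"family-wise false-positive rate, unadjusted: {naive_hits / trials:.2f}")
    print(f"family-wise false-positive rate, Bonferroni: {bonferroni_hits / trials:.2f}")

With 20 looks at purely null data, the unadjusted family-wise error rate comes out near 1 - 0.95**20, roughly 0.64, while the Bonferroni adjustment holds it near the nominal 0.05, which is why unadjusted post-hoc p-values must be read with the caution the excerpt recommends.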
