Reproducible Research

From GM-RKB

A Reproducible Research is an Academic Research whose published materials (paper, data, code, etc.) can be used to reproduce its results and to create new research based on it.



References

2019

  • (Wikipedia, 2019) ⇒ https://en.wikipedia.org/wiki/Reproducibility#Reproducible_research Retrieved:2019-10-19.
    • The term reproducible research refers to the idea that the ultimate product of academic research is the paper along with the laboratory notebooks and full computational environment used to produce the results in the paper, such as the code, data, etc., that can be used to reproduce the results and create new work based on the research [1] [2] [3] [4] [5]. Typical examples of reproducible research comprise compendia of data, code, and text files, often organised around an R Markdown source document or a Jupyter notebook.

      Psychology has seen a renewal of internal concerns about irreproducible results. A 2006 study showed that, of 141 authors of American Psychological Association (APA) empirical articles, 103 (73%) did not respond with their data over a 6-month period. A follow-up study published in 2015 found that 246 out of 394 contacted authors of papers in APA journals (62%) did not share their data upon request. A 2012 paper suggested that researchers should publish data along with their works, and a dataset was released alongside as a demonstration; in 2017, an article published in Scientific Data suggested that this may not be sufficient and that the whole analysis context should be disclosed.

      In 2015, Psychology became the first discipline to conduct and publish an open, registered empirical study of reproducibility, called the Reproducibility Project. 270 researchers from around the world collaborated to replicate 100 empirical studies from three top Psychology journals. Fewer than half of the attempted replications were successful.

      There have been initiatives to improve reporting, and hence reproducibility, in the medical literature for many years, beginning with the CONSORT initiative, which is now part of a wider initiative, the EQUATOR Network. This group has recently turned its attention to how better reporting might reduce waste in research, especially biomedical research.
Reproducible research is key to new discoveries in pharmacology. A Phase I discovery will be followed by Phase II reproductions as a drug develops towards commercial production. In recent decades Phase II success has fallen from 28% to 18%. A 2011 study found that 65% of medical studies were inconsistent when re-tested, and only 6% were completely reproducible.

In 2012, a study by Begley and Ellis published in Nature reviewed a decade of research and found that 47 out of 53 medical research papers focused on cancer research were irreproducible. The irreproducible studies had a number of features in common: the studies were not performed by investigators blinded to the experimental versus the control arms, there was a failure to repeat experiments, a lack of positive and negative controls, a failure to show all the data, inappropriate use of statistical tests, and use of reagents that were not appropriately validated. John P. A. Ioannidis writes, "While currently there is unilateral emphasis on 'first' discoveries, there should be as much emphasis on replication of discoveries." [6] The Nature study was itself reproduced in the journal PLOS ONE, which confirmed that a majority of cancer researchers surveyed had been unable to reproduce a result.

In 2016, Nature conducted a survey of 1,576 researchers who took a brief online questionnaire on reproducibility in research. According to the survey, more than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments. "Although 52% of those surveyed agree there is a significant 'crisis' of reproducibility, less than 31% think failure to reproduce published results means the result is probably wrong, and most say they still trust the published literature."
A 2018 study published in the journal PLOS ONE found that 14.4% of a sample of public health researchers had shared their data, code, or both, and a related study found limited reproducibility in six public health studies. Data and statistical code sharing, along with following statistical coding guidelines, are recommended practices for improving reproducibility.
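The practices described above (sharing data, code, and the computational environment together so that results can be re-run exactly) can be illustrated with a minimal, hypothetical sketch. The function names and the toy bootstrap analysis below are illustrative assumptions, not part of any cited study; the point is only that a fixed random seed plus recorded provenance makes the same inputs yield the same outputs on every run:

```python
# Minimal sketch of a reproducible analysis (hypothetical example):
# data, code, and provenance kept together in one self-contained file.
import hashlib
import json
import platform
import random

def run_analysis(data, seed=42):
    """A toy seeded bootstrap mean: the fixed seed makes it deterministic."""
    rng = random.Random(seed)  # same seed -> identical resamples every run
    n = len(data)
    resamples = [
        sum(rng.choice(data) for _ in range(n)) / n
        for _ in range(1000)
    ]
    return sum(resamples) / len(resamples)

def provenance(data):
    """Record what a reader needs to verify they reproduced the same run."""
    return {
        "python": platform.python_version(),       # environment version
        "data_sha256": hashlib.sha256(            # fingerprint of the data
            json.dumps(data).encode()
        ).hexdigest(),
    }

data = [2.0, 3.0, 5.0, 7.0]
print(run_analysis(data), provenance(data))
```

Running the script twice produces byte-identical analysis output, which is the property the compendium model (R Markdown documents, Jupyter notebooks) aims to guarantee at the scale of a whole paper.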
  1. Fomel, Sergey; Claerbout, Jon (2009). "Guest Editors' Introduction: Reproducible Research". Computing in Science and Engineering. 11 (1): 5–7. Bibcode:2009CSE....11a...5F. doi:10.1109/MCSE.2009.14.
  2. Buckheit, Jonathan B.; Donoho, David L. (May 1995). WaveLab and Reproducible Research (PDF) (Report). California, United States: Stanford University, Department of Statistics. Technical Report No. 474. Retrieved 5 January 2015.
  3. "The Yale Law School Round Table on Data and Core Sharing: "Reproducible Research"". Computing in Science and Engineering. 12 (5): 8–12. 2010. doi:10.1109/MCSE.2010.113.
  4. Marwick, Ben (2016). "Computational reproducibility in archaeological research: Basic principles and a case study of their implementation". Journal of Archaeological Method and Theory. 24 (2): 424–450. doi:10.1007/s10816-015-9272-9.
  5. Goodman, Steven N.; Fanelli, Daniele; Ioannidis, John P. A. (1 June 2016). "What does research reproducibility mean?". Science Translational Medicine. 8 (341): 341ps12. doi:10.1126/scitranslmed.aaf5027. PMID 27252173.
  6. Is the spirit of Piltdown man alive and well?