According to the article “Broken Science” published at Reason.com in January 2016, approximately 15 million researchers published more than 25 million scientific papers between 1996 and 2011.
That’s a lot of research, and much of it—according to multiple sources—is not reliable. Here are just two of the shocking facts from the article:
- In 2011, researchers at Bayer Healthcare could not replicate 43 of 67 published preclinical studies (64%) that they were relying on to develop cardiovascular and cancer treatments.
- The journal Science reported in August 2015 that only about one-third of 100 psychological studies published in three leading psychology journals could be adequately replicated. That means roughly two-thirds of the studies could not.
The article presents a number of opinions about why so much unreliable science is being published:
- Publication bias – leading journals’ desire to publish positive and novel results
- Academic career model – a system that rewards production and striking results rather than scientific rigor, reproducibility, and transparency
- Self-interest – some scientists’ practices of running multiple statistical tests and reporting only the most positive result and of hypothesizing after the results are in to spin the data in a positive light
- Lack of statistical power – small sample sizes because of the tremendous cost required—in money and time—to test sufficient numbers of animals or humans
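The self-interest point above — running many statistical tests and reporting only the most favorable one — is, at bottom, simple arithmetic. A minimal sketch (not from the article; the function name is my own) of why that practice inflates false positives:

```python
# Probability of at least one false positive when running k independent
# significance tests at level alpha on data with no real effect.
def family_wise_error_rate(k: int, alpha: float = 0.05) -> float:
    # Each test "passes" by chance with probability alpha, so the chance
    # that none of k tests passes is (1 - alpha) ** k.
    return 1 - (1 - alpha) ** k

# One test at alpha = 0.05 has a 5% false-positive rate, but cherry-picking
# the "best" of 20 tests pushes the effective rate near two-thirds:
print(round(family_wise_error_rate(1), 3))   # 0.05
print(round(family_wise_error_rate(20), 3))  # 0.642
```

This is why a researcher who quietly tries 20 hypotheses will, more often than not, find something "significant" to report even when no real effect exists.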
One proposed solution is to conduct peer review after an article is published. Three open-source platforms encourage researchers to post their projects and their data so that “the priorities in the peer review process would shift from assessing whether the manuscript should be published to whether the ideas should be taken seriously and how they can be improved,” according to University of Virginia psychologist Brian Nosek. Nosek is a cofounder of one of those platforms: the Open Science Framework.
The two other platforms mentioned in the article are arXiv (for e-prints in physics, mathematics, computer science, quantitative biology, quantitative finance, and statistics) and bioRxiv (the preprint server for biology).
The jury is still out on whether enough of the scientific community will embrace this approach to make an appreciable difference in the quality of the work being published.