Reproducibility in Science

Many will find this scenario familiar: a groundbreaking study is published, makes waves (even in the mainstream media), and is then contradicted a few years later. This apparent unreliability can be partly attributed to the nature of scientific discovery: as scientists continue to research and learn, new information about a substance or a procedure may come to light that changes the conclusions. We once thought the world was flat and that the atom could not be split, and we now know we were wrong.

Another reason for scientific backtracking is the problem of replicability. One project spent almost four years selecting 100 published studies from the field of psychological research and attempting to replicate them to confirm the results. Only 39% of the original results held up when the studies were repeated.


One possible explanation is the difficulty of re-creating the exact conditions of the original experiment. Another is the "file-drawer problem": the bias toward publishing only the most positive or supportive results when an experiment has been run multiple times with less-than-consistent outcomes. Small sample sizes also increase the likelihood that the reported statistics will be misleading. And finally, there is outright manipulation, whether of experimental conditions to bring about the desired outcome or of the data themselves.
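To see how small samples and selective reporting interact, consider the following illustrative simulation (not drawn from any of the studies discussed here; the parameters TRUE_EFFECT, SAMPLE_SIZE, and N_EXPERIMENTS are arbitrary choices for the sketch). It repeatedly runs a small two-group experiment with a modest true effect and then looks only at the "significant" runs, mimicking what ends up outside the file drawer.

```python
# Illustrative sketch: how small samples plus selective reporting
# inflate the effects that reach publication.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.2       # small true standardized effect (Cohen's d)
SAMPLE_SIZE = 20        # small per-group sample size
N_EXPERIMENTS = 10_000  # number of simulated experiments

all_effects = []        # observed effect in every experiment
published_effects = []  # observed effect in "significant" experiments only

for _ in range(N_EXPERIMENTS):
    control = rng.normal(0.0, 1.0, SAMPLE_SIZE)
    treatment = rng.normal(TRUE_EFFECT, 1.0, SAMPLE_SIZE)

    # Observed standardized effect (Cohen's d with pooled variance)
    observed_d = (treatment.mean() - control.mean()) / np.sqrt(
        (treatment.var(ddof=1) + control.var(ddof=1)) / 2
    )
    _, p_value = stats.ttest_ind(treatment, control)

    all_effects.append(observed_d)
    if p_value < 0.05:  # only "positive" results escape the file drawer
        published_effects.append(observed_d)

print(f"True effect:                   {TRUE_EFFECT:.2f}")
print(f"Mean effect, all experiments:  {np.mean(all_effects):.2f}")
print(f"Mean effect, significant only: {np.mean(published_effects):.2f}")
print(f"Share reaching significance:   {len(published_effects) / N_EXPERIMENTS:.0%}")
```

With these assumed numbers, only a small fraction of experiments reach significance, and the average effect among those "publishable" runs is several times larger than the true effect, which is exactly why a later replication with the same design is likely to disappoint.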

In a survey published in Nature, 52% of scientists agreed that there is a significant "crisis" of reproducibility, but fewer than 31% said that a failure to reproduce published results means the result is probably wrong. Further, most respondents said that they still trust the published literature. In the same survey, more than 60% of scientists pointed to pressure to publish and selective reporting as the leading causes of replicability problems. Judith Kimble of the University of Wisconsin is quoted as saying, "Everyone is stretched thinner these days."

Fully acknowledging this pressure and the reproducibility problem could lay the groundwork for rethinking the reliance on measures such as publication volume and journal impact factor in career evaluation and merit decisions. That rethinking could also extend to championing the open-access publishing model, in which everyone can access research easily and for free.