Reproducibility redux…

The crisis of reproducibility in academic research is a troubling trend that deserves more scrutiny. I’ve blogged and written about this before, but as 2024 begins, it’s worth returning to the issue. Anecdotally, most of my scientist colleagues have been unable to reproduce a published result on at least one occasion. For a good review of the actual numbers, see here. Why are findings from prestigious universities and journals seemingly so unreliable?

There are likely multiple drivers behind the reproducibility problem. Scientists face immense pressure to publish groundbreaking positive results, while null findings and replication studies are less likely to be accepted by high-impact journals. This incentivizes scientists to follow flashier leads before they are thoroughly vetted. Researchers must also chase funding, which increasingly goes to bold proposals touting novel discoveries over incremental confirmations. That intense competition tempts some researchers into questionable practices to gain an edge.

The institutional incentives seem to actively select against rigor and verification. But individual biases also contaminate research integrity. Thinking back to my postdoctoral years at NIH, it was clear even then that scientists get emotionally invested in their hypotheses and may unconsciously gloss over contrary signals. Or they may succumb to confirmation bias, designing experiments in ways that stack the deck toward validating their assumptions. This risk seems to grow with the prominence of the researcher. It’s no surprise when findings tainted this way turn out to be statistical flukes that cannot withstand outside scrutiny.
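
To make the “statistical fluke” point concrete, here’s a minimal sketch in Python (assuming numpy and scipy are available; the study counts and sample sizes are invented purely for illustration). It simulates many studies of a nonexistent effect and counts how many clear the conventional p < 0.05 bar by chance alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 1000   # hypothetical independent studies
n_per_group = 20   # samples per arm
alpha = 0.05       # conventional significance threshold

false_positives = 0
for _ in range(n_studies):
    # Both groups are drawn from the SAME distribution: no real effect exists.
    control = rng.normal(0, 1, n_per_group)
    treated = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(control, treated)
    if p < alpha:
        false_positives += 1

print(f"'Significant' findings with no true effect: {false_positives}/{n_studies}")
# Expect roughly alpha * n_studies, i.e. ~50 spurious hits. Those are exactly
# the results most likely to be written up, submitted, and published.
```

Publication bias then does the rest: the flukes get published, the unremarkable true negatives do not, and the literature fills with findings no one can replicate.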

More transparency, data sharing, and independent audits of published research could quantify the scale of irreproducibility across disciplines. Meanwhile, funders and academic institutions should alter incentives to reward rigor as much as innovation. Replication studies verifying high-profile results deserve higher status and support, and journals can demand stricter methodological reporting to catch questionable practices before publication. Until the institutions that produce, fund, publish, and consume academic research value rigor and replication as much as novelty, the problem will likely persist. There seem to be deeper sociological and institutional drivers at play than any single solution can address overnight, but facing the depth of the reproducibility crisis is the first step.

Happy New Year!