
Also published on my newsletter
The replication crisis isn’t a mystery. After presiding over reviews of thousands of grants at NSF’s Biological Sciences Directorate, I can tell you exactly why science struggles to reproduce its own findings: we built incentives that reward novelty and punish verification.
A 2016 Nature survey found that over 70% of scientists have failed to reproduce another researcher’s experiments. But this isn’t about sloppy science or bad actors. It’s straightforward economics.
The Researcher’s Optimization Problem
You have limited time and resources. You can either:
- Pursue novel findings → potential Nature paper, grant funding, tenure
- Replicate someone else’s work → maybe a minor publication, minimal funding, colleagues questioning your creativity
The expected value calculation is obvious. Replication is a public good with privatized costs.
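The tradeoff above can be sketched as a toy expected-value calculation. Every number below is a hypothetical illustration I've chosen to make the asymmetry visible, not data from any study or funding agency:

```python
# Toy model of the researcher's choice between novelty and replication.
# All probabilities and payoffs are hypothetical, chosen for illustration only.

def expected_value(p_success: float, payoff_success: float, payoff_failure: float) -> float:
    """Expected career payoff of a research strategy."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Novel work: low odds of a big hit, but the hit is career-making.
ev_novel = expected_value(p_success=0.10, payoff_success=100.0, payoff_failure=5.0)

# Replication: decent odds of a clean result, but the payoff is modest either way.
ev_replication = expected_value(p_success=0.70, payoff_success=10.0, payoff_failure=2.0)

print(f"novel:       {ev_novel:.1f}")        # 0.10*100 + 0.90*5 = 14.5
print(f"replication: {ev_replication:.1f}")  # 0.70*10 + 0.30*2 = 7.6
```

Even with a 90% chance of failure, chasing novelty dominates under these (invented) payoffs, because prestige scales with surprise, not reliability. Shift the payoffs, and rational behavior shifts with them, which is the whole argument of this piece.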
How NSF Review Panels Work
At NSF, I watched this play out in every review panel. Proposals to replicate existing work faced an uphill battle. Reviewers—themselves successful researchers who got there by publishing novel findings—naturally favor creative, untested ideas over verification work.
We tried various fixes. Some programs explicitly funded replication studies. Some review criteria emphasized robustness over novelty. But the core incentive remained: breakthrough science gets you the next grant; careful verification doesn’t.
The problem runs deeper than any single agency. Universities want prestigious publications. Journals want citations. Researchers want tenure. Nobody’s optimization function includes “produces reliable knowledge that someone else can build on.”
The Information Market Is Broken
Even when researchers try to replicate, they’re working with incomplete information. Methods sections in papers are sanitized versions of what actually happened in the lab. “Cells were cultured under standard conditions” means something different in every lab. One researcher’s gentle mixing is another’s vigorous shaking.
This information asymmetry makes replication attempts inherently inefficient. You’re trying to reproduce a result while missing critical details that the original researcher might not even realize mattered.
The Time Horizon Problem
NSF grants run 3-5 years. Tenure clocks run 6-7 years. But scientific truth emerges over decades. We’re optimizing for the wrong timescale.
During my time at NSF, I saw brilliant researchers make pragmatic choices: publish something surprising now (even if it might not hold up) rather than spend two years carefully verifying it. That’s not a moral failing—it’s responding rationally to the incentives we created.
What Would Actually Fix This
Make replication profitable:
- Count verification studies equally with novel findings in grant review and tenure decisions
- Fund researchers whose job is rigorous replication—make it a legitimate career path
- Require data and detailed methods sharing as a funding condition, not an afterthought
- Make failed replications as publishable as successful ones
The challenge isn’t technical. It’s institutional. We designed a market that overproduces flashy results and underproduces reliable knowledge. Until we fix the incentives, we’ll keep getting exactly what we’re paying for.