Why Transformational Science Can’t Get Funded: The Einstein Problem

Proposal declined. Insufficient institutional support. No preliminary data. Applicant lacks relevant expertise—they work in a patent office, not a research laboratory. The proposed research is too speculative and challenges well-established physical laws without adequate justification. The principal investigator is 26 years old and has no prior experience in physics.

This would have been the fate of Albert Einstein in 1905, had the NSF existed as it does today. Even with grant calls requesting ‘transformative ideas,’ an Einstein proposal would have been rejected outright. And yet 1905 has been called Einstein’s miracle year. Yes, he was a patent clerk working in Bern, Switzerland, without a university affiliation. He had no access to a laboratory or equipment. He worked in isolation on evenings and weekends and was unknown in the physics community. Yet, despite those disadvantages, he produced four revolutionary papers: on the photoelectric effect, Brownian motion, special relativity, and the famous E = mc² mass-energy equivalence.

Taken as a whole, the work was purely theoretical. There were no preliminary data. The papers challenged fundamental assumptions of the field and, as such, were highly speculative and decidedly high-risk. There were no broader impacts because there were no immediate practical applications. And the work was inherently multidisciplinary, bridging mechanics, optics, and thermodynamics. Yet the work was transformative. By modern grant standards, Einstein’s work failed every criterion.

The Modern Grant Application – A Thought Experiment

Let’s imagine Einstein’s 1905 work packaged as a current NSF proposal. What would it look like, and how would it fare in peer review?

Einstein’s Hypothetical NSF Proposal

Project Title: Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light

Principal Investigator: Albert Einstein, Technical Expert Third Class, Swiss Federal Patent Office

Institution: None (individual applicant)

Requested Duration: 3 years

Budget: $150,000 (minimal – just salary support and travel to one conference)

Project Summary

This proposal challenges the fundamental assumptions underlying Newtonian mechanics and Maxwell’s electromagnetic theory. I propose that space and time are not absolute but relative, dependent on the observer’s state of motion. This requires abandoning the concept of the luminiferous ether and reconceptualizing the relationship between matter and energy. The work will be entirely theoretical, relying on thought experiments and mathematical derivation to establish a new framework for understanding physical reality.

How NSF Review Panels Would Evaluate This

Intellectual Merit: Poor

Criterion: Does the proposed activity advance knowledge and understanding?

Panel Assessment: The proposal makes extraordinary claims without adequate preliminary data. The applicant asserts that Newtonian mechanics—the foundation of physics for over 200 years—requires fundamental revision yet provides no experimental evidence supporting this radical departure.

Specific Concerns:

Lack of Preliminary Results: The proposal contains no preliminary data demonstrating the feasibility of the approach. There are no prior publications by the applicant in peer-reviewed physics journals. The applicant references his own unpublished manuscripts, which cannot be evaluated.

Methodology Insufficient: The proposed “thought experiments” do not constitute rigorous scientific methodology. How will hypotheses be tested? What experimental validation is planned? The proposal describes mathematical derivations but provides no pathway to empirical verification. Without experimental confirmation, these remain untestable speculations.

Contradicts Established Science: The proposal challenges Newton’s laws of motion and the existence of the luminiferous ether—concepts supported by centuries of successful physics. While scientific progress requires questioning assumptions, such fundamental challenges require extraordinary evidence. The applicant provides none.

Lack of Expertise: The PI works at a patent office and has no formal research position. He has no advisor supporting this work, no collaborators at research institutions, and no track record in theoretical physics. His biosketch lists a doctorate from the University of Zurich but no subsequent research appointments or publications in relevant areas.

Representative Reviewer Comments:

Reviewer 1: “While the mathematical treatment shows some sophistication, the fundamental premise—that simultaneity is relative—contradicts basic physical intuition and has no experimental support. The proposal reads more like philosophy than physics.”

Reviewer 2: “The applicant’s treatment of the photoelectric effect proposes that light behaves as discrete particles, directly contradicting Maxwell’s well-established wave theory. This is not innovation; it’s contradiction without justification.”

Reviewer 3: “I appreciate the applicant’s ambition, but this proposal is not ready for funding. I recommend the PI establish himself at a research institution, publish preliminary findings, and gather experimental evidence before requesting support for such speculative work. Perhaps a collaboration with experimentalists at a major university would strengthen future submissions.”

Broader Impacts: Very Poor

Criterion: Does the proposed activity benefit society and achieve specific societal outcomes?

Panel Assessment: The proposal fails to articulate any concrete broader impacts. The work is purely theoretical with no clear pathway to societal benefit.

Specific Concerns:

No Clear Applications: The proposal does not explain how reconceptualizing space and time would benefit society. What problems would this solve? What technologies would it enable? The PI suggests the work is “fundamental” but provides no examples of potential applications.

No Educational Component: There is no plan for training students or postdocs. The PI works alone at a patent office, with no access to students and no institutional infrastructure for education and training.

No Outreach Plan: The proposal includes no activities to communicate findings to the public or policymakers. There is no plan for broader dissemination beyond potential publication in physics journals.

Questionable Impact Timeline: Even if the proposed theories are correct, the proposal provides no timeline for practical applications. How long until these ideas translate into societal benefit? The proposal is silent on this critical question.

Representative Reviewer Comments:

Reviewer 1: “The broader impacts section is essentially non-existent. The PI states that ‘fundamental understanding of nature has intrinsic value,’ but this does not meet NSF’s requirement for concrete societal outcomes.”

Reviewer 2: “I cannot envision how this work, even if successful, would lead to practical applications within a reasonable timeframe. The proposal needs to articulate a clear pathway from theory to impact.”

Reviewer 3: “NSF has limited resources and must prioritize research with demonstrable benefits to society. This proposal does not make that case.”

Panel Summary and Recommendation

Intellectual Merit Rating: Poor
Broader Impacts Rating: Very Poor

Overall Assessment: While the panel appreciates the PI’s creativity and mathematical ability, the proposal is highly speculative, lacks preliminary data, contradicts established physical laws without sufficient justification, and fails to articulate broader impacts. The PI’s lack of institutional affiliation and research track record raises concerns about feasibility.

The panel notes that the PI appears talented and encourages resubmission after:

  1. Establishing an independent position at a research institution
  2. Publishing preliminary findings in peer-reviewed journals
  3. Developing collaborations with experimental physicists
  4. Articulating a clearer pathway to practical applications
  5. Demonstrating broader impacts through education and outreach

Recommendation: Decline

Panel Consensus: Not competitive for funding in the current cycle. The proposal would need substantial revision and preliminary results before it could be considered favorably.

The Summary Statement Einstein Would Receive

Dear Dr. Einstein,

Thank you for your submission to the National Science Foundation. Unfortunately, your proposal, “Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light,” was not recommended for funding.

The panel recognized your ambition and mathematical capabilities but identified several concerns that prevented a favorable recommendation:

– Lack of preliminary data supporting the feasibility of your approach
– Insufficient experimental validation of your theoretical claims
– Absence of institutional support and research infrastructure
– Inadequate articulation of broader impacts and societal benefits

We encourage you to address these concerns and consider resubmission in a future cycle. You may wish to establish collaborations with experimentalists and develop a clearer pathway from theory to application.

We appreciate your interest in NSF funding and wish you success in your future endeavors.

Sincerely,
NSF Program Officer

And that would be it. Einstein’s miracle year—four papers that transformed physics and laid the groundwork for quantum mechanics, nuclear energy, GPS satellites, and our modern understanding of the cosmos—would have died in peer review, never funded, never attempted.

The system would have protected us from wasting taxpayer dollars on such speculation. It would have worked exactly as designed.

The Preliminary Data Paradox

The contemporary grant review process implicitly expects even foundational, transformative work to present preliminary data, despite the fact that truly groundbreaking ideas often do not begin with tangible evidence but instead evolve through thought experiments and mathematical derivation, as Einstein’s did. This unrealistic expectation stifles innovation at its core: it forces researchers like Einstein to abandon pure theoretical exploration and confines them to a narrow experimental framework in which they cannot freely challenge existing paradigms, even when their work, though lacking immediate empirical validation, promises to revolutionize our fundamental understanding.

The Risk-Aversion Problem

Often in grant reviews I see a very junior reviewer criticize work as too risky, dooming the proposal to failure, even as I sense their admiration for its promise and transformative potential. The conservative, risk-averse mentality of modern review panels is deeply rooted in a scientific culture that values incremental advances over speculative leaps, a bias reinforced by career incentives in which funding decisions can make or break a professional trajectory. Reviewers are reluctant to back proposals like Einstein’s because they invite controversy, may not align with the reviewers’ own research interests, and carry a high risk of failure, a reflection of how science within academic institutions has traditionally advanced through evolution rather than revolution.

The Credentials Catch-22

To secure funding in today’s scientific landscape, one typically needs an institutional affiliation and a publication record that signals strong research credentials, a catch-22 in which groundbreaking innovators with no formal backing or prior track record struggle to gain reviewers’ trust. This requirement discriminates against fresh perspectives from people like Einstein, who worked outside established institutions and lacked the mentorship usually deemed necessary for academic recognition, a stark contrast with how transformative outsiders with unconventional backgrounds have historically moved science forward.

The Short-Term Timeline Problem

Einstein developed special relativity over years with no milestones, no quarterly reports, no renewals. How would he answer, “What will you accomplish in Year 2?” The funding cycles of major agencies, NSF’s typical three to five years for regular grants and NIH’s maximum of five years, do not accommodate the long gestation that foundational theories require. Such timelines impose an unfair constraint on researchers like Einstein, whose transformative ideas did not unfold against fixed milestones but in an unconstrained fashion, showing how poorly this model fits truly revolutionary discoveries, for which a linear progression is unrealistic and even counterproductive.

The Impact Statement Trap

Requirements to demonstrate immediate “broader impacts” or societal benefits pose significant obstacles for transformative proposals, whose implications often reach far beyond any direct application, as Einstein’s foundational contributions to physics illustrate. The trap is sprung when reviewers, fearing the misuse of speculative science or unable to perceive future benefits because of cognitive biases, force proposals into a mold where immediate practical impact takes precedence over visionary scientific contribution, further marginalizing transformative studies that could open entirely new fields.

The Interdisciplinary Gap

The disciplinary structure of current funding schemes is at odds with the interdisciplinary character of revolutionary proposals like Einstein’s, since transformative work frequently crosses conventional academic boundaries and merges concepts from multiple fields. Such work is often excluded not only for lack of institutional affiliation but also because it challenges compartmentalized funding models that struggle with the non-linear, cross-disciplinary nature of truly transformative science, a significant obstacle for proposals that are inherently interdisciplinary yet cannot fit neatly within any one program’s structure or expertise.

The hypothetical funding scenarios for transformational science, as presented through the lens of Albert Einstein’s groundbreaking work, illustrate the inherent challenges faced by revolutionary ideas. To further highlight this problem, let’s take a look at other seminal discoveries that may have been overlooked or deemed unworthy of support under current grant review criteria:

Copernicus’ Heliocentric Model: In a contemporary setting, Copernicus’ heliocentric model might face skepticism due to its challenge to the widely accepted geocentric view of the universe. Lacking preliminary data and facing resistance from established religious beliefs, his proposal would likely be rejected under modern grant review criteria, despite its ultimate validation through observation and mathematical proof.

Gregor Mendel’s Pea Plant Experiments: The foundation of modern genetics was laid by Mendel’s pea plant experiments, yet his work remained largely unnoticed for decades after its initial publication. A grant reviewer in the 1860s would likely have dismissed Mendel’s findings as too speculative and without immediate practical applications, overlooking the fundamental insights he provided about heredity and genetic inheritance.

mRNA Vaccines: Katalin Karikó spent decades struggling to fund mRNA therapeutic research. Too risky. Too speculative. No clear applications. Penn demoted her. NIH rejected her grants. Reviewers wanted proof that mRNA could work as a therapeutic platform, but without funding, she couldn’t generate that proof. Then COVID-19 hit, and mRNA vaccines saved millions of lives. The technology that couldn’t get funded became one of the most important medical breakthroughs of the century.

Why does all of this matter now? First, the evidence is mounting that American science is at an inflection point. The rate of truly disruptive discoveries—those that reshape fields rather than incrementally advance them—has been declining for decades, even as scientific output has grown. Both NSF and NIH leadership recognize this troubling trend.

This innovation crisis manifests in the problems we cannot solve. Cancer and Alzheimer’s have resisted decades of intensive research. AI alignment and safety remain fundamentally unsolved as we deploy increasingly powerful systems. We haven’t returned to the moon in over 50 years. In my own field of neuroscience, incremental progress has failed to produce treatments for the diseases that devastate millions of families.

These failures point to a deeper problem: we’ve optimized our funding system for incremental advances, not transformational breakthroughs. Making matters worse, we’re losing ground internationally. China’s funding models allow longer timelines and embrace higher risk. European ERC grants support more adventurous research. Many of our best researchers now weigh opportunities overseas or in industry, where they can pursue riskier ideas with greater freedom.

What Needs to Change

Fixing this requires fundamental changes at multiple levels—from how we structure programs to how we evaluate proposals to how we support unconventional researchers.

Create separate funding streams for high-risk research. NSF and NIH need more programs that emulate DARPA’s high-risk, high-reward model. These programs should be insulated from traditional grant review: no preliminary data required, longer timelines (10+ years), and peer review conducted by scientists who have themselves taken major risks and succeeded. I propose that 10 percent of each agency’s budget be set aside for “Einstein Grants”, awards that deliberately bet against the status quo. Judge proposals on originality and potential impact, not feasibility and preliminary data. Accept that most will fail, but the few that succeed will be transformational.

Protect exploratory research within traditional programs. Even standard grant programs should allow pivots when researchers discover unexpected directions. We should fund people with track records of insight, not just projects with detailed timelines. Judge proposals on the quality of thinking, not the completeness of deliverables.

Reform peer review processes. The current system needs three critical changes. First, separate review tracks for incremental versus transformational proposals—they require fundamentally different evaluation criteria. Second, don’t let a single negative review kill bold ideas; if three reviewers are enthusiastic and one is skeptical, fund it. Third, value originality over feasibility. The most transformational ideas often sound impossible until someone proves otherwise.
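To make the second of those changes concrete, here is a minimal sketch of a panel-scoring rule under which a lone skeptical review cannot veto an otherwise enthusiastic panel. The 1-to-5 score scale, the enthusiasm cutoff, and the fund_decision function are illustrative assumptions, not NSF policy or practice.

```python
# Illustrative sketch: aggregate panel scores so one skeptic cannot veto.
# Scores use a hypothetical 1 (poor) to 5 (excellent) scale; thresholds are assumptions.

def fund_decision(scores, enthusiasm_cutoff=4, min_champions=3):
    """Recommend funding if enough reviewers are enthusiastic,
    after ignoring the single most negative score."""
    if len(scores) < 2:
        return False
    trimmed = sorted(scores)[1:]          # drop the lowest score (the lone skeptic)
    champions = sum(s >= enthusiasm_cutoff for s in trimmed)
    return champions >= min_champions

# Example: three enthusiastic reviewers, one skeptic -> recommend funding.
print(fund_decision([5, 5, 4, 1]))   # True
print(fund_decision([3, 3, 2, 1]))   # False
```

The design choice is simply to drop the single lowest score before counting champions; any aggregation that caps the influence of one outlier review would serve the same purpose.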

Support alternative career paths. We should fund more researchers outside traditional academic institutions and recognize that the best science doesn’t always emerge from R1 universities. Explicitly value interdisciplinary training and create flexible career paths that don’t punish researchers who take time to develop unconventional ideas. Track where our most creative researchers go when they leave academia—if we’re consistently losing them to industry or foreign institutions, that’s a failure signal we must heed.

Acknowledge the challenge ahead. These reforms require sustained political will across multiple administrations and consistent support from Congress. They demand patience—accepting that transformational breakthroughs can’t be scheduled or guaranteed. But the alternative is clear: we continue optimizing for incremental progress while the fundamental problems remain unsolved and our international competitors embrace the risk we’ve abandoned.

The choice before us is stark. We can optimize the current system for productivity—incremental papers, measurable progress—or we can create space for transformative discovery. We cannot have both with the same funding mechanisms.

The cost of inaction is clear: we will miss the next Einstein, fall further behind in fundamental discovery, watch science become a bureaucratic exercise, and lose what made American science into a powerhouse of discovery.

This requires action at every level. Scientists must advocate for reform and be willing to champion risky proposals. Program officers must have the courage to fund work that reviewers call too speculative. Policymakers must create new funding models and resist the temptation to demand near-term results. The public must understand that breakthrough science looks different from incremental progress—it’s messy, unpredictable, and often wrong before it’s right.

In 1905, Einstein changed our understanding of the universe while working in a patent office with no grant funding. Today, our funding system would never have let him try. We need to fix that.

What Grant Reviewers Actually Look For (and What They Ignore)

A close colleague of mine at a major US research university begins preparing a grant proposal by creating something he calls a “storyboard.” When I was growing up in LA, the concept of a storyboard was very familiar to me. Many of my high school friends at the time aspired to careers in the locally dominant entertainment industry. The storyboard, developed at the Walt Disney studio, used pictures to visualize a movie’s plot flow before production, often even before a screenplay was complete. In the LA movie business, you could look at a storyboard and pretty much grasp right away what a movie was about.

Back to my colleague who uses storyboards to build grant proposals: his key insight is that the storyboard is done when someone outside the group can look at it and come away with a good understanding of what the grant is all about. If the storyboard is coherent, it’s easy to make the proposal coherent as well. Further, the storyboard often gets reused, in modified form, as the grant’s central graphic. Yes, a picture is worth several thousand words.

My colleague is onto something profound about how grant review works across all funders, including those in the private sector. But for this issue of Science Policy Insider, we’re going to consider the agency where I headed up Biological Sciences: the NSF. What about NIH, you may ask? Many of the principles here apply to both agencies. But here we’re going to focus, laser-like, on the National Science Foundation, even as it undergoes drastic changes.

The Brutal Reality of NSF Panel Review

After sitting through too many grant panels at NSF, I can tell you this: most proposals get 15-20 minutes of discussion time in a panel that’s reviewing 30-50 proposals over three days. Your carefully crafted 15-page research plan? The primary reviewer read it thoroughly. The other two panelists skimmed it. Everyone else glanced at the summary.

This isn’t because reviewers are lazy. They’re exhausted, brilliant researchers who read proposals outside their immediate expertise, often late at night, while also worrying about their own grants, their trainees, and the paper referee statements they owe.

The storyboard approach works because it acknowledges this reality: reviewers are looking for a straightforward narrative they can grasp quickly and defend to the panel.

What Actually Happens in Review Panels

Here’s how it typically unfolds:

9:00 AM, Day Two of panel: The primary reviewer presents your proposal. They have 5 minutes to summarize your aims, approach, and why it matters. If they struggle to articulate your story coherently, you’re in trouble—not because your proposed science is bad, but because they can’t effectively advocate for you.

The secondary and tertiary reviewers add their perspectives. Then the panel discusses. The program officers watch for enthusiasm, coherence of the argument, and whether anyone is deeply opposed.

The proposals that succeed have champions: reviewers who “get it” immediately and can explain to others why it matters. The storyboard method works because it makes a proposal easy to champion.

What Reviewers Actually Look For

After watching this process play out thousands of times, here’s what I learned reviewers truly care about:

1. Can I explain this to the panel in 3 minutes?

If your research plan requires a flowchart to understand, the primary reviewer will simplify it—possibly incorrectly. Better to give them the simplified version yourself.

2. Is the question worth answering?

Not “is this interesting?” but “will anyone care about the answer?” Reviewers need to justify spending taxpayer money. Give them that justification explicitly.

3. Can this person actually do this?

No matter what is written down in the solicitation, preliminary data matters enormously, but not for the reason applicants think. It’s not about proving the hypothesis—it’s about proving you have the technical capability and haven’t missed an obvious problem.

4. Is this the right approach?

Reviewers are surprisingly forgiving about whether your specific hypothesis is correct. They’re much less forgiving about whether you’re using appropriate methods or have thought through alternatives.

5. Will this move the field forward?

Notice: not “revolutionize” or “transform”, just move forward. Incremental progress from a well-designed study beats a transformative idea with unclear methods. But doesn’t the call state that the proposed work should change the world? Sure, but from a practical standpoint, what counts for the reviewers is steady progress. And here’s the tricky part: while steady progress is key for the reviewers, transformative potential really matters to the program officers, who make the penultimate decision. So a balance is necessary.

What Reviewers Ignore (Even Though You Spent Weeks on It)

The extensive literature review: They skim it to see if you know the field. The 47 citations demonstrating your comprehensive knowledge? They checked that you cited the key papers and moved on.

Your detailed budget justification: Unless something looks wildly off, reviewers assume you know what your research costs. The line-by-line explanation of why you need that particular microscope? Skimmed.

Your publication list: They look at: Do you publish in good journals? Are you productive? Have you published on this topic before? That’s it. The distinction between your 47th and 52nd paper doesn’t matter.

The broader impacts section that you agonized over: I feel guilty about this because I’ve often harped on broader impacts as a central criterion. Truth: most reviewers read this quickly to verify you addressed it competently. Unless it’s either exceptional or terrible, it rarely drives funding decisions. And these days, “broader impacts” means how the work will benefit all American citizens (think public health) or US national security.

The Elements That Actually Drive Decisions

Clarity of the research goals: Can the reviewer recite your three main questions without looking at the proposal? If not, rewrite.

Logical flow: Does each aim build on the previous one? Or are they three unrelated projects stapled together? Reviewers can tell.

Feasibility signals: Preliminary data, established collaborations, access to necessary resources, realistic timeline. These say, “this person will actually complete this work.”

Positioning: Is this filling a real gap, or are you slightly tweaking someone else’s approach? Reviewers want to fund work that moves us somewhere new, even if incrementally.

The writing quality: Clear, direct prose suggests clear thinking. Dense, jargon-heavy writing suggests unclear thinking (even if that’s unfair).

The Most Common Mistake

Applicants try to impress reviewers with complexity and comprehensiveness. They want to show they’ve thought of everything, considered every alternative, read every paper.

But reviewers are looking for clarity and confidence. They want to understand quickly what you’re proposing and why it matters. They want to feel confident you’ll succeed.

The storyboard method works because it forces simplicity. If you can’t draw a simple picture of your proposal that an outsider immediately understands, you don’t have a fundable story yet.

But Wait, There’s More

As hinted above, at NSF that panel review is strictly advisory. I’ve personally seen proposals with excellent reviews get declined, and the reverse. The key decision-maker? That’s the cognizant program officer for the solicitation. These days there’s additional vetting for alignment with the Administration’s political goals, but that’s a topic for a future newsletter.

What This Means for Your Proposal

Before you write a single word:

  • Can you explain your project in three sentences?
  • Can someone outside your subfield understand why it matters?
  • Do you have a clear narrative arc from question to approach to impact?

If not, you’re not ready to write. You’re ready to storyboard.

Build the simple, clear story first. Then elaborate carefully, making sure every detail serves that core narrative.

Reviewers are smart, busy people trying to identify good science under time pressure. Don’t make them work to understand your brilliance. Give them a story they can grasp, defend, and champion.

That’s what my colleague understood. And based on his funding success rate, the reviewers appreciate it.

The Replication Crisis Is a Market Failure (And We Designed It That Way)

Also published on my newsletter

The replication crisis isn’t a mystery. After presiding over the review of thousands of grants at NSF’s Biological Sciences Directorate, I can tell you exactly why science struggles to reproduce its own findings: we built incentives that reward novelty and punish verification.

A 2016 Nature survey found that over 70% of scientists have failed to reproduce another researcher’s experiments. But this isn’t about sloppy science or bad actors. It’s straightforward economics.


The Researcher’s Optimization Problem

You have limited time and resources. You can either:

  1. Pursue novel findings → potential Nature paper, grant funding, tenure
  2. Replicate someone’s work → maybe a minor publication, minimal funding, colleagues questioning your creativity

The expected value calculation is obvious. Replication is a public good with privatized costs.
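To see why the calculation is so lopsided, here is a purely illustrative back-of-the-envelope sketch; every probability and payoff below is an invented assumption, not measured data.

```python
# Illustrative expected-value comparison between pursuing a novel finding and
# running a replication study. All numbers are invented for illustration only.

def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Career "payoff" in arbitrary units (papers, grants, tenure credit).
novel = expected_value(p_success=0.2, payoff_success=100, payoff_failure=5)
replication = expected_value(p_success=0.8, payoff_success=10, payoff_failure=2)

print(f"Novel finding: {novel:.1f}")        # 0.2*100 + 0.8*5 = 24.0
print(f"Replication:   {replication:.1f}")  # 0.8*10  + 0.2*2 = 8.4
```

Even with a far lower chance of success, the novel project dominates the replication in expected career payoff, which is exactly the privately rational, socially costly choice described above.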

How NSF Review Panels Work

At NSF, I watched this play out in every review panel. Proposals to replicate existing work faced an uphill battle. Reviewers—themselves successful researchers who got there by publishing novel findings—naturally favor creative, untested ideas over verification work.

We tried various fixes. Some programs explicitly funded replication studies. Some review criteria emphasized robustness over novelty. But the core incentive remained: breakthrough science gets you the next grant; careful verification doesn’t.

The problem runs deeper than any single agency. Universities want prestigious publications. Journals want citations. Researchers want tenure. Nobody’s optimization function includes “produces reliable knowledge that someone else can build on.”

The Information Market Is Broken

Even when researchers try to replicate, they’re working with incomplete information. Methods sections in papers are sanitized versions of what actually happened in the lab. “Cells were cultured under standard conditions” means something different in every lab. One researcher’s gentle mixing is another’s vigorous shaking.

This information asymmetry makes replication attempts inherently inefficient. You’re trying to reproduce a result while missing critical details that the original researcher might not even realize mattered.

The Time Horizon Problem

NSF grants run 3-5 years. Tenure clocks run 6-7 years. But scientific truth emerges over decades. We’re optimizing for the wrong timescale.

During my time at NSF, I saw brilliant researchers make pragmatic choices: publish something surprising now (even if it might not hold up) rather than spend two years carefully verifying it. That’s not a moral failing—it’s responding rationally to the incentives we created.

What Would Actually Fix This

Make replication profitable:

  • Count verification studies equally with novel findings in grant review and tenure decisions
  • Fund researchers whose job is rigorous replication—make it a legitimate career path
  • Require data and detailed methods sharing as a funding condition, not an afterthought
  • Make failed replications as publishable as successful ones

The challenge isn’t technical. It’s institutional. We designed a market that overproduces flashy results and underproduces reliable knowledge. Until we fix the incentives, we’ll keep getting exactly what we’re paying for.

How America Built Its Science Foundation Before the War Changed Everything


Most people think America’s scientific dominance began with the Manhattan Project or the space race. That’s not wrong, but it misses the real story. By the time World War II arrived, we’d already spent decades quietly building the infrastructure that would make those massive wartime projects possible.

The foundation was laid much earlier, and in ways that might surprise you. What’s more surprising is how close that foundation came to crumbling—and what we nearly lost along the way.

The Land-Grant Revolution

The story really starts in 1862 with the Morrill Act—arguably the most important piece of science policy legislation most Americans have never heard of. While the Civil War was tearing the country apart, Congress was simultaneously creating a network of universities designed to teach “agriculture and the mechanic arts.”

This wasn’t just about farming. The land-grant universities were America’s first systematic attempt to connect higher education with practical problem-solving. Schools like Cornell, Penn State, and the University of California weren’t just teaching Latin and philosophy—they were training engineers, studying crop diseases, and developing new manufacturing techniques.

But here’s what’s remarkable: this almost didn’t happen. The 1857 version of Morrill’s bill faced heavy opposition from Southern legislators, who viewed it as federal overreach, and from Western states that objected to the population-based allocation formula. It passed both houses by narrow margins, only to be vetoed by President Buchanan. The legislation succeeded in 1862 primarily because Southern opponents had left Congress to join the Confederacy.

Private Money Fills a Critical Gap

What’s fascinating—and telling—is how much of early American scientific investment came from private philanthropy rather than government funding. The industrial fortunes of the late 1800s flowed into research, but this created a system entirely dependent on individual wealth and personal interest.

The Carnegie Institution of Washington, established in 1902, essentially functioned as America’s first NSF decades before the actual NSF existed. Andrew Carnegie’s $10 million endowment was enormous—equal to Harvard’s entire endowment and vastly more than what all American universities spent on basic research combined. The Rockefeller Foundation transformed medical education and research on a similar scale.

But imagine if Carnegie had been less interested in science, or if the robber baron fortunes had flowed entirely into art collections and European estates instead. This mixed ecosystem worked, but it was inherently unstable. When economic conditions tightened, private funding could vanish. When wealthy patrons died, research priorities shifted with their successors’ interests.

Corporate Labs: Innovation with Built-In Vulnerabilities

By the 1920s, major corporations were establishing research laboratories. General Electric’s lab, founded in 1900 as the first industrial research facility in America, became the model. Bell Labs, created in 1925 through the consolidation of AT&T and Western Electric research, would later become legendary for discoveries that shaped the modern world.

These corporate labs solved an important problem, bridging the gap between scientific discovery and commercial application. But they also created troubling dependencies. Research priorities followed profit potential, not necessarily national needs. Breakthrough discoveries in fundamental physics might be abandoned if they didn’t promise immediate commercial returns.

More concerning, these labs were vulnerable to economic cycles. During the Great Depression, even well-established research programs faced significant budget cuts and staffing reductions.

Government Stays Reluctantly on the Sidelines

Through all of this, the federal government remained a hesitant, minor player. The National Institute of Health, created in 1930 with a modest $750,000 for building construction, was one of the few exceptions—and even then, the federal government rarely funded medical research outside its own laboratories before 1938.

Most university science departments survived on whatever they could patch together from donors, industry partnerships, and minimal federal grants. The system worked, but precariously. During the Depression, university budgets were slashed, enrollment dropped, and research programs had to be scaled back or eliminated. The National Academy of Sciences saw its operating and maintenance funds drop by more than 15 percent each year during the early 1930s.

The Foundation That Held—Barely

By 1940, America had assembled what looked like a robust scientific infrastructure, but it was actually a precarious arrangement held together by fortunate timing and individual initiative: strong universities teaching practical skills, generous private funding that could shift with economic conditions, corporate labs vulnerable to business cycles, and minimal federal involvement.

When the war suddenly demanded massive scientific mobilization, the infrastructure held together long enough to support the Manhattan Project, radar development, and other crucial innovations. But it was a closer thing than most people realize. The Depression had already demonstrated the system’s vulnerabilities—funding cuts, program reductions, and the constant uncertainty that came with depending on private largesse.

What We Nearly Lost

Looking back, what’s remarkable isn’t just how much America invested in science before 1940, but how easily much of it could have been lost to economic downturns, shifting private interests, or political opposition. That decentralized mix of public and private initiatives created innovation capacity, but it also created significant vulnerabilities.

The war didn’t just expand American science—it revealed how unstable our previous funding system had been and demonstrated what sustained, coordinated investment could accomplish. The scientific breakthroughs that defined the next half-century emerged not from the patchwork system of the 1930s, but from the sustained federal commitment that followed.

Today’s scientific leadership isn’t an accident of American ingenuity. It’s the direct result of lessons learned from a system that worked despite its fragility—and the decision to build something more reliable in its place. The question is whether we remember why that change was necessary, and what we might lose if we return to depending on unstable, decentralized funding for our most critical research needs.

Post lunch conversation with a colleague: trust in science

Yesterday, I had lunch with a colleague at a favorite BBQ spot in Arlington. Both of us work in science communication, so naturally our conversation drifted to the question that’s been nagging at many of us: why has public trust in scientific institutions declined in recent years? By the time we finished our (actually healthy) food, we’d both come to the same conclusion: the current way scientists communicate with the public might be contributing to the problem.

From vaccine hesitancy to questions about research reliability, the relationship between science and society has grown more complex. To understand this dynamic, we need to examine not only what people think about science but also how different cultures approach the validation of knowledge itself.

Harvard scholar Sheila Jasanoff offers valuable insights through her concept of “civic epistemologies”—the cultural practices societies use to test and apply knowledge in public decision-making. These practices vary significantly across nations and help explain why scientific controversies unfold differently in different places.

American Approaches to Knowledge Validation

Jasanoff’s research identifies distinctive features of how Americans evaluate scientific claims:

Public Challenge: Americans tend to trust knowledge that has withstood open debate and questioning. This reflects legal traditions where competing arguments help reveal the truth.

Community Voice: There’s a strong expectation that affected groups should participate in discussions about scientific evidence that impacts them, particularly in policy contexts.

Open Access: Citizens expect transparency in how conclusions are reached, including access to underlying data and reasoning processes.

Multiple Perspectives: Rather than relying on single authoritative sources, Americans prefer hearing from various independent institutions and experts.

How This Shapes Science Communication

These cultural expectations help explain some recent communication challenges. When public health recommendations changed during the COVID-19 pandemic, this appeared to violate expectations for thorough prior testing of ideas. Similarly, when social platforms restricted specific discussions, this conflicted with preferences for open debate over gatekeeping.

In scientific fields like neuroscience, these dynamics have actually driven positive reforms. When research reliability issues emerged, the American response emphasized transparency solutions: open data sharing, study preregistration, and public peer review platforms. Major funding agencies now require data management plans that promote accountability.

Interestingly, other countries have addressed similar scientific quality concerns in different ways. European approaches have relied more on institutional reforms and expert committees, while American solutions have emphasized broader participation and transparent processes.

Digital Platforms and Knowledge

Online platforms have both satisfied and complicated American expectations. They provide the transparency and diverse voices people want, but the sheer volume of information makes careful evaluation difficult. Platforms like PubPeer enable post-publication scientific review that aligns with cultural preferences for ongoing scrutiny; however, the same openness can also amplify misleading information.

Building Better Science Communication

Understanding these cultural patterns suggests more effective approaches:

Acknowledge Uncertainty: Present science as an evolving process rather than a collection of final answers. This matches realistic expectations about how knowledge develops.

Create Meaningful Participation: Include affected communities in research priority-setting and policy discussions, following successful models in patient advocacy and environmental research.

Increase Transparency: Share reasoning processes and data openly. Open science practices align well with cultural expectations for accountability.

Recognize Broader Concerns: Understand that skepticism often reflects deeper questions about who participates in knowledge creation and whose interests are served.

Moving Forward

Public skepticism toward science isn’t simply a matter of misunderstanding—it often reflects tensions between scientific institutions and cultural expectations about legitimate authority. Rather than dismissing these expectations, we might develop communication approaches that honor both scientific rigor and democratic values.

The goal isn’t eliminating all skepticism, which serves essential functions in healthy societies. Instead, it is to channel critical thinking in ways that strengthen our collective ability to address complex challenges that require scientific insight.

Zero-based budgeting experiment: US STEM


At research universities, zero-based budgeting is pretty rare. It means starting from zero expenditures and then justifying each budget line to build up the annual budget. It is frowned upon for long-term R&D projects for the obvious reason that it is very hard to predict a discovery that could be exploited to produce a measurable outcome.
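For readers unfamiliar with the mechanics, the sketch below contrasts incremental budgeting with zero-based budgeting for a hypothetical research unit; the line items, amounts, and growth rate are invented purely for illustration.

```python
# Minimal sketch: incremental vs. zero-based budgeting for a hypothetical research unit.
# All line items and amounts are invented for illustration.

last_year = {"personnel": 500_000, "equipment": 120_000, "travel": 30_000}

def incremental_budget(prior, growth=0.03):
    """Start from last year's numbers and apply an across-the-board increment."""
    return {item: round(amount * (1 + growth)) for item, amount in prior.items()}

def zero_based_budget(justified_requests):
    """Start from zero: only lines with an approved justification get funded."""
    return {item: req["amount"] for item, req in justified_requests.items()
            if req["justified"]}

requests = {
    "personnel": {"amount": 520_000, "justified": True},
    "equipment": {"amount": 80_000,  "justified": True},
    "travel":    {"amount": 30_000,  "justified": False},  # no case made this year
}

print(incremental_budget(last_year))   # every line grows, scrutinized or not
print(zero_based_budget(requests))     # only justified lines survive
```

The point of the contrast: under the incremental model every line inherits last year’s justification, while under the zero-based model a line that cannot make its case simply disappears.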

Nevertheless, it’s worth considering using the process to optimize the entire US STEM/Biomedical enterprise from scratch.

Why Research Resists Zero-Based Budgeting

The resistance to zero-based budgeting in research environments stems from legitimate concerns. Academic institutions seldom adopt a zero-based model because, as noted above, scientific discovery is inherently unpredictable; zero-based budgets demand a great deal of time and labor from units and university administrators to prepare; and the model can seriously encumber long-term planning.

Research requires substantial upfront investments in equipment, facilities, and human capital that only pay dividends over extended periods. The peer review system, while imperfect, has evolved as a way to allocate resources based on scientific merit rather than easily quantifiable metrics.

The Case for a National Reset

Despite these concerns, there’s a compelling argument for applying zero-based budgeting principles to the broader American STEM enterprise. Not at the individual project level, but at the systemic level—questioning fundamental assumptions about how we organize, fund, and conduct research.

Addressing Systemic Inefficiencies

Our current research ecosystem has evolved organically over decades, creating layers of bureaucracy, redundant administrative structures, and misaligned incentives. Universities compete for the same federal funding while maintaining parallel administrative infrastructures. A zero-based approach would force examination of whether these patterns serve our ultimate goals of scientific progress and national competitiveness.

Responding to Global Competition

The US still retains a healthy lead, spending $806 billion on R&D, public and private combined, in 2021, but China is rapidly closing the gap. The Chinese government recently announced a massive $52 billion investment in research and development for 2024, a 10% surge over the previous year, while the U.S. cut total investment in research and development for fiscal 2024 by 2.7%.

China has significantly increased its R&D investment, contributing over 24 percent of total global funding according to Congressional Research Service data. The U.S. total remains strong, but the same CRS data show that its share of total global expenditure dropped to just under 31 percent in 2020, down from nearly 40 percent in 2000.

Realigning with National Priorities

AI, pandemic preparedness, cybersecurity, and advanced manufacturing require coordinated, interdisciplinary approaches that don’t always fit neatly into existing departmental structures or funding categories. Starting from zero would allow us to design funding mechanisms that better align with strategic priorities while preserving fundamental research.

A Practical Framework

Implementing zero-based budgeting for the STEM enterprise could be approached systematically:

Phase 1: Comprehensive Mapping Begin by mapping the current research ecosystem—funding flows, personnel, infrastructure, outputs, and outcomes. This alone would be valuable, as we currently lack a complete picture of resource allocation.

Phase 2: Goal Setting Involve stakeholders in defining desired outcomes. What should American STEM research accomplish in the next 10-20 years? How do we balance basic research with applied research?

Phase 3: Pilot Implementation Rather than overhauling everything at once, implement zero-based approaches in specific domains or regions to identify what works while minimizing disruption.

Potential Benefits and Risks

A thoughtful application could yield improved efficiency by eliminating redundant processes, better alignment with national priorities, enhanced collaboration across institutional silos, and increased agility to respond to emerging threats.

However, any major reform involves significant risks. There’s danger of disrupting productive research programs, alienating talented researchers, or creating unintended bureaucratic complications. The political and logistical challenges would be immense.

Moreover, China has now surpassed the US in “STEM talent production, research publications, patents, and knowledge- and technology-intensive manufacturing,” suggesting that while spending matters, other factors are equally important.

Preserving What Works

Zero-based budgeting shouldn’t mean discarding what has made American research successful. The peer review system has generally identified quality research. The tradition of investigator-initiated research has fostered creativity and serendipitous discoveries. The partnership between universities, government, and industry has created a dynamic innovation ecosystem.

The goal isn’t elimination but examination of whether these elements are being implemented most effectively.

Conclusion

The idea of applying zero-based budgeting to American STEM research deserves serious consideration. By questioning assumptions, eliminating inefficiencies, and realigning priorities, we can create a research enterprise better positioned to tackle 21st-century challenges.

The process itself, a careful examination of how we conduct and fund research, could be as valuable as the specific reforms. In an era when China, based on current enrollment patterns, is projected to produce more than 77,000 STEM PhD graduates per year by 2025, nearly double the roughly 40,000 expected in the United States, the ability to thoughtfully reimagine our institutions may be our greatest asset.

The question isn’t whether we can afford to undertake such a comprehensive review. The question is whether we can afford not to.