When Agencies Collaborate: What EEID Teaches Us About Pandemic Preparedness

The research team moved carefully through the forest canopy platform at dusk, nets ready. In Gabon and the Republic of Congo during the mid-2000s, international ecologists were hunting for the reservoir host of Ebola virus. They targeted fruit bat colonies—hammer-headed bats, Franquet’s epauletted bats, little collared fruit bats—collecting blood samples and oral swabs.

By December 2005, they had their answer, published in Nature. They’d found Ebola RNA and antibodies in three species of fruit bats across Central Africa. For years, scientists had known Ebola emerged periodically, but couldn’t identify where the virus persisted between human epidemics. This research provided the answer: fruit bats, widely distributed and increasingly in contact with humans as deforestation pushed people deeper into forests.

That discovery triggered a wave of follow-up research, much of it funded through the Ecology and Evolution of Infectious Diseases program—EEID—a joint NSF-NIH-USDA initiative I would later help oversee. EEID-funded teams documented how human activities created spillover opportunities: bushmeat hunting, agricultural expansion into bat habitat, mining operations bringing workers into forests. They identified cultural practices that facilitated transmission: burial traditions, preparation of bushmeat, children playing with dead animals. They built mathematical models of how Ebola moved from bats to humans and then through human populations. The science showed where Ebola lived, how it spilled over, and which human behaviors created risk.
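Those transmission models deserve a word of explanation. Many are compartmental at heart: divide the population into susceptible, infectious, and recovered classes, add a term for infections acquired directly from wildlife, and follow the flows. As a minimal sketch only — not any specific EEID-funded model, and with every parameter value a hypothetical placeholder — the core idea looks like this in Python:

```python
# Minimal sketch of a spillover-plus-transmission model (SIR dynamics with
# a constant zoonotic spillover term). Illustrative only: the structure is
# generic, and every parameter value below is a made-up placeholder.
import numpy as np

def simulate(days=365, N=1_000_000, beta=0.25, gamma=0.1, spillover=1e-6):
    """Daily-step SIR dynamics plus bat-to-human spillover.

    beta      : human-to-human transmission rate per day (hypothetical)
    gamma     : recovery/removal rate per day (hypothetical)
    spillover : per-capita daily risk of infection from wildlife (hypothetical)
    """
    S, I, R = float(N), 0.0, 0.0
    prevalence = []
    for _ in range(days):
        new_human = beta * S * I / N   # infections from other humans
        new_spill = spillover * S      # infections acquired from bats
        removed = gamma * I
        S -= new_human + new_spill
        I += new_human + new_spill - removed
        R += removed
        prevalence.append(I)
    return np.array(prevalence)

curve = simulate()
print(f"Peak prevalence: {curve.max():,.0f} around day {curve.argmax()}")
```

Even a toy like this makes the policy point: when human-to-human transmission is self-sustaining, a trickle of spillover is enough to ignite an epidemic, so reducing human-bat contact and catching the first cases early are where prevention has the most leverage.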

Yet nine years after that initial Nature paper—after years of EEID-funded research mapping Ebola ecology—the virus emerged in Guinea in late 2013 and was identified in March 2014. A two-year-old boy, likely exposed through contact with bats, became patient zero. Within months, the outbreak had spread to Liberia and Sierra Leone. By 2016, more than 28,000 people were infected and 11,000 died. The economic impact exceeded $2.8 billion.

I was leading NSF’s Biological Sciences Directorate at the time, overseeing NSF’s role in EEID. We had funded years of follow-up research. We knew fruit bats harbored Ebola. We had models for predicting transmission. We had mapped high-risk regions. And yet 11,000 people died anyway. All of this foreshadowed what would happen later, on a much larger scale, with SARS-CoV-2.

Here is the uncomfortable question I’ve been wrestling with ever since: If we funded the right science and had years of warning, why were we not better prepared?

What EEID Was Supposed to Do

EEID launched in 2000 because infectious disease ecology fell between agency missions. NSF supported ecology but wasn’t focused on disease. NIH funded disease research but wasn’t equipped for field ecology. USDA cared about agricultural diseases but not the broader ecological context. The program brought all three together: NSF’s ecological expertise, NIH’s disease knowledge, and USDA’s understanding of agricultural-wildlife interfaces.

The administrative structure was elegant on paper. All proposals submitted through NSF underwent joint review by all three agencies, and then any agency could fund meritorious proposals based on mission fit. For Ebola research, this meant NSF might fund the bat ecology, NIH’s Fogarty International Center might support the human health surveillance component, and USDA might fund work on bushmeat practices—different pieces of the same puzzle, coordinated through a single program.

The program typically made 6-10 awards per year, totaling $15-25 million across agencies. Not huge money, but enough to support interdisciplinary teams working across continents. And it worked—EEID funded excellent science at the intersection of ecology and disease that no single agency could have supported alone.

Why Interagency Collaboration Is Genuinely Hard

When I arrived at NSF in 2014 with the outbreak at its peak, I inherited EEID oversight and quickly discovered that elegant-on-paper doesn’t mean easy-in-practice. The deepest challenges weren’t administrative—they were cultural.

NSF and NIH approach science from fundamentally different starting points. NSF’s mission is discovery-driven basic research. When NSF reviewers evaluate proposals, they ask: Is this important science? Will it advance the field? NIH’s mission is health-focused and translational. NIH reviewers want to know: Will this help prevent or treat disease? What’s the public health significance?

I saw this play out in a particularly contentious panel meeting around 2016. Our panelists were reviewing a proposal on rodent-borne hantaviruses in the southwestern U.S.—excellent ecology, good epidemiology, solid modeling. The NSF reviewers loved it: beautiful natural history, important insights about how environmental variability affects transmission. The NIH reviewers were skeptical: where was the preliminary data on human infection? How would this lead to intervention?

We spent an hour debating what constituted “good preliminary data.” For NSF reviewers, the PI’s previous work establishing field sites was sufficient—it showed feasibility. NIH reviewers wanted preliminary data on the virus itself, on infection rates. They weren’t being unreasonable—they were applying NIH’s standards. But we were talking past each other.

That debate crystallized the challenge. Two agencies with different cultures had to agree on the same proposals. Sometimes it created productive tension. Sometimes it just meant frustration.

The administrative burden on investigators was worse than we acknowledged. When NIH selected a proposal for funding instead of NSF, the PI had to completely reformat everything for NIH’s system—different page limits, different budget structures, different reporting requirements. This could add 3-6 months to award start dates. Try explaining to a collaborator in Guinea why you don’t know which U.S. agency will fund your project or when you’ll actually get money.

For program officers, EEID meant constant coordination overhead—meetings to discuss priorities, coordinating review panel schedules across agencies, negotiating which agency would fund which proposals. This work wasn’t counted in official program costs, but it was real. Hours we could have spent on other portfolio management.

Despite all this friction, EEID succeeded at its core mission. It funded research that advanced both fundamental science and disease understanding. When the 2014 Ebola outbreak hit, epidemiologists reached for transmission models developed through EEID grants. The program had trained a generation of researchers in genuinely interdisciplinary work.

What the 2014 Outbreak Exposed

But here’s what haunts me: we funded the science but not the systems. By 2014, nearly a decade of research had confirmed fruit bats as Ebola reservoirs, mapped their distribution across Africa, and identified high-risk human-bat contact zones. Papers were published in top journals. And then… nothing. No one built surveillance systems in West African villages where contact with bats was common. No one established early warning networks. No one created mechanisms to translate “we found Ebola in these bats” into “we’re monitoring for spillover in Guinea.”

EEID funded research, not surveillance. That’s appropriate—it’s a research program, not an operational public health system. But there was no mechanism to bridge the gap. When EEID-funded scientists discovered important findings, those findings stayed in academic papers. They didn’t flow to CDC, didn’t trigger surveillance efforts, didn’t inform preparedness planning.

During our quarterly coordination calls with NIH and USDA program officers, the question would occasionally arise: Who’s responsible for acting on what we’re learning? If EEID research identifies high-risk pathogen reservoirs, whose job is it to establish surveillance? The answer was usually silence, then acknowledgment that it wasn’t our job—we fund research—but uncertainty about whose job it was.

The missing infrastructure was organizational, not intellectual. We knew enough to be better prepared. The problem was lack of systems to act on knowledge. No agency was responsible for translating academic research into surveillance systems. CDC focuses on domestic diseases. NIH funds research but doesn’t run operations overseas. USAID’s PREDICT program did fund surveillance but didn’t have coverage in Guinea. We had pieces of the puzzle but no mechanism to assemble them.

I remember discussions about whether EEID should become more operational—perhaps requiring funded projects to include surveillance components. The response was always that this would fundamentally change the program’s character. NSF resists mission-directed research. My former agency’s strength is supporting investigator-driven discovery. Making EEID operational would require new authorities across multiple agencies and, most importantly, substantially more funding. A research program can’t solve an operational preparedness gap.

The scale problem was obvious. At $15-25 million per year, EEID could support excellent science but not comprehensive surveillance. Think about what that would require: ongoing monitoring in multiple countries, relationships with local health systems, rapid response capacity, and laboratory infrastructure. This requires hundreds of millions annually, not tens of millions.

The timeline mismatch was equally frustrating. Research operates on slow timescales—EEID grants ran five years, and from proposal to publication might take 6-7 years. The initial bat reservoir discovery was published in 2005. If that had immediately triggered surveillance in West Africa, we’d have had nearly nine years before the 2014 outbreak. But triggering surveillance takes decisions, funding, international coordination—processes that themselves take years. By the time anyone might have acted, attention had moved elsewhere.

What This Means for Pandemic Preparedness

The most troubling insight: we knew enough to be better prepared for Ebola, and later for COVID-19, but knowledge alone wasn’t enough. EEID succeeds at advancing knowledge but can’t create surveillance systems, can’t fund operational preparedness, can’t bridge the gap between discovering threats and preventing epidemics. That gap is organizational and political, not scientific.

Should we expand EEID? More funding would support more projects, but it wouldn’t solve the fundamental problem. You could triple EEID’s budget and still have the research-to-surveillance gap. More papers about bat reservoirs don’t automatically create early warning systems. The limitation isn’t insufficient research funding—it’s absence of operational systems to act on research findings.

We need something structurally different. Here’s what I’d do:

First, create a rapid-response funding mechanism within EEID. When Ebola emerged in 2014, imagine if researchers could have gotten funding within weeks to investigate transmission dynamics and surveillance in surrounding regions, rather than waiting for the next annual competition. Model this on NSF’s RAPID program—streamlined review, modest awards ($100-200K for one year), quick deployment—but create an entirely different pocket of money for it from all the participating funders.

Second, establish formal connections between EEID and operational agencies. This is the biggest gap. Require EEID-funded researchers to submit one-page “surveillance implications” memos with final reports, which program officers share with CDC, USAID, and WHO. Better yet, have CDC or BARDA co-fund some EEID proposals with clear surveillance applications. Create visiting scholar programs where CDC epidemiologists spend time with EEID research teams and vice versa.

Third, strengthen international partnerships with genuine co-leadership. The 2014 outbreak showed the cost of inadequate surveillance infrastructure in West Africa. Expand EEID to include more disease hotspot regions—India, Brazil, Indonesia, DRC, West African nations—where foreign investigators can be lead PIs, foreign institutions receive and administer funds, and research priorities reflect host country needs. This isn’t altruism—it’s pragmatic self-interest.

The Larger Lesson

Interagency collaboration is genuinely hard—the friction I described isn’t fixable through better management. It’s inherent when bringing together organizations with different missions and cultures. EEID proves such collaboration can work and produce excellent science. But it requires sustained effort, goodwill, and tolerance for complexity.

The alternative—each agency in its silo—is worse. Infectious disease ecology requires expertise no single agency possesses. Complex problems require complex solutions. EEID demonstrated this is possible. The challenge is making it sufficient.

What haunts me is that we’re probably going to repeat the pattern. Right now, post-COVID, pandemic preparedness has political salience. But history suggests this won’t last. After the 2014-2016 Ebola outbreak, there was similar urgency. Within a few years, budgets declined and attention shifted. USAID’s PREDICT program was terminated in 2019—just months before COVID—due to budget constraints. We cut surveillance funding during a quiet period, then paid an enormous price when the next pandemic hit.

Prevention is invisible. We never know which pandemics we successfully prevented. There’s no constituency defending preparedness funding when cuts loom. That’s the structural problem we haven’t solved.

What Needs to Happen

Will we learn from EEID’s experience and build the infrastructure we need? Or will we fund the right research but lack systems to act on it—again?

The answer depends on recognizing that pandemic preparedness isn’t primarily a scientific challenge—we know enough—but an organizational and political one. Can we create structures spanning research and operations? Can we sustain funding between crises? Can we build systems robust enough to survive political leadership changes?

EEID succeeded at what a research program can do: funding excellent science that advanced understanding. The larger failure—inadequate pandemic preparedness—requires solutions at different organizational levels. But EEID’s experience provides a foundation: proof that interagency collaboration can work, that we can identify threats before they become catastrophes.

The team in Central African forests collecting bat samples did their job. They found the virus, mapped the threat, advanced our understanding. The question for the rest of us—program officers, policymakers, public health officials, citizens who fund this through taxes—is whether we’ll do our job: building systems that turn knowledge into prevention.

Science can identify threats. But preventing pandemics requires more than science. It requires sustained organizational commitment, interagency coordination, international cooperation, and political will—especially during quiet periods when threats seem distant. EEID demonstrated the scientific component is feasible.

The rest is up to us. And based on what I’ve seen, I’m not optimistic we’ll get it right before the next one hits.

How Will You Know You’ve Succeeded? A BRAIN story

August 2008: a summer day in Mountain View, California. The previous year, in 2007, the Krasnow Institute for Advanced Study, which I was leading at George Mason University, had developed a proposal to invest tons of money in figuring out how mind emerges from brains, and now I had to make the case that it deserved to be a centerpiece of a new administration’s science agenda. Three billion dollars is not a small ask, especially with the 2008 financial crisis accelerating.

Before this moment, the project had evolved organically: a kickoff meeting at the Krasnow Institute near D.C., a joint manifesto published in Science Magazine, and then follow-on events in Des Moines, Berlin and Singapore to emphasize the broader aspects of such a large neuroscience collaboration. There even had been a radio interview with Oprah.

When I flew out to Google’s Mountain View headquarters in August 2008 for the SciFoo conference, I didn’t expect to be defending the future of neuroscience over lunch. But the individual running the science transition for the Obama presidential campaign had summoned me for what he described as a “simple” conversation: defend our idea for investing $3 billion over the next decade in neuroscience with the audacious goal of explaining how “mind” emerges from “brains.” It was not the kind of meeting I was ready for.

I was nervous. As an institute director, I’d pitched for million-dollar checks. This was a whole new scale of fundraising for me. And though California was my native state, I’d never gone beyond being a student body president out there. Google’s headquarters in the summer of 2008 was an altar to Silicon Valley power.

SciFoo itself was still in its infancy then – the whole “unconference” concept felt radical and exciting, a fitting backdrop for pitching transformational science. But the Obama campaign wasn’t there for the unconventional meeting format. Google was a convenient meeting spot. And they wanted conventional answers.

I thought I made a compelling case: this investment could improve the lives of millions of patients with brain diseases. Neuroscience was on the verge of delivering cures. (I was wrong about that, but I believed it at the time.) The tools were ready. The knowledge was accumulating. We just needed the resources to put it all together.

Then I was asked the question that killed my pitch: “How will we know we have succeeded? What’s the equivalent of Kennedy’s moon landing – a clear milestone that tells us we’ve achieved what we set out to do?” You could see those astronauts come down the ladder of the lunar module. You could see that American flag on the moon. No such prospects with a large neuroscience initiative.

I had no answer.

I fumbled through some vague statements about understanding neural circuits and developing new therapies, but even as the words left my mouth, I knew they were inadequate. The moon landing worked as a political and scientific goal because it was binary: either we put a man on the moon or we didn’t. Either the flag was planted or it wasn’t.

But “explaining how mind emerges from brains”? When would we know we’d done that? What would success even look like?

The lunch ended politely. I flew back to DC convinced it had been an utter failure.

But that wasn’t the end of it. Five years later, at the beginning of Obama’s second presidential term, we began to hear news of a large initiative driven by the White House called the Brain Activity Map or BAM for short. The idea was to comprehensively map the functional activity of brains at high spatial and temporal resolution beyond that available at the time. It was like my original pitch both in scale (dollars) and in the notion that it was important to understand how mind emerges from brain function. The goal for the new BAM project was to be able to map between the activity and the brain’s emergent “mind”-like behavior, both in the healthy and pathological cases. But the BAM project trial balloon, even coming from the White House, was not an immediate slam dunk.

There was immediate push-back from large segments of the neuroscience community that felt excluded from BAM, but with a quick top-down recalibration from the White House Office of Science and Technology Policy and a whole-of-government approach that included multiple science agencies, BRAIN (Brain Research through Advancing Innovative Neurotechnologies) was born in April of 2013.

A year later, in April of 2014, I was approached to head Biological Sciences at the US National Science Foundation. When I took the job that October, I was leading a directorate with a budget of $750 million annually that supported research across the full spectrum of the life sciences – from molecular biology to ecosystems. I would also serve as NSF’s co-lead for the Obama Administration’s BRAIN Initiative—an acknowledgement of the failed pitch in Mountain View, I guess.

October 2014: sworn in and meeting with my senior management team. Now here I was, a little more than a year into BRAIN. I had gotten what I’d asked for in Mountain View. Sort of. We had the funding, we had the talent, we had review panels evaluating hundreds of proposals. But I kept thinking about the question—the one I couldn’t answer then and still struggled with now. We had built this entire apparatus for funding transformational research, yet we were asking reviewers to apply the same criteria that would have rejected Einstein’s miracle year. How do you evaluate research when you can’t articulate clear success metrics? How do you fund work that challenges fundamental assumptions when your review criteria reward preliminary data and well-defined hypotheses?

Several months later, testifying before Congress about the BRAIN project, I remember fumbling again at the direct question of when we would deliver cures for dreaded brain diseases like ALS and schizophrenia. I punted: that was an NIH problem (even though the original pitch had been about delivering revolutionary treatments). At NSF, we were about understanding the healthy brain. In fact, how could you ever understand brain disease without a deep comprehension of the non-pathological condition?

It was a reasonable bureaucratic answer. NIH does disease; NSF does basic science. Clean jurisdictional boundaries. But sitting there in that hearing room, I realized I was falling into the same trap that had seemingly doomed our pitch in 2008: asked for a clear criterion of success and a date for delivering on it, I was waffling. Only this time, I was the agent for the funder: the American taxpayer.

The truth was uncomfortable. We had launched an initiative explicitly designed to support transformational research – research that would “show us how individual brain cells and complex neural circuits interact” in ways we couldn’t yet imagine. But when it came time to evaluate proposals, we fell back on the same criteria that favored incrementalism: preliminary data, clear hypotheses, established track records, well-defined deliverables. We were asking Einstein for preliminary data on special relativity.

And we weren’t unique. This was the system. This was how peer review worked across federal science funding. We had built an elaborate apparatus designed to be fair, objective, and accountable to Congress and taxpayers. What we had built was a machine that systematically filtered out the kind of work that might transform neuroscience.

All of this was years before the “neuroscience winter,” when massive scientific misconduct was unearthed in neurodegenerative disease research, including Alzheimer’s research. But the modus operandi of BRAIN foreshadowed it.

Starting in 2022, a series of investigations revealed that some of the most influential research on Alzheimer’s disease—work that had shaped the field for nearly two decades and guided billions in research funding—was built on fabricated data. Images had been manipulated. Results had been doctored. And this work had sailed through peer review at top journals, had been cited thousands of times, and had successfully competed for grant funding year after year. The amyloid hypothesis, which this fraudulent research had bolstered, had become scientific orthodoxy not because the evidence was overwhelming, but because it fit neatly into the kind of clear, well-defined research program that review panels knew how to evaluate.

Here was the other side of the Einstein problem that I’ve mentioned in previous posts. The same system that would have rejected Einstein’s 1905 papers for lack of preliminary data and institutional support had enthusiastically funded research that looked rigorous but was fabricated. Because the fraudulent work had all the elements that peer review rewards: clear hypotheses, preliminary data, incremental progress building on established findings, well-defined success metrics. It looked like good science. It checked all the boxes.

Meanwhile, genuinely transformational work—the kind that challenges fundamental assumptions, that crosses disciplinary boundaries, that can’t provide preliminary data because the questions are too new—struggles to get funded. Not because reviewers are incompetent or malicious, but because we’ve built a system that is literally optimized to make these mistakes. We’ve created an apparatus that rewards the appearance of rigor over actual discovery, that favors consensus over challenge, that funds incrementalism and filters out transformation.

So, what’s the real function of peer review? It’s supposed to be about identifying transformative research, but I don’t think that’s the real purpose. To my mind, the real purpose of the peer review panels at NSF, and of the study sections at NIH, is to make inherently flawed funding decisions defensible—both to Congress and to the American taxpayer. The criteria (intellectual merit and broader impacts at NSF) make awarding grant dollars auditable and fair-seeming; they were never designed to identify breakthrough work.

But honestly, there’s a real dilemma here: if you gave out NSF’s annual budget based on a program officer’s feeling that “this seems promising”, you’d face legitimate questions about cronyism, waste and arbitrary decision-making. The current system’s flaws aren’t bad policy accidents; they are the price we pay for other values we also care about.

So, did the BRAIN Initiative deliver on that pitch I made in Mountain View in 2008? Did we figure out how ‘mind’ emerges from ‘brains’? In retrospect, I remain super impressed by NSF’s NeuroNex program: we got remarkable technology – better ways to record from more neurons, new imaging techniques, sophisticated tools. We trained a generation of neuroscientists. But that foundational question – the one that made the political case, the one that justified the investment – we’re not meaningfully closer to answering it. We made incremental progress on questions we already knew how to ask. Which is exactly what peer review is designed to deliver. Oh, and one other thing was produced: NIH’s parent agency, the Department of Health and Human Services, got a trademark issued on the name of the initiative itself, BRAIN.

I spent four years as NSF’s co-lead on BRAIN trying to make transformational neuroscience happen within this system. I believed in it. I still believe in federal science funding. But I’ve stopped pretending the tension doesn’t exist. The very structure that makes BRAIN funding defensible to Congress made the transformational science we promised nearly impossible to deliver.

That failed pitch at Google’s headquarters in 2008. Turns out the question was spot on; we just never answered it.

Why Transformational Science Can’t Get Funded: The Einstein Problem

Proposal declined. Insufficient institutional support. No preliminary data. Applicant lacks relevant expertise—they work in a patent office, not a research laboratory. The proposed research is too speculative and challenges well-established physical laws without adequate justification. The principal investigator is 26 years old and has no prior experience in physics.

This would have been the fate of Albert Einstein in 1905, had the NSF existed as it does today. Even with grant calls requesting ‘transformative ideas,’ an Einstein proposal would have been rejected outright. And yet that year, 1905, has been called Einstein’s miracle year. Yes, he was a patent clerk working in Bern, Switzerland, without a university affiliation. He had neither access to a laboratory nor equipment. He worked in isolation on evenings and weekends and was unknown in the physics community. Yet, despite those disadvantages, he produced four revolutionary papers on the photoelectric effect, Brownian motion, special relativity, and the famous E=mc² energy-mass equivalence.

Taken as a whole, the work was purely theoretical. There were no preliminary data. The papers challenged fundamental assumptions of the field and, as such, were highly speculative and definitively high-risk. There were no broader impacts because there were no immediate practical applications. And the work was inherently multidisciplinary, bridging mechanics, optics, and thermodynamics. Yet, the work was transformative. By modern grant standards, Einstein’s work failed every criterion.

The Modern Grant Application – A Thought Experiment

Let’s imagine Einstein’s 1905 work packaged as a current NSF proposal. What would it look like, and how would it fare in peer review?

Einstein’s Hypothetical NSF Proposal

Project Title: Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light

Principal Investigator: Albert Einstein, Technical Expert Third Class, Swiss Federal Patent Office

Institution: None (individual applicant)

Requested Duration: 3 years

Budget: $150,000 (minimal – just salary support and travel to one conference)

Project Summary

This proposal challenges the fundamental assumptions underlying Newtonian mechanics and Maxwell’s electromagnetic theory. I propose that space and time are not absolute but relative, dependent on the observer’s state of motion. This requires abandoning the concept of the luminiferous ether and reconceptualizing the relationship between matter and energy. The work will be entirely theoretical, relying on thought experiments and mathematical derivation to establish a new framework for understanding physical reality.

How NSF Review Panels Would Evaluate This

Intellectual Merit: Poor

Criterion: Does the proposed activity advance knowledge and understanding?

Panel Assessment: The proposal makes extraordinary claims without adequate preliminary data. The applicant asserts that Newtonian mechanics—the foundation of physics for over 200 years—requires fundamental revision yet provides no experimental evidence supporting this radical departure.

Specific Concerns:

Lack of Preliminary Results: The proposal contains no preliminary data demonstrating the feasibility of the approach. There are no prior publications by the applicant in peer-reviewed physics journals. The applicant references his own unpublished manuscripts, which cannot be evaluated.

Methodology Insufficient: The proposed “thought experiments” do not constitute rigorous scientific methodology. How will hypotheses be tested? What experimental validation is planned? The proposal describes mathematical derivations but provides no pathway to empirical verification. Without experimental confirmation, these remain untestable speculations.

Contradicts Established Science: The proposal challenges Newton’s laws of motion and the existence of the luminiferous ether—concepts supported by centuries of successful physics. While scientific progress requires questioning assumptions, such fundamental challenges require extraordinary evidence. The applicant provides none.

Lack of Expertise: The PI works at a patent office and has no formal research position. He has no advisor supporting this work, no collaborators at research institutions, and no track record in theoretical physics. His biosketch lists a doctorate from the University of Zurich but no subsequent research appointments or publications in relevant areas.

Representative Reviewer Comments:

Reviewer 1: “While the mathematical treatment shows some sophistication, the fundamental premise—that simultaneity is relative—contradicts basic physical intuition and has no experimental support. The proposal reads more like philosophy than physics.”

Reviewer 2: “The applicant’s treatment of the photoelectric effect proposes that light behaves as discrete particles, directly contradicting Maxwell’s well-established wave theory. This is not innovation; it’s contradiction without justification.”

Reviewer 3: “I appreciate the applicant’s ambition, but this proposal is not ready for funding. I recommend the PI establish himself at a research institution, publish preliminary findings, and gather experimental evidence before requesting support for such speculative work. Perhaps a collaboration with experimentalists at a major university would strengthen future submissions.”

Broader Impacts: Very Poor

Criterion: Does the proposed activity benefit society and achieve specific societal outcomes?

Panel Assessment: The proposal fails to articulate any concrete broader impacts. The work is purely theoretical with no clear pathway to societal benefit.

Specific Concerns:

No Clear Applications: The proposal does not explain how reconceptualizing space and time would benefit society. What problems would this solve? What technologies would it enable? The PI suggests the work is “fundamental” but provides no examples of potential applications.

No Educational Component: There is no plan for training students or postdocs. The PI works alone at a patent office, with no access to students and no institutional infrastructure for education and training.

No Outreach Plan: The proposal includes no activities to communicate findings to the public or policymakers. There is no plan for broader dissemination beyond potential publication in physics journals.

Questionable Impact Timeline: Even if the proposed theories are correct, the proposal provides no timeline for practical applications. How long until these ideas translate into societal benefit? The proposal is silent on this critical question.

Representative Reviewer Comments:

Reviewer 1: “The broader impacts section is essentially non-existent. The PI states that ‘fundamental understanding of nature has intrinsic value,’ but this does not meet NSF’s requirement for concrete societal outcomes.”

Reviewer 2: “I cannot envision how this work, even if successful, would lead to practical applications within a reasonable timeframe. The proposal needs to articulate a clear pathway from theory to impact.”

Reviewer 3: “NSF has limited resources and must prioritize research with demonstrable benefits to society. This proposal does not make that case.”

Panel Summary and Recommendation

Intellectual Merit Rating: Poor
Broader Impacts Rating: Very Poor

Overall Assessment: While the panel appreciates the PI’s creativity and mathematical ability, the proposal is highly speculative, lacks preliminary data, contradicts established physical laws without sufficient justification, and fails to articulate broader impacts. The PI’s lack of institutional affiliation and research track record raises concerns about feasibility.

The panel notes that the PI appears talented and encourages resubmission after:

  1. Establishing an independent position at a research institution
  2. Publishing preliminary findings in peer-reviewed journals
  3. Developing collaborations with experimental physicists
  4. Articulating a clearer pathway to practical applications
  5. Demonstrating broader impacts through education and outreach

Recommendation: Decline

Panel Consensus: Not competitive for funding in the current cycle. The proposal would need substantial revision and preliminary results before it could be considered favorably.

The Summary Statement Einstein Would Receive

Dear Dr. Einstein,

Thank you for your submission to the National Science Foundation. Unfortunately, your proposal, “Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light,” was not recommended for funding.

The panel recognized your ambition and mathematical capabilities but identified several concerns that prevented a favorable recommendation:

– Lack of preliminary data supporting the feasibility of your approach
– Insufficient experimental validation of your theoretical claims
– Absence of institutional support and research infrastructure
– Inadequate articulation of broader impacts and societal benefits

We encourage you to address these concerns and consider resubmission in a future cycle. You may wish to establish collaborations with experimentalists and develop a clearer pathway from theory to application.

We appreciate your interest in NSF funding and wish you success in your future endeavors.

Sincerely,
NSF Program Officer

And that would be it. Einstein’s miracle year—four papers that transformed physics and laid the groundwork for quantum mechanics, nuclear energy, GPS satellites, and our modern understanding of the cosmos—would have died in peer review, never funded, never attempted.

The system would have protected us from wasting taxpayer dollars on such speculation. It would have worked exactly as designed.

The Preliminary Data Paradox

The contemporary grant review process implicitly expects foundational work in transformative science to present preliminary data, despite the fact that truly groundbreaking ideas often do not originate from such tangible evidence but instead evolve through thought experiments and mathematical derivation, as Einstein’s did. This unrealistic expectation stifles innovation at its core – the process essentially forces researchers like Einstein to abandon pure theoretical exploration and confines them to a narrow experimental framework, where they cannot freely challenge existing paradigms, even when work with no immediate empirical validation promises to fundamentally revolutionize our understanding.

The Risk-Aversion Problem

Often, in grant reviews, I see a very junior reviewer criticize work as being too risky—dooming the proposal to failure—while I can simultaneously sense their admiration for the promise and transformative nature of the work. The conservative, risk-averse mentality of modern review panels is deeply rooted in a scientific culture that values incremental advances over speculative leaps – a bias born of career incentives, since funding decisions can make or break a professional trajectory. Reviewers are reluctant to back proposals like Einstein’s because such work invites controversy, may not align with their own research interests, and carries a real risk of failure – a reflection of how science within academic institutions has traditionally advanced through evolutionary rather than revolutionary processes.

The Credentials Catch-22

To secure funding in today’s scientific landscape, one typically needs an institutional affiliation and an impressive publication record – a catch-22 in which groundbreaking innovators with no formal backing or prior track record find it nearly impossible to gain reviewers’ trust. This requirement discriminates against fresh perspectives from people like Einstein, who worked outside established institutions and lacked access to the mentorship typically deemed necessary for academic recognition – a stark contrast with how outsider thinkers with unconventional backgrounds have historically transformed science.

The Short-Term Timeline Problem

Einstein developed special relativity over years with no milestones, no quarterly reports, no renewals. How would he answer, ‘What will you accomplish in Year 2?’ The funding cycles set by major grant agencies – NSF’s typical three to five years for regular grants, NIH’s maximum of five – do not accommodate the long, time-intensive gestation of foundational theories. Such timelines impose an unfair constraint on researchers like Einstein, whose transformative ideas did not unfold against strict milestones but in an unconstrained fashion – showcasing the incompatibility of this model with truly revolutionary discoveries, for which a linear progression is unrealistic and even counterproductive.

The Impact Statement Trap

Requirements to demonstrate immediate “broader impacts” or societal benefits pose significant obstacles to transformative proposals whose implications reach far beyond any direct application – an aspect Einstein’s work exemplifies best, given its foundational role in advancing physics. The trap closes when reviewers, fearing potential misuse of speculative science or unable to perceive future benefits, force proposals into a mold where immediate practical impact takes precedence over visionary contribution, further marginalizing transformative studies that could unlock new dimensions across fields.

The Interdisciplinary Gap

The inherent disciplinarity of current funding schemes disconnects them from the interdisciplinary essence of revolutionary proposals like Einstein’s – transformative work frequently transcends conventional academic boundaries by merging concepts across multiple fields. The result is exclusion, not only on the basis of institutional affiliation but also because such work challenges compartmentalized funding models that struggle with its non-linear, cross-disciplinary nature – a significant obstacle for proposals that are inherently interdisciplinary yet unable to fit neatly within existing program structures or reviewer expertise.

The hypothetical funding scenarios for transformational science, as presented through the lens of Albert Einstein’s groundbreaking work, illustrate the inherent challenges faced by revolutionary ideas. To further highlight this problem, let’s take a look at other seminal discoveries that may have been overlooked or deemed unworthy of support under current grant review criteria:

Copernicus’ Heliocentric Model: In a contemporary setting, Copernicus’ heliocentric model might face skepticism due to its challenge to the widely accepted geocentric view of the universe. Lacking preliminary data and facing resistance from established religious beliefs, his proposal would likely be rejected under modern grant review criteria, despite its ultimate validation through observation and mathematical proof.

Gregor Mendel’s Pea Plant Experiments: The foundation of modern genetics was laid by Mendel’s pea plant experiments, yet his work remained largely unnoticed for decades after its initial publication. A grant reviewer in 1863 would likely have dismissed Mendel’s findings as too speculative and without immediate practical applications, thereby overlooking the fundamental insights he provided about heredity and genetic inheritance.

mRNA Vaccines: Katalin Karikó spent decades struggling to fund mRNA therapeutic research. Too risky. Too speculative. No clear applications. Penn demoted her. NIH rejected her grants. Reviewers wanted proof that mRNA could work as a therapeutic platform, but without funding, she couldn’t generate that proof. Then COVID-19 hit, and mRNA vaccines saved millions of lives. The technology that couldn’t get funded became one of the most important medical breakthroughs of the century.

Why does all of this matter now? The evidence is mounting that American science is at an inflection point. The rate of truly disruptive discoveries—those that reshape fields rather than incrementally advance them—has been declining for decades, even as scientific output has grown. Both NSF and NIH leadership recognize this troubling trend.

This innovation crisis manifests in the problems we cannot solve. Cancer and Alzheimer’s have resisted decades of intensive research. AI alignment and safety remain fundamentally unsolved as we deploy increasingly powerful systems. We haven’t returned to the moon in over 50 years. In my own field of neuroscience, incremental progress has failed to produce treatments for the diseases that devastate millions of families.

These failures point to a deeper problem: we’ve optimized our funding system for incremental advances, not transformational breakthroughs. Making matters worse, we’re losing ground internationally. China’s funding models allow longer timelines and embrace higher risk. European ERC grants support more adventurous research. Many of our best researchers now weigh opportunities overseas or in industry, where they can pursue riskier ideas with greater freedom.

What Needs to Change

Fixing this requires fundamental changes at multiple levels—from how we structure programs to how we evaluate proposals to how we support unconventional researchers.

Create separate funding streams for high-risk research. NSF and NIH need more programs that emulate DARPA’s high-risk, high-reward model. These programs should be insulated from traditional grant review: no preliminary data required, longer timelines (10+ years), and peer review conducted by scientists who have themselves taken major risks and succeeded. I propose that 10 percent of each agency’s budget be set aside for “Einstein Grants”—awards that deliberately bet against the status quo. Judge proposals on originality and potential impact, not feasibility and preliminary data. Accept that most will fail, but the few that succeed will be transformational.

Protect exploratory research within traditional programs. Even standard grant programs should allow pivots when researchers discover unexpected directions. We should fund people with track records of insight, not just projects with detailed timelines. Judge proposals on the quality of thinking, not the completeness of deliverables.

Reform peer review processes. The current system needs three critical changes. First, separate review tracks for incremental versus transformational proposals—they require fundamentally different evaluation criteria. Second, don’t let a single negative review kill bold ideas; if three reviewers are enthusiastic and one is skeptical, fund it. Third, value originality over feasibility. The most transformational ideas often sound impossible until someone proves otherwise.
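To make the second change concrete, here is a toy illustration in Python — entirely my own construction, not any agency’s actual scoring rule — of how panel score aggregation might shift from veto-style averaging to enthusiasm-weighted scoring:

```python
# Toy illustration (my own construction, not any agency's actual rule) of
# two ways to aggregate reviewer scores on a 1-5 scale (5 = outstanding).

def mean_score(scores):
    """Status quo: every score counts equally, so one skeptic sinks a bold idea."""
    return sum(scores) / len(scores)

def enthusiasm_score(scores, k=3):
    """Alternative: judge by the k most enthusiastic reviews, tolerating one veto."""
    return sum(sorted(scores, reverse=True)[:k]) / k

bold_proposal = [5, 5, 4, 1]   # three champions, one hard skeptic
safe_proposal = [4, 4, 4, 4]   # uniformly solid, unexciting

for name, s in [("bold", bold_proposal), ("safe", safe_proposal)]:
    print(f"{name}: mean={mean_score(s):.2f}, enthusiasm={enthusiasm_score(s):.2f}")
# bold: mean=3.75, enthusiasm=4.67
# safe: mean=4.00, enthusiasm=4.00
```

Under averaging, the skeptic’s 1 drops the bold proposal below the safe one; judging by the three most enthusiastic reviews reverses the ranking. The point is not this particular formula; it is that the aggregation rule itself encodes a risk tolerance.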

Support alternative career paths. We should fund more researchers outside traditional academic institutions and recognize that the best science doesn’t always emerge from R1 universities. Explicitly value interdisciplinary training and create flexible career paths that don’t punish researchers who take time to develop unconventional ideas. Track where our most creative researchers go when they leave academia—if we’re consistently losing them to industry or foreign institutions, that’s a failure signal we must heed.

Acknowledge the challenge ahead. These reforms require sustained political will across multiple administrations and consistent support from Congress. They demand patience—accepting that transformational breakthroughs can’t be scheduled or guaranteed. But the alternative is clear: we continue optimizing for incremental progress while the fundamental problems remain unsolved and our international competitors embrace the risk we’ve abandoned.

The choice before us is stark. We can optimize the current system for productivity—incremental papers, measurable progress—or we can create space for transformative discovery. We cannot have both with the same funding mechanisms.

The cost of inaction is clear: we will miss the next Einstein, fall further behind in fundamental discovery, watch science become a bureaucratic exercise, and lose what made American science into a powerhouse of discovery.

This requires action at every level. Scientists must advocate for reform and be willing to champion risky proposals. Program officers must have the courage to fund work that reviewers call too speculative. Policymakers must create new funding models and resist the temptation to demand near-term results. The public must understand that breakthrough science looks different from incremental progress—it’s messy, unpredictable, and often wrong before it’s right.

In 1905, Einstein changed our understanding of the universe while working in a patent office with no grant funding. Today, our funding system would never have let him try. We need to fix that.

Jasons ordered to close up shop

This is an interesting development. The Jasons Group is an elite cadre of academics who have conducted research studies for the DOD on a variety of topics over the last 60 years or so. More recently, NSF has been interested in hiring the Jasons to look at the increasingly challenging climate for international collaborations between US scientists and their foreign counterparts (something that I have written a bit about). Now comes this news that the Jasons contract with DOD is to be terminated. Given the Administration’s views on international collaborations of any kind, I wonder whether the two things are related.

Mid-term election: science implications I

Most of the results of the mid-term election are now in and can be reviewed online. Jeff Mervis at SCIENCE has a nice summary of what the changes in the House mean, here. My own sense is that with Eddie Bernice Johnson (D-TX) as the likely chair for House Science, the tenor of that Committee’s relationship with the non-biomedical US science R&D agencies is going to improve significantly: specifically with regard to Climate Change, and more generally with regard to a less adversarial oversight role. I think that’s probably a good thing.

NASA and probably also NSF lost a key advocate in John Culberson (R-TX) as chair of CJS, the appropriations subcommittee responsible for the two agencies. On the other hand, NASA will probably be able to finesse the timing of when they send a probe to Europa, and NSF’s contacts with Chinese science may be a bit less fettered (although the view from the White House is still pretty hawkish).

Barbara Comstock’s loss in Virginia is complex. While she could be a thorn in the side of NSF (e.g., over NEON), she was extremely supportive of the DC metro area federal workforce, and this benefited science agencies that depend on expert staff to keep the wheels moving.

My sense is that NIH is still coming out of this smelling like a rose. A more conservative Senate may put the brakes on some hot-button research topics, but in general, I am pretty optimistic about the biomedical sector.

One proposal per year…

I’m hearing a lot about NSF BIO’s new policy of one proposal per year for each Principal Investigator. In general, I’m hearing complaints from more senior investigators and positive interest from younger ones. This is somewhat counter-intuitive to me, since I’d expect junior PIs to be quite anxious to get as many proposals as possible in within the window of their tenure clock. But I suppose they also see the new policy as potentially reducing competition from the old fogies (an aside: this is the same logic as that of those who rejoice when NSF or NIH have funding downturns, because they see those as driving out the competition).

In any case, I’m agnostic about this. It is certainly good that NSF is discouraging the recycling of failed proposals. I find it annoying that I can only be PI on one proposal for the coming year—although it will incentivize me to make it as excellent as possible. I do think the rather negative report on this new policy in SCIENCE was insufficiently nuanced, and I would be happy to discuss it with the reporter.

The latest from NEON

NEON, the National Ecological Observatory Network, is a major research instrumentation asset that the NSF has built for scientists investigating how the environment and ecosystems interact at a continental scale. Here is the latest from Observatory Director and Chief Scientist Sharon Collinge. It’s really good to see this project coming to successful fruition.

There’s no photo credit on the image because it’s my photo; I took it at the NEON tower at Harvard Forest in central Massachusetts. Among the many data products being produced, some of the most exciting are carbon flux measurements using the eddy-flux methodology. These are important because they provide a window into an ecosystem as it essentially breathes, just like we do. And that has enormous implications for climate change.
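At its core, the eddy-flux (eddy covariance) calculation is simple: the net CO₂ flux is the time-averaged covariance between fluctuations in vertical wind speed and fluctuations in CO₂ density, both sampled at high frequency on the tower. Here is a minimal sketch in Python with synthetic numbers; a real pipeline like NEON’s adds coordinate rotation, density (WPL) corrections, despiking, and gap-filling, none of which appears here:

```python
# Minimal sketch of an eddy covariance flux calculation. The flux is the
# covariance of vertical-wind fluctuations (w') and CO2-density
# fluctuations (c') over an averaging period. All numbers are synthetic
# placeholders; the corrections a real pipeline applies are omitted.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 30-minute record sampled at 10 Hz (hypothetical values)
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)                  # vertical wind speed, m/s
c = 16.0 - 0.5 * w + rng.normal(0, 0.2, n)   # CO2 density, mmol/m^3

# Reynolds decomposition: subtract the period means to get fluctuations
w_prime = w - w.mean()
c_prime = c - c.mean()

flux = np.mean(w_prime * c_prime)  # mmol CO2 per m^2 per s
print(f"CO2 flux: {flux:.3f} mmol m^-2 s^-1 (negative means net uptake)")
```

A negative covariance (updrafts carrying less CO₂ than downdrafts) means the forest is taking carbon in, which is exactly the “breathing” the tower watches, half hour by half hour.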

The location of this particular NEON tower (one of many across the United States) is particularly interesting because there is also a very long time series (25 years or so) of such measurements produced by the Ameriflux Network. If NEON can take advantage of these older measurements in a way that calibrates rigorously between the two systems, the power of continental scale (three spatial dimensions) will be enriched by a fourth dimension: time.
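In spirit, that calibration could be as simple as regressing one system’s fluxes on the other’s over a period when both instruments run side by side, then mapping the legacy record onto the new scale. A hedged sketch with invented data follows; a rigorous harmonization would also need quality flags, footprint matching, and uncertainty propagation:

```python
# Sketch of cross-calibrating two flux records via a linear fit on an
# overlap period. Purely illustrative: the data are invented, and real
# NEON/Ameriflux harmonization involves far more than a least-squares line.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical co-located half-hourly fluxes during an overlap period
ameriflux = rng.normal(-2.0, 1.5, 500)                   # legacy tower
neon = 1.08 * ameriflux + 0.3 + rng.normal(0, 0.2, 500)  # new tower, slight bias

# Fit neon ~ slope * ameriflux + offset, then rescale the legacy record
slope, offset = np.polyfit(ameriflux, neon, 1)
ameriflux_on_neon_scale = slope * ameriflux + offset

print(f"slope = {slope:.3f}, offset = {offset:.3f}")
```

If the fit holds up across seasons and conditions, the 25-year Ameriflux record and the new NEON record can be stitched into one consistent time series, which is where that fourth dimension comes from.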

A bit about my new gig….

The summer break here at George Mason is coming to an end, classes begin in about two weeks, and I thought it would be good to write a bit about my new life as a plain old professor here at the Schar School. When I left NSF in January, I had negotiated my return to the University to reflect the public policy experience involved in running the Biological Sciences Directorate. Additionally, it had become clear to me that after 23 years in one administrative role after another, I wanted a change in the direction of more time to teach and do research. So when it was approved that my faculty line would be moved from the Krasnow Institute to the Schar School here in Arlington, I was really jazzed. There was the additional benefit that the commuting distance would be halved.

I did start, though, with some trepidation. I had effectively been out of academia for more than three years—and that in spite of NSF’s program for supporting rotators in staying involved with research at their home institutions. That might work at the Program Director level at NSF, but it’s really not practical when you are responsible for an entire directorate. As a result, I was very rusty from the standpoint of both teaching and research—the two things I would be expected to do as a professor. Hence, it was a real confidence builder to get a grant in the first weeks that I was back and to actually jump back into teaching (rather than worrying about it).

I find that these past months have been some of the most satisfying of my professional life. The sheer pleasure of quiet time to think about science, rather than having to instantly react to some crisis, is not to be underestimated. And I have found that my interests extend across a much wider landscape than before I left Mason for NSF. My current grant is on AI. The next one will probably be on metagenomics. Who knows what will come next!