The research team moved carefully through the forest canopy platform at dusk, nets ready. In Gabon and the Republic of Congo during the mid-2000s, international ecologists were hunting for the reservoir host of Ebola virus. They targeted fruit bat colonies—hammer-headed bats, Franquet’s epauletted bats, little collared fruit bats—collecting blood samples and oral swabs.
By December 2005, they had their answer, published in Nature. They’d found Ebola RNA and antibodies in three species of fruit bats across Central Africa. For years, scientists had known Ebola emerged periodically, but couldn’t identify where the virus persisted between human epidemics. This research provided the answer: fruit bats, widely distributed and increasingly in contact with humans as deforestation pushed people deeper into forests.
That discovery triggered a wave of follow-up research, much of it funded through the Ecology and Evolution of Infectious Diseases program—EEID—a joint NSF-NIH-USDA initiative I would later help oversee. EEID-funded teams documented how human activities created spillover opportunities: bushmeat hunting, agricultural expansion into bat habitat, mining operations bringing workers into forests. They identified cultural practices that facilitated transmission: burial traditions, preparation of bushmeat, children playing with dead animals. They built mathematical models of how Ebola moved from bats to humans and then through human populations. The science showed where Ebola lived, how it spilled over, and which human behaviors created risk.
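To make "mathematical models" concrete for readers who haven't seen one, here is a minimal sketch of the general structure such models take: a small spillover term feeding cases from the bat reservoir into a standard SEIR model of human-to-human spread. The function name and every parameter value below are illustrative placeholders of mine, not the actual EEID-funded models.

```python
# Minimal sketch of a reservoir-spillover SEIR model (illustrative only).
def simulate(days=365, population=1_000_000, spillover_rate=1e-7,
             beta=0.3, incubation_days=9.0, infectious_days=7.0):
    """Daily-step SEIR seeded by a constant bat-to-human spillover term.
    All parameter values are placeholders, not published estimates."""
    S, E, I, R = float(population), 0.0, 0.0, 0.0
    history = []
    for day in range(days):
        spillover = spillover_rate * S           # reservoir -> human exposures
        exposure = beta * S * I / population     # human -> human exposures
        onset = E / incubation_days              # exposed become infectious
        removal = I / infectious_days            # infectious recover or die
        S -= exposure + spillover
        E += exposure + spillover - onset
        I += onset - removal
        R += removal
        history.append((day, S, E, I, R))
    return history

if __name__ == "__main__":
    for day, S, E, I, R in simulate()[::60]:
        print(f"day {day:3d}: infectious={I:10.1f}  removed={R:12.1f}")
```

Models in the published literature are of course far richer, incorporating seasonal bat ecology, spatial structure, and behavioral data, but the basic logic is the same: estimate how often the reservoir seeds human cases, then how far those cases spread.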
Yet eight years after that initial Nature paper—after years of EEID-funded research mapping Ebola ecology—the virus emerged in Guinea in late 2013 and was identified in March 2014. A two-year-old boy, likely exposed through contact with bats, became patient zero. Within months, the outbreak had spread to Liberia and Sierra Leone. By 2016, more than 28,000 people had been infected and more than 11,000 had died. The economic impact exceeded $2.8 billion.
I was leading NSF’s Biological Sciences Directorate at the time, overseeing NSF’s role in EEID. We had funded years of follow-up research. We knew fruit bats harbored Ebola. We had models for predicting transmission. We had mapped high-risk regions. And yet more than 11,000 people died anyway. All of this foreshadowed what would happen later, on a much larger scale, with SARS-CoV-2.
Here is the uncomfortable question I’ve been wrestling with ever since: If we funded the right science and had years of warning, why were we not better prepared?
What EEID Was Supposed to Do
EEID launched in 2000 because infectious disease ecology fell between agency missions. NSF supported ecology but wasn’t focused on disease. NIH funded disease research but wasn’t equipped for field ecology. USDA cared about agricultural diseases but not the broader ecological context. The program brought all three together: NSF’s ecological expertise, NIH’s disease knowledge, and USDA’s understanding of agricultural-wildlife interfaces.
The administrative structure was elegant on paper. All proposals submitted through NSF underwent joint review by all three agencies, and then any agency could fund meritorious proposals based on mission fit. For Ebola research, this meant NSF might fund the bat ecology, NIH’s Fogarty International Center might support the human health surveillance component, and USDA might fund work on bushmeat practices—different pieces of the same puzzle, coordinated through a single program.
The program typically made 6-10 awards per year, totaling $15-25 million across agencies. Not huge money, but enough to support interdisciplinary teams working across continents. And it worked—EEID funded excellent science at the intersection of ecology and disease that no single agency could have supported alone.
Why Interagency Collaboration Is Genuinely Hard
When I arrived at NSF in 2014 with the outbreak at its peak, I inherited EEID oversight and quickly discovered that elegant-on-paper doesn’t mean easy-in-practice. The deepest challenges weren’t administrative—they were cultural.
NSF and NIH approach science from fundamentally different starting points. NSF’s mission is discovery-driven basic research. When NSF reviewers evaluate proposals, they ask: Is this important science? Will it advance the field? NIH’s mission is health-focused and translational. NIH reviewers want to know: Will this help prevent or treat disease? What’s the public health significance?
I saw this play out in a particularly contentious panel meeting around 2016. Our panelists were reviewing a proposal on rodent-borne hantaviruses in the southwestern U.S.—excellent ecology, good epidemiology, solid modeling. The NSF reviewers loved it: beautiful natural history, important insights about how environmental variability affects transmission. The NIH reviewers were skeptical: where was the preliminary data on human infection? How would this lead to intervention?
We spent an hour debating what constituted “good preliminary data.” For NSF reviewers, the PI’s previous work establishing field sites was sufficient—it showed feasibility. NIH reviewers wanted preliminary data on the virus itself, on infection rates. They weren’t being unreasonable—they were applying NIH’s standards. But we were talking past each other.
That debate crystallized the challenge. Two agencies with different cultures had to agree on the same proposals. Sometimes it created productive tension. Sometimes it just meant frustration.
The administrative burden on investigators was worse than we acknowledged. When NIH selected a proposal for funding instead of NSF, the PI had to completely reformat everything for NIH’s system—different page limits, different budget structures, different reporting requirements. This could add 3-6 months to award start dates. Try explaining to a collaborator in Guinea why you don’t know which U.S. agency will fund your project or when you’ll actually get money.
For program officers, EEID meant constant coordination overhead—meetings to discuss priorities, coordinating review panel schedules across agencies, negotiating which agency would fund which proposals. This work wasn’t counted in official program costs, but it was real. Hours we could have spent on other portfolio management.
Despite all this friction, EEID succeeded at its core mission. It funded research that advanced both fundamental science and disease understanding. When the 2014 Ebola outbreak hit, epidemiologists reached for transmission models developed through EEID grants. The program had trained a generation of researchers in genuinely interdisciplinary work.
What the 2014 Outbreak Exposed
But here’s what haunts me: we funded the science but not the systems. By 2014, nearly a decade of research had confirmed fruit bats as Ebola reservoirs, mapped their distribution across Africa, and identified high-risk human-bat contact zones. Papers were published in top journals. And then… nothing. No one built surveillance systems in West African villages where contact with bats was common. No one established early warning networks. No one created mechanisms to translate “we found Ebola in these bats” into “we’re monitoring for spillover in Guinea.”
EEID funded research, not surveillance. That’s appropriate—it’s a research program, not an operational public health system. But there was no mechanism to bridge the gap. When EEID-funded scientists discovered important findings, those findings stayed in academic papers. They didn’t flow to CDC, didn’t trigger surveillance efforts, didn’t inform preparedness planning.
During our quarterly coordination calls with NIH and USDA program officers, the question would occasionally arise: Who’s responsible for acting on what we’re learning? If EEID research identifies high-risk pathogen reservoirs, whose job is it to establish surveillance? The answer was usually silence, then acknowledgment that it wasn’t our job—we fund research—but uncertainty about whose job it was.
The missing infrastructure was organizational, not intellectual. We knew enough to be better prepared. The problem was lack of systems to act on knowledge. No agency was responsible for translating academic research into surveillance systems. CDC focuses on domestic diseases. NIH funds research but doesn’t run operations overseas. USAID’s PREDICT program did fund surveillance but didn’t have coverage in Guinea. We had pieces of the puzzle but no mechanism to assemble them.
I remember discussions about whether EEID should become more operational—perhaps requiring funded projects to include surveillance components. The response was always that this would fundamentally change the program’s character. NSF resists mission-directed research. My former agency’s strength is supporting investigator-driven discovery. Making EEID operational would require new authorities spanning multiple agencies and, most importantly, substantially more funding. A research program can’t solve an operational preparedness gap.
The scale problem was obvious. At $15-25 million per year, EEID could support excellent science but not comprehensive surveillance. Think about what that would require: ongoing monitoring in multiple countries, relationships with local health systems, rapid response capacity, and laboratory infrastructure. This requires hundreds of millions annually, not tens of millions.
The timeline mismatch was equally frustrating. Research operates on slow timescales—EEID grants ran five years, and from proposal to publication might take 6-7 years. The initial bat reservoir discovery was published in 2005. If that had immediately triggered surveillance in West Africa, we’d have had more than eight years before the 2014 outbreak. But triggering surveillance takes decisions, funding, international coordination—processes that themselves take years. By the time anyone might have acted, attention had moved elsewhere.
What This Means for Pandemic Preparedness
The most troubling insight: we knew enough to be better prepared for Ebola, and later for COVID-19, but knowledge alone wasn’t enough. EEID succeeds at advancing knowledge but can’t create surveillance systems, can’t fund operational preparedness, can’t bridge the gap between discovering threats and preventing epidemics. That gap is organizational and political, not scientific.
Should we expand EEID? More funding would support more projects, but it wouldn’t solve the fundamental problem. You could triple EEID’s budget and still have the research-to-surveillance gap. More papers about bat reservoirs don’t automatically create early warning systems. The limitation isn’t insufficient research funding—it’s absence of operational systems to act on research findings.
We need something structurally different. Here’s what I’d do:
First, create a rapid-response funding mechanism within EEID. When Ebola emerged in 2014, imagine if researchers could have gotten funding within weeks to investigate transmission dynamics and surveillance in surrounding regions, rather than waiting for the next annual competition. Model this on NSF’s RAPID program—streamlined review, modest awards ($100-200K for one year), quick deployment—but fund it from a dedicated pool of money contributed by all the participating agencies.
Second, establish formal connections between EEID and operational agencies. This is the biggest gap. Require EEID-funded researchers to submit one-page “surveillance implications” memos with final reports, which program officers share with CDC, USAID, and WHO. Better yet, have CDC or BARDA co-fund some EEID proposals with clear surveillance applications. Create visiting scholar programs where CDC epidemiologists spend time with EEID research teams and vice versa.
Third, strengthen international partnerships with genuine co-leadership. The 2014 outbreak showed the cost of inadequate surveillance infrastructure in West Africa. Expand EEID to include more disease hotspot regions—India, Brazil, Indonesia, DRC, West African nations—where foreign investigators can be lead PIs, foreign institutions receive and administer funds, and research priorities reflect host country needs. This isn’t altruism—it’s pragmatic self-interest.
The Larger Lesson
Interagency collaboration is genuinely hard—the friction I described isn’t fixable through better management. It’s inherent when bringing together organizations with different missions and cultures. EEID proves such collaboration can work and produce excellent science. But it requires sustained effort, goodwill, and tolerance for complexity.
The alternative—each agency in its silo—is worse. Infectious disease ecology requires expertise no single agency possesses. Complex problems require complex solutions. EEID demonstrated this is possible. The challenge is making it sufficient.
What haunts me is that we’re probably going to repeat the pattern. Right now, post-COVID, pandemic preparedness has political salience. But history suggests this won’t last. After the 2014-2016 Ebola outbreak, there was similar urgency. Within a few years, budgets declined and attention shifted. USAID’s PREDICT program was terminated in 2019—just months before COVID—due to budget constraints. We cut surveillance funding during a quiet period, then paid an enormous price when the next pandemic hit.
Prevention is invisible. We never know which pandemics we successfully prevented. There’s no constituency defending preparedness funding when cuts loom. That’s the structural problem we haven’t solved.
What Needs to Happen
Will we learn from EEID’s experience and build the infrastructure we need? Or will we fund the right research but lack systems to act on it—again?
The answer depends on recognizing that pandemic preparedness isn’t primarily a scientific challenge—we know enough—but an organizational and political one. Can we create structures spanning research and operations? Can we sustain funding between crises? Can we build systems robust enough to survive political leadership changes?
EEID succeeded at what a research program can do: funding excellent science that advanced understanding. The larger failure—inadequate pandemic preparedness—requires solutions at different organizational levels. But EEID’s experience provides a foundation: proof that interagency collaboration can work, that we can identify threats before they become catastrophes.
The team collecting bat samples in Central African forests did their job. They found the virus, mapped the threat, advanced our understanding. The question for the rest of us—program officers, policymakers, public health officials, citizens who fund this through taxes—is whether we’ll do our job: building systems that turn knowledge into prevention.
Science can identify threats. But preventing pandemics requires more than science. It requires sustained organizational commitment, interagency coordination, international cooperation, and political will—especially during quiet periods when threats seem distant. EEID demonstrated the scientific component is feasible.
The rest is up to us. And based on what I’ve seen, I’m not optimistic we’ll get it right before the next one hits.