The Board That Couldn’t Be Fired (Until It Was)

On Friday, April 25, each of the 24 members of the National Science Board received a short email from the Presidential Personnel Office. It read, in full: “On behalf of President Donald J. Trump, I am writing to inform you that your position as a member of the National Science Board is terminated, effective immediately. Thank you for your service.”

No explanation. No warning. No phone call.

I’ve worked closely with the National Science Board — both as NSF’s Assistant Director for Biological Sciences and as a co-author of a 2022 piece in Issues in Science and Technology that argued, in some detail, for restructuring the Board’s role. I know what the NSB does. I know where its governance model has worked, and where it has struggled. And I want to explain why Friday’s action represents a significant break from the historical pattern of how administrations — including ones that were frustrated with the Board — have navigated this relationship. I also want to suggest that the disruption, as damaging as it is in the short term, might create an opening for a structural reform that is long overdue.

The founding compromise

NSF and the National Science Board were created together in 1950. Their joint structure was itself a political compromise. Vannevar Bush had wanted something like a corporate board — presidentially appointed, with authority to hire and fire the foundation’s director. Senator Kilgore wanted a director accountable directly to the president, not to any board. What emerged was a blend of both visions, with one key question settled in Kilgore’s favor: the director would be accountable to the president, not to the board. But Bush got his presidentially appointed board, and both entities were given shared executive authority over the foundation.

This “two-headed structure,” as NSF historian J. Merton England called it, has no real analog elsewhere in the federal government. NASA’s administrator, for instance, holds unambiguous authority over that agency. NSF’s director manages day-to-day operations and serves as the public face of the foundation — but the Board retains statutory authority to set policy and to approve the foundation’s largest awards and infrastructure investments. That authority is written into the National Science Foundation Act. It cannot be waived by an OMB directive, and it cannot be delegated away.

A structure that has survived previous tensions

The dual-authority structure has never been frictionless. Every administration since Truman has, at some point, found the NSB’s independence inconvenient. Nixon’s science advisers viewed the Board’s independence as an obstacle to the president’s control over research priorities. Reagan’s OMB pushed to reduce NSF’s budget in ways the Board opposed. The pattern across 76 years is consistent: administrations applied pressure, the Board pushed back, and the institutional structure held. The resolution was always a political negotiation, not a structural rupture.

What made that equilibrium possible was a shared understanding — across Democratic and Republican administrations alike — that the NSB’s statutory authority wasn’t optional. Appointments could be shaped to favor policy directions, and they often were. But the institution itself, and the staggered six-year terms that insulate members from any single administration’s full control, remained intact.

Friday’s action is the first time in that 76-year history that a president has simply removed the entire Board at once.

What the NSB does — and where it has struggled

In a 2022 article in Issues in Science and Technology, my colleagues Jessica Rosenberg, Nicholas Robichaud, and I argued that NSF’s governance structure needed reform. The NEON crisis — the near-collapse of the National Ecological Observatory Network during construction, which I was directly involved in managing as BIO’s assistant director — had exposed a real tension at the heart of the dual structure. NSB members are selected for scientific distinction, not for the project management and business expertise required to oversee a $500 million infrastructure build. The crisis produced competing ad hoc processes, overlapping oversight bodies, and structural confusion about who was ultimately responsible for what.

Our recommendation was not to eliminate the Board. It was to refocus it — to pull NSB back from day-to-day management functions and lean into what it can do that no other body in the federal government does: provide independent scientific advice to the president and Congress, across administrations, insulated from the political moment by staggered six-year terms.

That’s the NSB that was dismissed on Friday.

What triggered this

The Board’s public criticism last May of the proposed 55% cut to NSF’s budget appears to be the proximate cause. NSB chair Victor McCrary and his colleagues advised Congress to reject the cut. Congress did. Board member Keivan Stassun, an astrophysicist at Vanderbilt, described the dynamic with characteristic precision: this group of presidential appointees was advising Congress not to follow the president’s wishes.

That is, in fact, exactly what the Board’s design anticipates. Staggered six-year terms exist precisely to create a body capable of offering independent scientific judgment even when the political moment runs against it. The same design logic underlies independent inspectors general and the Federal Reserve. The underlying theory is that some decisions — particularly investments in basic research that only pay off over decades — are too consequential to be made entirely within the horizon of any single election cycle.

The Board had also been bypassed on a major facilities decision. OMB told NSF’s chief of research facilities directly that NSF would build a new Antarctic research icebreaker — a commitment that required statutory Board approval — with no Board involvement whatsoever. When Board members asked what had happened, the answer was essentially: OMB said so. That sequence illustrates the structural tension that preceded Friday’s action.

What the historical record tells us about what comes next

The deeper historical question is not what happens to the 24 dismissed members — it’s what the institution becomes under whatever replaces them.

The NSB’s value as an advisory body has always rested on a specific institutional property: members who serve across administrations, whose six-year terms mean that no single president ever fully controls the Board’s composition. That property is what allows the Board to offer advice — to Congress, to the president, to the public — that is credibly independent of the political moment. It is also, of course, precisely what makes the Board structurally resistant to any administration that would prefer unanimous alignment.

The historical record of analogous institutions suggests the outcome depends heavily on what happens next. Advisory bodies reconstituted with members selected primarily for political alignment tend to lose their epistemic authority relatively quickly — not because their members are unqualified, but because the perception of independence, once lost, is difficult to restore. The Board’s advice to Congress carried weight partly because it came from a body that had, under previous administrations, offered advice those administrations didn’t always want to hear.

Former Board member Alondra Nelson, who resigned in May 2025 after concluding that the Board had been strategically neutralized, described Friday’s development as the progression from erosion to open elimination. The institutional history supports that framing: what changed Friday was not the direction of travel, but the pace.

A note on what NSF does

The foundation is often described in coverage of this story simply as a “grant agency.” That undersells it considerably.

NSF is the primary federal funder of basic research across non-biomedical science and engineering — the Antarctic stations, the major telescopes, the oceanographic vessels, the long-term ecological observatories. The science behind MRI, cellphone technology, and LASIK surgery — all of it traces partly through NSF. More fundamentally, NSF supports the graduate students and early-career investigators who will produce the next generation of discoveries we cannot yet anticipate. The foundation does this on timescales that no administration and no budget cycle can fully capture.

The NSB that was dismissed on Friday was the statutory body designed to protect that investment across exactly those timescales. That design has survived fourteen presidents. What it looks like under the fifteenth remains to be seen.

A possible way forward: rethinking the relationship between NSB and PCAST

Disruptions of this magnitude, whatever their cause, sometimes create space for structural reforms that would otherwise be impossible to negotiate. This may be one of those moments — though the opportunity it presents is more pointed than a simple merger.

The President’s Council of Advisors on Science and Technology — PCAST — is a presidential advisory body created by executive order, serving at the president’s discretion, with no independent statutory authority over federal research agencies. The National Science Board, by contrast, has deep statutory roots, staggered terms, and dual authority over both NSF governance and national science policy advice. In practice, the two bodies have often operated on parallel tracks, producing overlapping reports and competing for the attention of the same senior officials.

What makes the current moment structurally interesting is not that both bodies are vacant — they are not. The current administration reestablished PCAST in January 2025. It announced its membership in March 2026: a roster dominated by technology sector leaders, including Marc Andreessen, Sergey Brin, Larry Ellison, Jensen Huang, and Mark Zuckerberg. PCAST is active. It is the NSB that is now empty.

That juxtaposition defines the real question Friday’s action raises. The current PCAST represents one coherent vision of what science advice to the president should look like: private sector technologists, oriented toward near-term innovation and commercial application, serving entirely at the president’s pleasure. The NSB tradition represents a different vision: academic researchers and scientists, oriented toward long-term basic research and cross-administration continuity, with statutory independence from any single administration’s priorities. These are not merely different rosters — they reflect genuinely different theories of what science policy is for.

The NSB vacancy is the moment for Congress to ask which model it wants to enshrine, and whether a better design might draw from both. One approach worth considering: a consolidated body with tiered membership — a statutory core of Senate-confirmed members with staggered terms, responsible for NSF governance and independent science policy advice to Congress, combined with a rotating presidential advisory panel of shorter-term appointees focused on near-term innovation priorities and direct White House engagement. The statutory core would preserve the independence and cross-administration continuity that gives science advice its credibility with Congress. The presidential panel would provide the responsiveness to current priorities that PCAST has historically supplied.

The design challenge is real: a body with statutory authority and staggered terms is, by design, harder for any sitting president to fully control, which is precisely what gives its advice credibility. Resolving that tension in legislation has never been easy. But the scientific community and its allies on Capitol Hill would be better served by proposing a durable structural solution than by simply advocating for restoration of the status quo ante — a status quo that, as Friday demonstrated, was more fragile than anyone had fully reckoned with.

There were real arguments to be made about reforming how the Board exercises its authority. I made some of them. But the historical record doesn’t offer a precedent for what a mass dismissal — rather than strategic appointment pressure — does to an institution whose core value is independence. We are in new territory.

Victor McCrary said it plainly: “If the White House wants the golden age of science that Trump has promised, now is not the time to go backward. Instead, we need to spend more.”

The history of American science policy suggests he’s right about the investment argument. The structural question now is whether the vacancy created on Friday becomes simply a void or the opening for something more durable.

When Agencies Collaborate: What EEID Teaches Us About Pandemic Preparedness

The research team moved carefully through the forest canopy platform at dusk, nets ready. In Gabon and the Republic of Congo during the mid-2000s, international ecologists were hunting for the reservoir host of Ebola virus. They targeted fruit bat colonies—hammer-headed bats, Franquet’s epauletted bats, little collared fruit bats—collecting blood samples and oral swabs.

By December 2005, they had their answer, published in Nature. They’d found Ebola RNA and antibodies in three species of fruit bats across Central Africa. For years, scientists had known Ebola emerged periodically, but couldn’t identify where the virus persisted between human epidemics. This research provided the answer: fruit bats, widely distributed and increasingly in contact with humans as deforestation pushed people deeper into forests.


That discovery triggered a wave of follow-up research, much of it funded through the Ecology and Evolution of Infectious Diseases program—EEID—a joint NSF-NIH-USDA initiative I would later help oversee. EEID-funded teams documented how human activities created spillover opportunities: bushmeat hunting, agricultural expansion into bat habitat, mining operations bringing workers into forests. They identified cultural practices that facilitated transmission: burial traditions, preparation of bushmeat, children playing with dead animals. They built mathematical models of how Ebola moved from bats to humans and then through human populations. The science showed where Ebola lived, how it spilled over, and which human behaviors created risk.

Yet nine years after that initial Nature paper—after years of EEID-funded research mapping Ebola ecology—the virus emerged in Guinea in late 2013 and was identified in March 2014. A two-year-old boy, likely exposed through contact with bats, became patient zero. Within months, the outbreak had spread to Liberia and Sierra Leone. By 2016, more than 28,000 people were infected and 11,000 died. The economic impact exceeded $2.8 billion.

I was leading NSF’s Biological Sciences Directorate at the time, overseeing NSF’s role in EEID. We had funded years of follow-up research. We knew fruit bats harbored Ebola. We had models for predicting transmission. We had mapped high-risk regions. And yet 11,000 people died anyway. All of it foreshadowed what would happen later, at far larger scale, with SARS-CoV-2.

Here is the uncomfortable question I’ve been wrestling with ever since: If we funded the right science and had years of warning, why were we not better prepared?

What EEID Was Supposed to Do

EEID launched in 2000 because infectious disease ecology fell between agency missions. NSF supported ecology but wasn’t focused on disease. NIH funded disease research but wasn’t equipped for field ecology. USDA cared about agricultural diseases but not the broader ecological context. The program brought all three together: NSF’s ecological expertise, NIH’s disease knowledge, and USDA’s understanding of agricultural-wildlife interfaces.

The administrative structure was elegant on paper. All proposals submitted through NSF underwent joint review by all three agencies, and then any agency could fund meritorious proposals based on mission fit. For Ebola research, this meant NSF might fund the bat ecology, NIH’s Fogarty International Center might support the human health surveillance component, and USDA might fund work on bushmeat practices—different pieces of the same puzzle, coordinated through a single program.

The program typically made 6-10 awards per year, totaling $15-25 million across agencies. Not huge money, but enough to support interdisciplinary teams working across continents. And it worked—EEID funded excellent science at the intersection of ecology and disease that no single agency could have supported alone.

Why Interagency Collaboration Is Genuinely Hard

When I arrived at NSF in 2014 with the outbreak at its peak, I inherited EEID oversight and quickly discovered that elegant-on-paper doesn’t mean easy-in-practice. The deepest challenges weren’t administrative—they were cultural.

NSF and NIH approach science from fundamentally different starting points. NSF’s mission is discovery-driven basic research. When NSF reviewers evaluate proposals, they ask: Is this important science? Will it advance the field? NIH’s mission is health-focused and translational. NIH reviewers want to know: Will this help prevent or treat disease? What’s the public health significance?

I saw this play out in a particularly contentious panel meeting around 2016. Our panelists were reviewing a proposal on rodent-borne hantaviruses in the southwestern U.S.—excellent ecology, good epidemiology, solid modeling. The NSF reviewers loved it: beautiful natural history, important insights about how environmental variability affects transmission. The NIH reviewers were skeptical: where was the preliminary data on human infection? How would this lead to intervention?

An hour passed debating what constituted “good preliminary data.” For NSF reviewers, the PI’s previous work establishing field sites was sufficient—it showed feasibility. NIH reviewers wanted preliminary data on the virus itself, on infection rates. They weren’t being unreasonable—they were applying NIH’s standards. But we were talking past each other.

That debate crystallized the challenge. Two agencies with different cultures had to agree on the same proposals. Sometimes it created productive tension. Sometimes it just meant frustration.

The administrative burden on investigators was worse than we acknowledged. When NIH selected a proposal for funding instead of NSF, the PI had to completely reformat everything for NIH’s system—different page limits, different budget structures, different reporting requirements. This could add 3-6 months to award start dates. Try explaining to a collaborator in Guinea why you don’t know which U.S. agency will fund your project or when you’ll actually get money.

For program officers, EEID meant constant coordination overhead—meetings to discuss priorities, coordinating review panel schedules across agencies, negotiating which agency would fund which proposals. This work wasn’t counted in official program costs, but it was real. Hours we could have spent on other portfolio management.

Despite all this friction, EEID succeeded at its core mission. It funded research that advanced both fundamental science and disease understanding. When the 2014 Ebola outbreak hit, epidemiologists reached for transmission models developed through EEID grants. The program had trained a generation of researchers in genuinely interdisciplinary work.

What the 2014 Outbreak Exposed

But here’s what haunts me: we funded the science but not the systems. By 2014, nearly a decade of research had confirmed fruit bats as Ebola reservoirs, mapped their distribution across Africa, and identified high-risk human-bat contact zones. Papers were published in top journals. And then… nothing. No one built surveillance systems in West African villages where contact with bats was common. No one established early warning networks. No one created mechanisms to translate “we found Ebola in these bats” into “we’re monitoring for spillover in Guinea.”

EEID funded research, not surveillance. That’s appropriate—it’s a research program, not an operational public health system. But there was no mechanism to bridge the gap. When EEID-funded scientists discovered important findings, those findings stayed in academic papers. They didn’t flow to CDC, didn’t trigger surveillance efforts, didn’t inform preparedness planning.

During our quarterly coordination calls with NIH and USDA program officers, the question would occasionally arise: Who’s responsible for acting on what we’re learning? If EEID research identifies high-risk pathogen reservoirs, whose job is it to establish surveillance? The answer was usually silence, then acknowledgment that it wasn’t our job—we fund research—but uncertainty about whose job it was.

The missing infrastructure was organizational, not intellectual. We knew enough to be better prepared. The problem was lack of systems to act on knowledge. No agency was responsible for translating academic research into surveillance systems. CDC focuses on domestic diseases. NIH funds research but doesn’t run operations overseas. USAID’s PREDICT program did fund surveillance but didn’t have coverage in Guinea. We had pieces of the puzzle but no mechanism to assemble them.

I remember discussions about whether EEID should become more operational—perhaps requiring funded projects to include surveillance components. The response was always that this would fundamentally change the program’s character. NSF resists mission-directed research. My former agency’s strength is supporting investigator-driven discovery. Making EEID operational would require multiple agencies and authorities, and, most importantly, substantially more funding. A research program can’t solve an operational preparedness gap.

The scale problem was obvious. At $15-25 million per year, EEID could support excellent science but not comprehensive surveillance. Think about what that would require: ongoing monitoring in multiple countries, relationships with local health systems, rapid response capacity, and laboratory infrastructure. This requires hundreds of millions annually, not tens of millions.

The timeline mismatch was equally frustrating. Research operates on slow timescales—EEID grants ran five years, and from proposal to publication might take 6-7 years. The initial bat reservoir discovery was published in 2005. If that had immediately triggered surveillance in West Africa, we’d have had nearly nine years before the 2014 outbreak. But triggering surveillance takes decisions, funding, international coordination—processes that themselves take years. By the time anyone might have acted, attention had moved elsewhere.

What This Means for Pandemic Preparedness

The most troubling insight: we knew enough to be better prepared for Ebola, and later for COVID-19, but knowledge alone wasn’t enough. EEID succeeds at advancing knowledge but can’t create surveillance systems, can’t fund operational preparedness, can’t bridge the gap between discovering threats and preventing epidemics. That gap is organizational and political, not scientific.

Should we expand EEID? More funding would support more projects, but it wouldn’t solve the fundamental problem. You could triple EEID’s budget and still have the research-to-surveillance gap. More papers about bat reservoirs don’t automatically create early warning systems. The limitation isn’t insufficient research funding—it’s absence of operational systems to act on research findings.

We need something structurally different. Here’s what I’d do:

First, create a rapid-response funding mechanism within EEID. When Ebola emerged in 2014, imagine if researchers could have gotten funding within weeks to investigate transmission dynamics and surveillance in surrounding regions, rather than waiting for the next annual competition. Model this on NSF’s RAPID program—streamlined review, modest awards ($100-200K for one year), quick deployment—but create an entirely different pocket of money for it from all the participating funders.

Second, establish formal connections between EEID and operational agencies. This is the biggest gap. Require EEID-funded researchers to submit one-page “surveillance implications” memos with final reports, which program officers share with CDC, USAID, and WHO. Better yet, have CDC or BARDA co-fund some EEID proposals with clear surveillance applications. Create visiting scholar programs where CDC epidemiologists spend time with EEID research teams and vice versa.

Third, strengthen international partnerships with genuine co-leadership. The 2014 outbreak showed the cost of inadequate surveillance infrastructure in West Africa. Expand EEID to include more disease hotspot regions—India, Brazil, Indonesia, DRC, West African nations—where foreign investigators can be lead PIs, foreign institutions receive and administer funds, and research priorities reflect host country needs. This isn’t altruism—it’s pragmatic self-interest.

The Larger Lesson

Interagency collaboration is genuinely hard—the friction I described isn’t fixable through better management. It’s inherent when bringing together organizations with different missions and cultures. EEID proves such collaboration can work and produce excellent science. But it requires sustained effort, goodwill, and tolerance for complexity.

The alternative—each agency in its silo—is worse. Infectious disease ecology requires expertise no single agency possesses. Complex problems require complex solutions. EEID demonstrated this is possible. The challenge is making it sufficient.

What haunts me is that we’re probably going to repeat the pattern. Right now, post-COVID, pandemic preparedness has political salience. But history suggests this won’t last. After the 2014-2016 Ebola outbreak, there was similar urgency. Within a few years, budgets declined and attention shifted. USAID’s PREDICT program was terminated in 2019—just months before COVID—due to budget constraints. We cut surveillance funding during a quiet period, then paid an enormous price when the next pandemic hit.

Prevention is invisible. We never know which pandemics we successfully prevented. There’s no constituency defending preparedness funding when cuts loom. That’s the structural problem we haven’t solved.

What Needs to Happen

Will we learn from EEID’s experience and build the infrastructure we need? Or will we fund the right research but lack systems to act on it—again?

The answer depends on recognizing that pandemic preparedness isn’t primarily a scientific challenge—we know enough—but an organizational and political one. Can we create structures spanning research and operations? Can we sustain funding between crises? Can we build systems robust enough to survive political leadership changes?

EEID succeeded at what a research program can do: funding excellent science that advanced understanding. The larger failure—inadequate pandemic preparedness—requires solutions at different organizational levels. But EEID’s experience provides a foundation: proof that interagency collaboration can work, that we can identify threats before they become catastrophes.

The team in Central African forests collecting bat samples did their job. They found the virus, mapped the threat, advanced our understanding. The question for the rest of us—program officers, policymakers, public health officials, citizens who fund this through taxes—is whether we’ll do our job: building systems that turn knowledge into prevention.

Science can identify threats. But preventing pandemics requires more than science. It requires sustained organizational commitment, interagency coordination, international cooperation, and political will—especially during quiet periods when threats seem distant. EEID demonstrated the scientific component is feasible.

The rest is up to us. And based on what I’ve seen, I’m not optimistic we’ll get it right before the next one hits.

How Will You Know You’ve Succeeded? A BRAIN story

August 2008: a summer day in Mountain View, California. The previous year, the Krasnow Institute for Advanced Study, which I was leading at George Mason University, had developed a proposal to invest $3 billion in figuring out how mind emerges from brains, and now I had to make the case that it deserved to be a centerpiece of a new administration’s science agenda. Three billion dollars is not a small ask, especially with the 2008 financial crisis accelerating.

Before this moment, the project had evolved organically: a kickoff meeting at the Krasnow Institute near D.C., a joint manifesto published in Science, and follow-on events in Des Moines, Berlin, and Singapore to emphasize the broader dimensions of such a large neuroscience collaboration. There had even been a radio interview with Oprah.

When I flew out to Google’s Mountain View headquarters in August 2008 for the SciFoo conference, I didn’t expect to be defending the future of neuroscience over lunch. But the individual running the science transition for the Obama presidential campaign had summoned me for what he described as a “simple” conversation: defend our idea for investing $3 billion over the next decade in neuroscience, with the audacious goal of explaining how “mind” emerges from “brains.” It was not the kind of meeting I was ready for.

I was nervous. As an institute director, I’d pitched for million-dollar checks. This was a whole new scale of fundraising for me. And though California was my native state, I’d never gone beyond being a student body president out there. Google’s headquarters in the summer of 2008 was an altar to Silicon Valley power.

SciFoo itself was still in its infancy then – the whole “unconference” concept felt radical and exciting, a fitting backdrop for pitching transformational science. But the Obama campaign wasn’t there for the unconventional meeting format. Google was a convenient meeting spot. And they wanted conventional answers.

I thought I made a compelling case: this investment could improve the lives of millions of patients with brain diseases. Neuroscience was on the verge of delivering cures. (I was wrong about that, but I believed it at the time.) The tools were ready. The knowledge was accumulating. We just needed the resources to put it all together.

Then I was asked the question that killed my pitch: “How will we know we have succeeded? What’s the equivalent of Kennedy’s moon landing – a clear milestone that tells us we’ve achieved what we set out to do?” You could see those astronauts come down the ladder of the lunar module. You could see that American flag on the moon. No such prospects with a large neuroscience initiative.

I had no answer.

I fumbled through some vague statements about understanding neural circuits and developing new therapies, but even as the words left my mouth, I knew they were inadequate. The moon landing worked as a political and scientific goal because it was binary: either we put a man on the moon or we didn’t. Either the flag was planted or it wasn’t.

But “explaining how mind emerges from brains”? When would we know we’d done that? What would success even look like?

The lunch ended politely. I flew back to DC convinced it had been an utter failure.

But that wasn’t the end of it. Five years later, at the beginning of Obama’s second term, we began to hear news of a large White House–driven initiative called the Brain Activity Map, or BAM for short. The idea was to comprehensively map the functional activity of brains at spatial and temporal resolutions beyond what was then available. It resembled my original pitch both in scale and in the conviction that understanding how mind emerges from brain function mattered. The goal for the new BAM project was to map between that activity and the brain’s emergent “mind”-like behavior, in both healthy and pathological cases. But the BAM trial balloon, even coming from the White House, was not an immediate slam dunk.

There was immediate push-back from large segments of the neuroscience community that felt excluded from BAM. But with a quick top-down recalibration by the White House Office of Science and Technology Policy and a whole-of-government approach that included multiple science agencies, BRAIN (Brain Research through Advancing Innovative Neurotechnologies) was born in April 2013.

A year later, in April of 2014, I was approached to head Biological Sciences at the US National Science Foundation. When I took the job that October, I was leading a directorate with a budget of $750 million annually that supported research across the full spectrum of the life sciences – from molecular biology to ecosystems. I would also serve as NSF’s co-lead for the Obama Administration’s BRAIN Initiative—an acknowledgement of the failed pitch in Mountain View, I guess.

October 2014: sworn in and meeting with my senior management team. Now here I was, a little more than a year into BRAIN. I had gotten what I’d asked for in Mountain View. Sort of. We had the funding, we had the talent, we had review panels evaluating hundreds of proposals. But I kept thinking about the question: the one I couldn’t answer then and still struggled with now. We had built this entire apparatus for funding transformational research, yet we were asking reviewers to apply the same criteria that would have rejected Einstein’s miracle year. How do you evaluate research when you can’t articulate clear success metrics? How do you fund work that challenges fundamental assumptions when your review criteria reward preliminary data and well-defined hypotheses?

Several months later, testifying before Congress about the BRAIN project, I remember fumbling again at the direct question of when we would deliver cures for dreaded brain diseases like ALS and schizophrenia. I punted: that was an NIH problem (even though the original pitch had been about delivering revolutionary treatments). At NSF, we were about understanding the healthy brain. In fact, how could you ever understand brain disease without a deep comprehension of the non-pathological condition?

It was a reasonable bureaucratic answer. NIH does disease; NSF does basic science. Clean jurisdictional boundaries. But sitting there in that hearing room, I realized I was falling into the same trap that had seemingly doomed our pitch in 2008: asked for a clear criterion of success and a date for delivering it, I was waffling. Only this time, I was the agent for the funder: the American taxpayer.

The truth was uncomfortable. We had launched an initiative explicitly designed to support transformational research – research that would “show us how individual brain cells and complex neural circuits interact” in ways we couldn’t yet imagine. But when it came time to evaluate proposals, we fell back on the same criteria that favored incrementalism: preliminary data, clear hypotheses, established track records, well-defined deliverables. We were asking Einstein for preliminary data on special relativity.

And we weren’t unique. This was the system. This was how peer review worked across federal science funding. We had built an elaborate apparatus designed to be fair, objective, and accountable to Congress and taxpayers. What we had built was a machine that systematically filtered out the kind of work that might transform neuroscience.

All of this was years before the “neuroscience winter,” when massive scientific misconduct was unearthed in neurodegenerative disease research, including Alzheimer’s research. But the modus operandi of BRAIN foreshadowed it.

Starting in 2022, a series of investigations revealed that some of the most influential research on Alzheimer’s disease—work that had shaped the field for nearly two decades and guided billions in research funding—was built on fabricated data. Images had been manipulated. Results had been doctored. And this work had sailed through peer review at top journals, had been cited thousands of times, and had successfully competed for grant funding year after year. The amyloid hypothesis, which this fraudulent research had bolstered, had become scientific orthodoxy not because the evidence was overwhelming, but because it fit neatly into the kind of clear, well-defined research program that review panels knew how to evaluate.

Here was the other side of the Einstein problem that I’ve mentioned in previous posts. The same system that would have rejected Einstein’s 1905 papers for lack of preliminary data and institutional support had enthusiastically funded research that looked rigorous but was fabricated. Because the fraudulent work had all the elements that peer review rewards: clear hypotheses, preliminary data, incremental progress building on established findings, well-defined success metrics. It looked like good science. It checked all the boxes.

Meanwhile, genuinely transformational work—the kind that challenges fundamental assumptions, that crosses disciplinary boundaries, that can’t provide preliminary data because the questions are too new—struggles to get funded. Not because reviewers are incompetent or malicious, but because we’ve built a system that is literally optimized to make these mistakes. We’ve created an apparatus that rewards the appearance of rigor over actual discovery, that favors consensus over challenge, that funds incrementalism and filters out transformation.

So, what’s the real function of peer review? It’s supposed to be about identifying transformative research, but I don’t think that’s its real purpose. To my mind, the real purpose of the peer review panels at NSF, and of the study sections at NIH, is to make inherently flawed funding decisions defensible, both to Congress and to the American taxpayer. NSF’s criteria of intellectual merit and broader impacts exist because they make the awarding of grant dollars auditable and fair-seeming, not because they identify breakthrough work.

But honestly, there’s a real dilemma here: if you gave out NSF’s annual budget based on a program officer’s feeling that “this seems promising”, you’d face legitimate questions about cronyism, waste and arbitrary decision-making. The current system’s flaws aren’t bad policy accidents; they are the price we pay for other values we also care about.

So, did the BRAIN Initiative deliver on that pitch I made in Mountain View in 2008? Did we figure out how ‘mind’ emerges from ‘brains’? In retrospect, I remain super impressed by NSF’s NeuroNex program: we got impressive technology – better ways to record from more neurons, new imaging techniques, sophisticated tools. We trained a generation of neuroscientists. But that foundational question – the one that made the political case, the one that justified the investment – we’re not meaningfully closer to answering it. We made incremental progress on questions we already knew how to ask. Which is exactly what peer review is designed to deliver. Oh, and one other thing was produced: NIH’s parent agency, the Department of Health and Human Services, got a trademark issued on the name of the initiative itself, BRAIN.

I spent four years as NSF’s co-lead on BRAIN trying to make transformational neuroscience happen within this system. I believed in it. I still believe in federal science funding. But I’ve stopped pretending the tension doesn’t exist. The very structure that makes BRAIN funding defensible to Congress made the transformational science we promised nearly impossible to deliver.

That failed pitch at Google’s headquarters in 2008? Turns out the question was spot on; we just never answered it.

Why Transformational Science Can’t Get Funded: The Einstein Problem

Proposal declined. Insufficient institutional support. No preliminary data. Applicant lacks relevant expertise—they work in a patent office, not a research laboratory. The proposed research is too speculative and challenges well-established physical laws without adequate justification. The principal investigator is 26 years old and has no prior experience in physics.

This would have been the fate of Albert Einstein in 1905, had the NSF existed then as it does today. Even with grant calls requesting ‘transformative ideas,’ an Einstein proposal would have been rejected outright. And yet 1905 has been called Einstein’s miracle year. Yes, he was a patent clerk working in Bern, Switzerland, without a university affiliation. He had access to neither a laboratory nor equipment. He worked in isolation on evenings and weekends and was unknown in the physics community. Yet, despite those disadvantages, he produced four revolutionary papers: on the photoelectric effect, on Brownian motion, on special relativity, and on the famous E=mc² energy-mass equivalence.

Taken as a whole, the work was purely theoretical. There were no preliminary data. The papers challenged fundamental assumptions of the field and, as such, were highly speculative and decidedly high-risk. There were no broader impacts because there were no immediate practical applications. And the work was inherently multidisciplinary, bridging mechanics, optics, and thermodynamics. Yet the work was transformative. By modern grant standards, Einstein’s work failed every criterion.

The Modern Grant Application – A Thought Experiment

Let’s imagine Einstein’s 1905 work packaged as a current NSF proposal. What would it look like, and how would it fare in peer review?

Einstein’s Hypothetical NSF Proposal

Project Title: Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light

Principal Investigator: Albert Einstein, Technical Expert Third Class, Swiss Federal Patent Office

Institution: None (individual applicant)

Requested Duration: 3 years

Budget: $150,000 (minimal – just salary support and travel to one conference)

Project Summary

This proposal challenges the fundamental assumptions underlying Newtonian mechanics and Maxwell’s electromagnetic theory. I propose that space and time are not absolute but relative, dependent on the observer’s state of motion. This requires abandoning the concept of the luminiferous ether and reconceptualizing the relationship between matter and energy. The work will be entirely theoretical, relying on thought experiments and mathematical derivation to establish a new framework for understanding physical reality.

How NSF Review Panels Would Evaluate This

Intellectual Merit: Poor

Criterion: Does the proposed activity advance knowledge and understanding?

Panel Assessment: The proposal makes extraordinary claims without adequate preliminary data. The applicant asserts that Newtonian mechanics—the foundation of physics for over 200 years—requires fundamental revision yet provides no experimental evidence supporting this radical departure.

Specific Concerns:

Lack of Preliminary Results: The proposal contains no preliminary data demonstrating the feasibility of the approach. There are no prior publications by the applicant in peer-reviewed physics journals. The applicant references his own unpublished manuscripts, which cannot be evaluated.

Methodology Insufficient: The proposed “thought experiments” do not constitute rigorous scientific methodology. How will hypotheses be tested? What experimental validation is planned? The proposal describes mathematical derivations but provides no pathway to empirical verification. Without experimental confirmation, these remain untestable speculations.

Contradicts Established Science: The proposal challenges Newton’s laws of motion and the existence of the luminiferous ether—concepts supported by centuries of successful physics. While scientific progress requires questioning assumptions, such fundamental challenges require extraordinary evidence. The applicant provides none.

Lack of Expertise: The PI works at a patent office and has no formal research position. He has no advisor supporting this work, no collaborators at research institutions, and no track record in theoretical physics. His biosketch lists a doctorate from the University of Zurich but no subsequent research appointments or publications in relevant areas.

Representative Reviewer Comments:

Reviewer 1: “While the mathematical treatment shows some sophistication, the fundamental premise—that simultaneity is relative—contradicts basic physical intuition and has no experimental support. The proposal reads more like philosophy than physics.”

Reviewer 2: “The applicant’s treatment of the photoelectric effect proposes that light behaves as discrete particles, directly contradicting Maxwell’s well-established wave theory. This is not innovation; it’s contradiction without justification.”

Reviewer 3: “I appreciate the applicant’s ambition, but this proposal is not ready for funding. I recommend the PI establish himself at a research institution, publish preliminary findings, and gather experimental evidence before requesting support for such speculative work. Perhaps a collaboration with experimentalists at a major university would strengthen future submissions.”

Broader Impacts: Very Poor

Criterion: Does the proposed activity benefit society and achieve specific societal outcomes?

Panel Assessment: The proposal fails to articulate any concrete broader impacts. The work is purely theoretical with no clear pathway to societal benefit.

Specific Concerns:

No Clear Applications: The proposal does not explain how reconceptualizing space and time would benefit society. What problems would this solve? What technologies would it enable? The PI suggests the work is “fundamental” but provides no examples of potential applications.

No Educational Component: There is no plan for training students or postdocs. The PI works alone at a patent office, with no access to students and no institutional infrastructure for education and training.

No Outreach Plan: The proposal includes no activities to communicate findings to the public or policymakers. There is no plan for broader dissemination beyond potential publication in physics journals.

Questionable Impact Timeline: Even if the proposed theories are correct, the proposal provides no timeline for practical applications. How long until these ideas translate into societal benefit? The proposal is silent on this critical question.

Representative Reviewer Comments:

Reviewer 1: “The broader impacts section is essentially non-existent. The PI states that ‘fundamental understanding of nature has intrinsic value,’ but this does not meet NSF’s requirement for concrete societal outcomes.”

Reviewer 2: “I cannot envision how this work, even if successful, would lead to practical applications within a reasonable timeframe. The proposal needs to articulate a clear pathway from theory to impact.”

Reviewer 3: “NSF has limited resources and must prioritize research with demonstrable benefits to society. This proposal does not make that case.”

Panel Summary and Recommendation

Intellectual Merit Rating: Poor
Broader Impacts Rating: Very Poor

Overall Assessment: While the panel appreciates the PI’s creativity and mathematical ability, the proposal is highly speculative, lacks preliminary data, contradicts established physical laws without sufficient justification, and fails to articulate broader impacts. The PI’s lack of institutional affiliation and research track record raises concerns about feasibility.

The panel notes that the PI appears talented and encourages resubmission after:

  1. Establishing an independent position at a research institution
  2. Publishing preliminary findings in peer-reviewed journals
  3. Developing collaborations with experimental physicists
  4. Articulating a clearer pathway to practical applications
  5. Demonstrating broader impacts through education and outreach

Recommendation: Decline

Panel Consensus: Not competitive for funding in the current cycle. The proposal would need substantial revision and preliminary results before it could be considered favorably.

The Summary Statement Einstein Would Receive

Dear Dr. Einstein,

Thank you for your submission to the National Science Foundation. Unfortunately, your proposal, “Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light,” was not recommended for funding.

The panel recognized your ambition and mathematical capabilities but identified several concerns that prevented a favorable recommendation:

– Lack of preliminary data supporting the feasibility of your approach
– Insufficient experimental validation of your theoretical claims
– Absence of institutional support and research infrastructure
– Inadequate articulation of broader impacts and societal benefits

We encourage you to address these concerns and consider resubmission in a future cycle. You may wish to establish collaborations with experimentalists and develop a clearer pathway from theory to application.

We appreciate your interest in NSF funding and wish you success in your future endeavors.

Sincerely,
NSF Program Officer

And that would be it. Einstein’s miracle year—four papers that transformed physics and laid the groundwork for quantum mechanics, nuclear energy, GPS satellites, and our modern understanding of the cosmos—would have died in peer review, never funded, never attempted.

The system would have protected us from wasting taxpayer dollars on such speculation. It would have worked exactly as designed.

The Preliminary Data Paradox

The contemporary grant review process implicitly expects foundational work in transformative science to present preliminary data, even though truly groundbreaking ideas often do not originate in tangible evidence but instead evolve through thought experiments and mathematical derivation, as Einstein’s did. This unrealistic expectation stifles innovation at its core: the process forces researchers like Einstein to abandon pure theoretical exploration and confines them to a narrow experimental framework, where they cannot freely challenge existing paradigms, even when their work, though lacking immediate empirical validation, promises to fundamentally revolutionize our understanding.

The Risk-Aversion Problem

Often, in grant reviews, I see a very junior reviewer criticize work as being too risky (dooming the proposal to failure) while I simultaneously sense their admiration for the promise and transformative nature of that work. The conservative, risk-averse mentality of modern grant review panels is deeply rooted in a scientific culture that values incremental advances over speculative leaps, a bias born of career incentives: funding decisions can make or break a professional trajectory. Reviewers are reluctant to back proposals like Einstein’s because they invite controversy and carry a high risk of failure, a reflection of how science within academic institutions has traditionally advanced by evolution rather than revolution.

The Credentials Catch-22

To secure funding in today’s scientific landscape, one typically needs an institutional affiliation and an impressive publication record reflecting strong research credentials: a catch-22 in which groundbreaking innovators with no formal backing or prior track record find it difficult to win reviewers’ trust. This requirement discriminates against fresh perspectives from people like Einstein, who worked outside established institutions and had no access to the mentorship typically deemed necessary for academic recognition, even though transformative outsiders with unconventional backgrounds have historically nourished science.

The Short-Term Timeline Problem

Einstein developed special relativity over years with no milestones, no quarterly reports, no renewals. How would he answer, “What will you accomplish in Year 2?” The funding cycles set by the major grant agencies, such as NSF’s typical three to five years for regular grants and NIH’s maximum of five years, do not accommodate the long gestation that foundational theories require. Such timelines impose an unfair constraint on researchers like Einstein, whose transformative ideas did not unfold against strict milestones but in an unconstrained fashion, showing how ill-suited this model is to truly revolutionary discovery, where linear progression is unrealistic and even counterproductive.

The Impact Statement Trap

Requirements for demonstrating immediate “broader impacts” or societal benefits pose significant obstacles to transformative research, whose implications often reach far beyond any direct application, as Einstein’s foundational work in physics exemplifies. The trap closes when reviewers, unable to perceive future benefits or fearful of the misuse of speculative science, force proposals into a mold in which immediate practical impact takes precedence over visionary scientific contribution, further marginalizing work that could unlock new dimensions in many fields.

The Interdisciplinary Gap

Current grant funding schemes are inherently disciplinary, while revolutionary proposals like Einstein’s are inherently interdisciplinary, merging concepts across multiple fields. Such work ends up excluded not only for lack of institutional affiliation but also because it challenges compartmentalized funding models that struggle with non-linear, cross-disciplinary research: a significant obstacle for proposals that cannot fit neatly within any one program’s structure or expertise.

The hypothetical funding scenarios for transformational science, as presented through the lens of Albert Einstein’s groundbreaking work, illustrate the inherent challenges faced by revolutionary ideas. To further highlight this problem, let’s take a look at other seminal discoveries that may have been overlooked or deemed unworthy of support under current grant review criteria:

Copernicus’ Heliocentric Model: In a contemporary setting, Copernicus’ heliocentric model might face skepticism due to its challenge to the widely accepted geocentric view of the universe. Lacking preliminary data and facing resistance from established religious beliefs, his proposal would likely be rejected under modern grant review criteria, despite its ultimate validation through observation and mathematical proof.

Gregor Mendel’s Pea Plant Experiments: The foundation of modern genetics was laid by Mendel’s pea plant experiments, yet his work remained largely unnoticed for decades after its initial publication. A grant reviewer of Mendel’s era would likely have dismissed his findings as too speculative and without immediate practical application, thereby overlooking the fundamental insights he provided about heredity and genetic inheritance.

mRNA Vaccines: Katalin Karikó spent decades struggling to fund mRNA therapeutic research. Too risky. Too speculative. No clear applications. Penn demoted her. NIH rejected her grants. Reviewers wanted proof that mRNA could work as a therapeutic platform, but without funding, she couldn’t generate that proof. Then COVID-19 hit, and mRNA vaccines saved millions of lives. The technology that couldn’t get funded became one of the most important medical breakthroughs of the century.

Why does all of this matter now? First, the evidence is mounting that American science is at an inflection point. The rate of truly disruptive discoveries—those that reshape fields rather than incrementally advance them—has been declining for decades, even as scientific output has grown. Both NSF and NIH leadership recognize this troubling trend.

This innovation crisis manifests in the problems we cannot solve. Cancer and Alzheimer’s have resisted decades of intensive research. AI alignment and safety remain fundamentally unsolved as we deploy increasingly powerful systems. We haven’t returned to the moon in over 50 years. In my own field of neuroscience, incremental progress has failed to produce treatments for the diseases that devastate millions of families.

These failures point to a deeper problem: we’ve optimized our funding system for incremental advances, not transformational breakthroughs. Making matters worse, we’re losing ground internationally. China’s funding models allow longer timelines and embrace higher risk. European ERC grants support more adventurous research. Many of our best researchers now weigh opportunities overseas or in industry, where they can pursue riskier ideas with greater freedom.

What Needs to Change

Fixing this requires fundamental changes at multiple levels—from how we structure programs to how we evaluate proposals to how we support unconventional researchers.

Create separate funding streams for high-risk research. NSF and NIH need more programs that emulate DARPA’s high-risk, high-reward model. These programs should be insulated from traditional grant review: no preliminary data required, longer timelines (10+ years), and peer review conducted by scientists who have themselves taken major risks and succeeded. I propose that 10 percent of each agency’s budget be set aside for “Einstein Grants”: awards that bet against the status quo. Judge proposals on originality and potential impact, not feasibility and preliminary data. Accept that most will fail, but the few that succeed will be transformational.

Protect exploratory research within traditional programs. Even standard grant programs should allow pivots when researchers discover unexpected directions. We should fund people with track records of insight, not just projects with detailed timelines. Judge proposals on the quality of thinking, not the completeness of deliverables.

Reform peer review processes. The current system needs three critical changes. First, separate review tracks for incremental versus transformational proposals—they require fundamentally different evaluation criteria. Second, don’t let a single negative review kill bold ideas; if three reviewers are enthusiastic and one is skeptical, fund it. Third, value originality over feasibility. The most transformational ideas often sound impossible until someone proves otherwise.

Support alternative career paths. We should fund more researchers outside traditional academic institutions and recognize that the best science doesn’t always emerge from R1 universities. Explicitly value interdisciplinary training and create flexible career paths that don’t punish researchers who take time to develop unconventional ideas. Track where our most creative researchers go when they leave academia—if we’re consistently losing them to industry or foreign institutions, that’s a failure signal we must heed.

Acknowledge the challenge ahead. These reforms require sustained political will across multiple administrations and consistent support from Congress. They demand patience—accepting that transformational breakthroughs can’t be scheduled or guaranteed. But the alternative is clear: we continue optimizing for incremental progress while the fundamental problems remain unsolved and our international competitors embrace the risk we’ve abandoned.

The choice before us is stark. We can optimize the current system for productivity—incremental papers, measurable progress—or we can create space for transformative discovery. We cannot have both with the same funding mechanisms.

The cost of inaction is clear: we will miss the next Einstein, fall further behind in fundamental discovery, watch science become a bureaucratic exercise, and lose what made American science into a powerhouse of discovery.

This requires action at every level. Scientists must advocate for reform and be willing to champion risky proposals. Program officers must have the courage to fund work that reviewers call too speculative. Policymakers must create new funding models and resist the temptation to demand near-term results. The public must understand that breakthrough science looks different from incremental progress—it’s messy, unpredictable, and often wrong before it’s right.

In 1905, Einstein changed our understanding of the universe while working in a patent office with no grant funding. Today, our funding system would never have let him try. We need to fix that.

Jasons ordered to close up shop

This is an interesting development. The Jasons are an elite cadre of academics who have conducted research studies for the DOD on a variety of topics over the last 60 years or so. More recently, NSF has been interested in hiring the Jasons to look at the increasingly challenging climate for international collaborations between US scientists and their foreign counterparts (something that I have written a bit about). Now comes news that the Jasons’ contract with DOD is to be terminated. Given the Administration’s views on international collaborations of any kind, I wonder whether the two things are related.

Mid-term election: science implications I

Most of the results of the mid-term election are now in and can be reviewed online. Jeff Mervis at SCIENCE has a nice summary of what the changes in the House mean, here. My own sense is that with Eddie Bernice Johnson (D-TX) as the likely chair of House Science, the tenor of that Committee’s relationship with the non-biomedical US science R&D agencies is going to improve significantly: specifically with regard to climate change, and more generally through a less adversarial oversight role. I think that’s probably a good thing.

NASA, and probably also NSF, lost a key advocate with John Culberson (R-TX) no longer chairing CJS, the appropriations subcommittee responsible for the two agencies. On the other hand, NASA will probably be able to finesse the timing of when it sends a probe to Europa, and NSF’s contacts with Chinese science may be a bit less fettered (although the view from the White House is still pretty hawkish).

Barbara Comstock’s loss in Virginia is complex. While she could be a thorn in NSF’s side (e.g., over NEON), she was extremely supportive of the DC metro area federal workforce, and this benefited science agencies that depend on expert staff to keep the wheels moving.

My sense is that NIH is still coming out of this smelling like a rose. A more conservative Senate may put the brakes on some hot-button research topics, but in general, I am pretty optimistic about the biomedical sector.

One proposal per year…

I’m hearing a lot about NSF BIO’s new policy of one proposal per year for each Principal Investigator. In general, I’m hearing complaints from more senior investigators and positive interest from younger ones. This is somewhat counter-intuitive to me, since I’d expect junior PIs to be quite anxious to get in as many proposals as possible within the window of their tenure clock. But I suppose they also see the new policy as potentially reducing competition from the old fogies. (An aside: this is the same logic of those who rejoice when NSF or NIH have funding downturns, because they see those downturns as driving out the competition.)

In any case, I’m agnostic about this. It is certainly good that NSF is discouraging the recycling of failed proposals. I find it annoying that I can be PI on only one proposal for the coming year, although that will incentivize me to make it as excellent as possible. I do think that the rather negative report on this new policy in SCIENCE was insufficiently nuanced, and I would be happy to discuss it with the reporter.

The latest from NEON

NEON, the National Ecological Observatory Network, is a major research instrumentation asset that NSF has built for scientists investigating how the environment and ecosystems interact at a continental scale. Here is the latest from Observatory Director and Chief Scientist, Sharon Collinge.

[Image: IMG_1104.jpg]

It’s really good to see that this project is coming to a successful fruition.

There’s no photo credit on the image because it’s my photo. I took it at the NEON tower at Harvard Forest in central Massachusetts. Among the many data products being produced, some of the most exciting are carbon flux measurements made using the eddy-flux methodology. These are important because they provide a window into an ecosystem as it essentially breathes, just as we do. And that has enormous implications for climate change.

The location of this particular NEON tower (one of many across the United States) is particularly interesting because there is also a very long time series (25 years or so) of such measurements produced by the Ameriflux network. If NEON can take advantage of these older measurements in a way that calibrates rigorously between the two systems, the power of continental scale (three spatial dimensions) will be enriched by a fourth dimension: time.
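One simple way to picture such a cross-calibration, sketched here as an illustrative toy with entirely made-up numbers (this is not NEON’s or Ameriflux’s actual procedure), is to regress the new instrument’s record against the legacy record over a period when both towers were measuring, then map the new record onto the legacy scale:

```python
# Toy cross-calibration sketch: all data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "true" CO2 flux during an overlap period when both towers run.
true_flux = rng.normal(-2.0, 1.5, size=500)  # e.g., umol CO2 m^-2 s^-1

# Legacy (long-running) record: true signal plus measurement noise.
legacy = true_flux + rng.normal(0.0, 0.3, size=500)

# New tower: same signal but with a hypothetical gain and offset bias.
new = 1.1 * true_flux - 0.4 + rng.normal(0.0, 0.3, size=500)

# Fit a linear map from the new record onto the legacy scale: legacy ~ a*new + b
a, b = np.polyfit(new, legacy, deg=1)
calibrated = a * new + b

# After calibration, the new series should sit much closer to the legacy one.
rmse_before = np.sqrt(np.mean((new - legacy) ** 2))
rmse_after = np.sqrt(np.mean((calibrated - legacy) ** 2))
print(f"RMSE before: {rmse_before:.2f}, after: {rmse_after:.2f}")
```

In practice the calibration would need to account for differences in instrumentation, footprint, and sampling, but the core idea is the same: use the overlap period to tie the new record rigorously to the older one.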