Three Things Aviation Teaches Us About Science Funding

A trip to Long Beach Airport reveals something deep about policy

The LA Uber driver dropped me off at the small passenger terminal at Long Beach Airport, and it took some serious trial and error with Google Maps to find the old Douglas Aircraft hangar where JetZero had set up shop. The startup’s admirable goal: completely disrupt the commercial aviation market by building a wide-body blended wing aircraft that would carry a 787 Dreamliner load of passengers across the country for half the fuel cost.

The hangar was open to the air, the ramp and runway fully active, yet the ethos inside was pure early-2000s Google—when anything seemed possible. The enormous space was filled with a full-size cabin mock-up, engineers at workstations, cinema-size screens streaming CAD imagery of the new plane sporting various well-known airline liveries, and a collection of flying scale model drones. The plane itself looked like it had flown off a science fiction set.

The engineering team was equally striking: veterans from Boeing, Embraer, and McDonnell Douglas, each bringing decades of experience from very different aviation cultures. I met one of the chief designers—the inventor of sharklets, those forked wingtips that reduce drag and improve fuel efficiency, now ubiquitous on commercial aircraft. Another engineer had come from Embraer, where he’d designed the popular 2×2 cabin configuration that passengers overwhelmingly prefer on narrow-body aircraft. Now he was tackling the challenge of designing a completely new kind of airplane cabin that would maximize comfort in a blended wing configuration.

These engineers had learned their craft in established organizations with very different approaches to decision-making, risk assessment, and innovation. The wave of consolidation in aviation—most notably Boeing’s merger with McDonnell Douglas and its subsequent shift from an engineer-driven culture to one focused on shareholder returns—had left many veteran engineers looking for something different. The 737 MAX crisis highlighted how far Boeing had drifted from its engineering roots. JetZero represented a chance to get back to what they loved: solving hard technical problems without the constraints of quarterly earnings calls and legacy infrastructure.

They were attempting something none of their former employers would touch: a radical departure from the tube-and-wing design that has dominated commercial aviation for seventy years. This raised a question that goes far beyond aircraft design: Why can radical innovation happen at a startup like JetZero but not at Boeing, Airbus, or Embraer?

This isn’t just about airplanes. It’s about how organizations—whether aircraft manufacturers or science funding agencies—decide what’s worth building, who gets to decide, and how they balance proven approaches against risky bets. Aviation and science funding face the same fundamental challenge: how to organize technical innovation.

Studying how Boeing, Airbus, and Embraer make these decisions has revealed patterns that apply directly to science funding. Here are three lessons from aviation that illuminate how research gets funded—and why some innovations happen while others never get off the ground.

Lesson 1: How Organizations Assess and Manage Technical Risk

The Aviation Pattern

Boeing, in its traditional engineer-driven culture, approached risk through data and testing. Engineers made decisions based on technical feasibility. They’d prove something worked, then seek regulatory approval. The 787 Dreamliner exemplified this: Boeing pushed carbon-composite technology to unprecedented levels while keeping the basic configuration conventional. The cultural assumption: engineers know best, prove it works, get approval, move forward.

Airbus operates from a completely different framework. As a consortium involving multiple governments, labor unions, and industry stakeholders, risk assessment includes political, economic, and social factors alongside technical ones. Workers’ councils have a voice in production decisions. Safety regulators participate earlier in the design process. The A380 Superjumbo was technically conservative—four engines, conventional configuration—but represented enormous manufacturing and political risks, requiring coordination across nations. The cultural assumption: technical decisions affect many stakeholders, and all deserve input.

Embraer’s approach reflects its position as a state development tool for Brazil (the Brazilian government retains veto power over the company’s strategic direction). They can’t compete head-to-head with Boeing and Airbus, so their risk calculus focuses on market positioning. Find niches, develop partnerships, move quickly. The E-Jet family succeeded by targeting the underserved regional market. The cultural assumption: innovation means finding white space in a market dominated by established players.

Same engineering principles. Same physics. The same goal of building safe, efficient aircraft. But fundamentally different risk assessment frameworks.

The Parallel to Science Funding

The American system, through NSF and NIH, operates remarkably like Boeing’s traditional approach. Peer review is engineer-driven decision-making translated to science. Data—preliminary results, track record—drives decisions. The central question reviewers ask is Boeing’s question: “Can this PI deliver with taxpayer money?” Merit review happens after the proposal is submitted. The system rewards incremental progress from established investigators, just as Boeing refined the 737 through successive iterations.

European research funding embeds more stakeholder involvement. Horizon Europe’s missions approach brings policymakers, industry representatives, and public voices into the priority-setting process. Risk assessment explicitly includes societal benefit and economic impact. Clinical translation gets emphasized earlier in the research pipeline. Scientists remain central but aren’t the sole decision-makers.

Emerging science powers like China take yet another approach. Strategic national priorities drive funding decisions. The question isn’t “What’s the best science?” but “Where can we compete globally?” This enables leapfrog strategies: massive focused investments in AI, quantum computing, and biotechnology designed to establish leadership in emerging fields rather than catching up in established ones. This top-down approach is now also emerging within the US science ecosystem.

For researchers, understanding which risk framework you’re operating in helps you frame proposals effectively. The American system rewards demonstrated competence and incremental progress. Other systems may value societal impact, strategic positioning, or rapid deployment. No one approach is better or worse; each reflects different cultural assumptions about how to allocate risk in technical innovation.

Lesson 2: Who Gets to Decide What Gets Built

The Aviation Pattern

At Boeing, engineers and program managers traditionally drove major decisions. Shareholders and the board provided financial constraints. Airlines shaped requirements. But core technical choices were the engineers’ responsibility. This produced technically sophisticated aircraft, sometimes disconnected from market realities. The 747-8 (the last iteration of the classic jumbo jet), for instance, was an engineer’s dream—but the market for it was lukewarm.

Airbus engages multiple stakeholders from day one. National governments in France, Germany, the UK, and Spain have seats at the table. Workers’ councils negotiate production methods. Industry partners across Europe collaborate on components. Customers get involved earlier. The result is more consensus-driven and sometimes slower, but with broader buy-in. The A350’s long development process reflected extensive consultation but yielded strong market acceptance.

Embraer’s alignment with Brazil’s government development goals sets direction, but the company maintains a partnership model with established players and responds quickly to market signals. Less hierarchical decision-making enables nimble adaptation. The attempted Embraer-Boeing partnership that ultimately fell apart illustrated starkly different decision-making speeds between the two companies.

JetZero represents something different entirely. A small team iterates rapidly. Engineers from different aviation cultures bring different assumptions. Venture capital’s risk tolerance differs fundamentally from corporate risk aversion. They can attempt radical innovation precisely because they’re not constrained by established stakeholder expectations or legacy infrastructure.

The Parallel to Science Funding

American peer review puts scientists in the decision-making seat. On its face, this seems ideal: who better to judge scientific merit than other scientists? But peer review favors known researchers using proven methods. Peers can become conservative gatekeepers. The result is high quality and incremental progress, but potentially missed breakthroughs.

European models bring more voices into the room. The European Research Council maintains scientific independence but operates within frameworks emphasizing societal missions and grand challenges. Policymakers, industry representatives, and public stakeholders help set priorities. Scientists remain central but aren’t the sole arbiters. This creates stronger connections to societal needs, though sometimes at the cost of researcher autonomy.

Directed research models flip the equation. Governments or funding agencies set priorities; researchers respond to calls for proposals. This is top-down rather than bottom-up. The advantage is alignment with national priorities. The risk is missing unexpected discoveries that don’t fit predetermined categories.

I’ve seen these differences firsthand, reviewing for both American and international funding agencies. The questions panels ask reveal cultural assumptions about whose judgment matters. American panels debate scientific rigor and PI capability. International panels I’ve participated in spend more time on broadening participation and strategic fit with national priorities.

For researchers, understanding who has a voice in funding decisions is crucial for navigating the system. American researchers working internationally need to recognize that peer review isn’t universal—other countries organize scientific decision-making to reflect different values about expertise, accountability, and public benefit.

Lesson 3: The Tension Between Incremental Improvement and Radical Innovation

The Aviation Pattern

Established aircraft manufacturers favor incremental improvement for sound reasons. The tube-and-wing design has been refined for seventy years. Every iteration builds on accumulated knowledge. Existing manufacturing facilities, pilot training programs, maintenance infrastructure, and regulatory pathways all assume this configuration. Airlines understand the operating economics. Risk is manageable, returns are predictable. The 737 MAX—an incremental update to a 1960s design—still makes economic sense despite its troubles.

JetZero’s blended wing body has been studied since the 1940s. Its technical advantages are clear: dramatic improvements in fuel efficiency, reduced noise, and potential for entirely new cabin configurations. But it requires new manufacturing processes, new pilot training, and new regulatory frameworks. The risk isn’t primarily technical—it’s organizational and systemic. There’s no clear path from prototype to profitable, scalable production. Established players, accountable to shareholders and constrained by quarterly earnings expectations, can’t justify the investment.

Startups like JetZero can attempt radical innovation because they have no legacy infrastructure to protect. They can accept higher technical risk. The venture capital model tolerates failure in ways public corporations cannot. They don’t need to satisfy existing stakeholders or worry about cannibalizing current product lines. They can focus on long-term disruption rather than next quarter’s earnings.

But we should be clear: most aviation innovation is incremental for good reason. Lives depend on safety. Capital requirements are enormous. Development timelines span 10-15 years. Regulatory burden is intense. Incremental improvement has delivered extraordinary gains—modern aircraft are unimaginably more efficient, safe, and capable than those of fifty years ago.

The Parallel to Science Funding

Science funding faces the same tension. Established PIs using proven methods dominate for sound reasons. Track records reduce risk. Incremental progress is predictable, publishable, and fundable. Infrastructure investments favor established approaches—if your university has a state-of-the-art imaging facility, proposals that use it have an advantage. Peer reviewers understand and can evaluate proven methods. The “preliminary data” requirement inherently favors ongoing work over genuinely new directions. The system is designed to minimize taxpayer waste through careful risk management.

Truly novel approaches struggle in this environment. High-risk/high-reward programs exist but represent a tiny fraction of overall funding. Early career investigators face a chicken-and-egg problem: “How will you do this?” reviewers ask, but gathering preliminary data requires resources they don’t yet have. Reviewers are more comfortable funding known quantities. Paradigm shifts are rare and unpredictable—there’s no clear “return on investment” for genuinely radical ideas.

Consider the BRAIN Initiative. The vision was bold: transform neuroscience through new technologies and approaches. But implementation favored established neuroscientists with proven track records. The system worked as designed: minimizing risk by funding demonstrated competence. As I’ve written earlier, BRAIN fell short of its delivery goal: curing brain diseases. ARPA-H was explicitly created to escape the incremental trap, but it’s still finding its model. The European Research Council’s advanced grants show somewhat higher tolerance for risk, but even there, track record matters enormously.

For researchers pursuing truly novel approaches, it’s crucial to understand you’re working against system design, not just reviewer bias. The system is optimized for reliable incremental progress, not moonshots. Radical innovation in science, like radical innovation in aviation, may require different funding models—something more like venture capital, tolerant of high failure rates in pursuit of occasional transformative breakthroughs.

This raises a deeper question: Should science funding favor incremental or radical innovation? Or do we need both, in different proportions? Aviation supports both Boeing’s incremental refinements and JetZero’s radical rethinking. Should science funding do the same—and if so, in what balance?

What This Means for Science Policy

These aviation patterns reveal a fundamental feature of how societies organize technical innovation. The choices Boeing, Airbus, and Embraer make about risk assessment, decision-making authority, and the balance between incremental and radical innovation aren’t purely business decisions. They’re cultural choices embedded in what Sheila Jasanoff calls civic epistemologies—different assumptions about how knowledge should be produced, who should decide, and what goals matter most.

American science funding has historically reflected American cultural values: individual merit and achievement drive review by scientific peers. Data-driven decision-making shows up in preliminary data requirements. Risk minimization operates through proven track records. Incremental progress represents the reliable path. This isn’t accidental—it’s deeply cultural.

Other countries organize differently because they value different things. European systems emphasize societal benefit and stakeholder input. Asian systems prioritize strategic national development goals. Different countries strike different balances between discovery and application, between researcher autonomy and national priorities, between tolerance for failure and demands for accountability.

For all researchers, understanding these cultural patterns helps you work more effectively within the system. Know what the system optimizes for—reliable incremental progress from established investigators. If you’re pursuing radical innovation, recognize you’re working against the grain. International collaborations require understanding that your partners may operate within fundamentally different funding cultures with different assumptions about what science is for and how it should be organized.

For science policy, we should be explicit about what our funding systems optimize for. There’s no “best” system—only different tradeoffs reflecting different values. Maybe we need multiple models, as aviation has both Boeing and JetZero. Comparing systems reveals assumptions we don’t normally question.

In future posts, I’ll explore specific country comparisons: How does the European Research Council actually work? What can we learn from how other countries fund AI research? How do different countries handle the tension between researcher autonomy and national priorities?

A Final Thought

Visiting JetZero and seeing engineers from Boeing, Embraer, and McDonnell Douglas collaborate on something radical that couldn’t happen within their former companies crystallized something I’d been observing in science policy work: innovation doesn’t just require good ideas and talented people. It requires organizational structures and cultural assumptions that allow certain kinds of ideas to be pursued.

The JetZero engineers didn’t suddenly become more creative or capable. They remained the same engineers who’d designed sharklets at Boeing or cabin configurations at Embraer. What changed was the organizational context—the risk tolerance, decision-making authority, and freedom from legacy constraints. That shift in context enabled them to attempt what had been impossible in their former roles.

Science funding works the same way. Researchers operating within NSF’s peer review system are no less creative than those pursuing radical ideas through ARPA or venture-backed biotechs. But the organizational context shapes which ideas can be pursued and which innovations are possible.

Understanding how different countries organize technical innovation—whether building aircraft or funding research—helps us see our own system more clearly. And maybe, just maybe, it helps us imagine how we might do things differently.

What examples have you seen where organizational culture shaped what research got pursued? Have you experienced different funding cultures working internationally? Share in the comments.

A Grant Reviewer’s New Year Advice to Proposers: What I’d Tell My Younger Self

Happy New Year! Below is Science Policy Insider’s first posting of 2026:

We were reviewing a proposal that included gorgeous preliminary data: confocal microscopy images from what was, at the time, cutting-edge two-channel laser-scanning technology. The images were crisply in focus and colored in green and red to show the locations of different sub-cellular fluorescent molecular probes. On the strength of those images alone, it felt like an extraordinary grant proposal, never mind that there was no working hypothesis and no technical consideration of autofluorescence, the phenomenon in which the cell’s own biomolecules produce a signal that can be confused with the signals coming from the two probes.

The panel discussion revealed the problem. Some reviewers were ready to fund based solely on the images. Others raised the autofluorescence issue, the missing hypothesis. But even the skeptics prefaced their concerns with “The data are beautiful, but…” Those pictures had done their job—they made weak science look compelling.

That’s when I learned: awesome preliminary data can cloud objectivity. After reviewing thousands of grants at NIH and NSF over three decades, I’ve seen it happen repeatedly.

So, as you plan your 2026 submissions, here’s what I wish I’d known from the start—lessons that might save you the same learning curve.

Lesson 1: Clarity Beats Cleverness

In my early days, I thought impressive vocabulary and complex sentences demonstrated sophistication. Surely reviewers would appreciate nuanced, academic writing that showcased the full complexity of my thinking. I was wrong.

Clarity wins every time. Reviewers are overwhelmed, often reading up to 15 proposals a week while managing their own labs, teaching loads, and grant deadlines. Simple, direct writing isn’t dumbing down your science—it’s respecting your reviewers’ cognitive bandwidth and making your research accessible to the non-specialists who might be reading it.

Several decades ago, I learned the “grandmother” test. If you can’t explain your research clearly and simply to someone outside your immediate field (like maybe your grandmom), it’s probably not clear enough for a review panel where only one or two people are genuine specialists in your exact area.

Here’s my practical advice: Read your overall proposal goals (or aims) out loud. If you stumble over your own sentences, reviewers will too. Remember that if you can’t explain it, you probably don’t understand it well enough yourself. Make the first paragraph of each section a roadmap for what follows. And use jargon only when necessary—when there’s genuinely no more straightforward way to say it.

I once reviewed a proposal with brilliant research that was nearly incomprehensible to anyone outside the PI’s subspecialty. The same panel reviewed another proposal that explained equally complex ideas with straightforward language. Guess which one got funded?

Lesson 2: Preliminary Data Is About Trust, Not Volume

Early in my career, I believed more data equaled a stronger proposal. Fill those pages with figures! Show them everything you’ve got! Every additional graph strengthens your case, right?

Wrong. It’s the quality of the data that counts.

Here’s what preliminary data does: it answers the question “Can this PI execute what they’re proposing?” It’s not about impressing reviewers with how much you’ve already accomplished. It’s about building trust that you can deliver on your promises. And here’s the thing that surprised me most: including the wrong preliminary data raises more questions than having no data at all.

Show that you can execute the specific methods you’re proposing. Demonstrate the feasibility of your key innovation—the part that’s novel and risky. If you don’t have the correct preliminary data yet, address that gap head-on rather than papering over it with tangentially related work.

The deeper insight here is that reviewers are assessing risk. They’re not asking, “Do you have data?” They’re asking, “Do I trust you can deliver what you’re promising with taxpayer money?” Those are fundamentally different questions.

Lesson 3: Broader Impacts Require Situational Awareness

I initially treated broader impacts as a required checkbox. Standard language about societal benefits and outreach seemed perfectly adequate—everyone writes similar things, right? Just describe some plausible activities and move on.

Reviewers can spot boilerplate instantly. We’ve read hundreds of proposals with identical broader impacts sections, and they all blur together into meaningless noise.

The best broader impacts sections connect to who you are and what you’re genuinely already doing in ways that align with the nation’s best interests. Integration with your research and your actual life matters far more than ambitious plans that sound good on paper.

Scaling is essential: build on what you’re already doing rather than inventing entirely new programs you’ll never have time to implement. Be specific rather than grandiose. If you already mentor undergrads in your lab, explain how this project will train them in new techniques. If you have existing connections to a local K-12 program, describe how you’ll use them—don’t manufacture new partnerships from whole cloth.

Here’s the tell: “We will develop outreach materials” raises immediate skepticism. But “I teach a summer workshop at Lincoln High School’s science program—this research will provide three new hands-on modules on climate modeling” builds trust. One is a vague promise. The other is a concrete plan rooted in existing relationships.

Lesson 4: Budget Justification Actually Matters

I used to think budgets were purely administrative. Surely reviewers barely glanced at them—they cared about the science, not the accounting, right? Standard rates and percentages seemed perfectly sufficient.

Reviewers absolutely read budget justifications. We look for alignment between what you’re proposing to do and what you’re proposing to spend. Misalignment raises immediate red flags. And here’s something that surprised many junior faculty I’ve mentored: over-budgeting is just as problematic as under-budgeting.

Every major budget line should connect clearly to a specific aim in your proposal. Justify why you need that piece of equipment—what will it do that your existing infrastructure can’t? Personnel effort should match the work described. If you’re requesting 50% effort for a postdoc, reviewers should see that postdoc playing a central role in half your aims.

Red flags I’ve seen repeatedly: proposing ambitious international fieldwork with minimal travel budget or requesting full postdoc salary when the proposal’s narrative gives that postdoc almost nothing to do. These inconsistencies make reviewers wonder whether you’ve really thought through how the work will get done.

Lesson 5: How You Handle Weaknesses Reveals Everything

I once believed you should never acknowledge limitations. Defend every choice—project confidence at all costs. Any admission of weakness would be seized upon by reviewers looking for reasons to reject your proposal.

This might be the lesson I wish I’d learned earliest. Reviewers already see the weaknesses in your proposal. Pretending problems don’t exist destroys your credibility far more than the limitations themselves.

How you address limitations reveals your scientific maturity. Acknowledge real problems early and directly. Then explain your mitigation strategy: “If plan A fails, we will try plan B because…” Show you’ve thought through alternatives and have realistic contingency plans.

This lesson became even clearer when I started seeing resubmissions. The response letter matters as much as the revised proposal itself. A defensive tone—arguing with reviewers, insisting they misunderstood you—equals instant rejection. But a response that says “We appreciate the panel’s insights. We have substantially revised Section 2.3 to address concerns about statistical power. New preliminary data (Figure 3) demonstrates feasibility of the alternative approach” shows growth and responsiveness.

Panels respect PIs who demonstrate scientific judgment far more than those who claim perfection. We know perfect proposals don’t exist. We want to see that you can identify problems and solve them.

Lesson 6: The Human Element of Review

I believed grant review was a purely objective, data-driven process where careful reviewers gave equal attention to every proposal, systematically evaluating each against clear criteria.

Reviewers are human. They’re tired. They’re distracted. They have bad days. Panel dynamics matter—who speaks up first, who’s respected, who’s combative. Your proposal isn’t evaluated in isolation; it competes with the others in that review cycle, and comparison effects are real even if program officers say they shouldn’t be.

Here’s the practical reality: reviewers read proposals at night, on weekends, while traveling. They’re squeezing this work into already overwhelming schedules. If they’re confused by page two, they may never fully engage with your brilliant idea on page eight. Your first page matters disproportionately.

Make your innovation immediately clear. Give reviewers ammunition to advocate for you in panel discussions—clear summary statements they can quote, compelling preliminary data they can point to. The discussant’s job is to convince other panelists to fund your work. Make their job easy.

This isn’t unfair. It’s simply reality. Design your proposal for the actual conditions of review, not the idealized version where everyone reads every word with perfect attention on a quiet Sunday morning with fresh coffee.

Lesson 7: Resubmissions Are About Demonstrating You Listened

I initially thought resubmissions were second chances to explain myself better. The reviewers had clearly misunderstood my brilliant idea. Now I’d show them what I really meant, with more precise explanations and stronger arguments.

Resubmissions are about showing scientific growth. They demonstrate whether you can receive criticism, integrate feedback, and improve your work. The reviewers weren’t wrong—or at least, whether they were wrong doesn’t matter. What matters is whether you can respond constructively to their concerns.

Start your response letter with genuine gratitude, not perfunctory politeness. Group your responses to criticisms thematically rather than addressing them line by line, which makes you look defensive. Show clearly what you changed and where reviewers can find those changes in the revised proposal. If you genuinely disagree with a criticism, do so respectfully and support your position with data, not rhetoric.

The successful resubmissions I’ve seen follow a pattern: acknowledge the feedback, explain the changes, demonstrate improvement with new evidence. The unsuccessful ones argue, defend, and explain why the reviewers didn’t understand the first time.

What These Lessons Reveal About Science Funding

These aren’t just tips for better grant writing. They reveal something more profound about how American science funding works. As I’ve written before, the current system prioritizes risk mitigation over bold ideas. It values clear communication and demonstrated competence over theoretical brilliance. It rewards incremental progress from established investigators more readily than moonshots from newcomers.

I’m not criticizing (this time)—it’s how the system is designed. When you’re allocating hundreds of millions in taxpayer dollars, trust and deliverability matter. Understanding this cultural logic helps you work within the system more effectively.

And it raises an interesting question I’m exploring in my new work on international science policy: Do other countries fund science differently because they assess risk differently? Do European or Asian funding systems embed different assumptions about what science should accomplish? That’s a topic for future posts.

These lessons came from mistakes, from failed proposals, from thousands of hours in review rooms watching good science get rejected for preventable reasons. I wish I’d understood them earlier in my career. I’m offering them now to help you avoid the same learning curve.

What hard-won lessons have you learned about grant writing? What advice would you give your younger self? Share in the comments.

As you prepare your 2026 submissions, remember there’s a human being on the other side of your proposal. Make their job easier. Help them advocate for your science. Give them reasons to say yes.

Why I’m Taking Science Policy Insider International

A View from Abroad

Mid-competition week for a panel reviewing proposals on genes and cells: the fifteen-minute clock starts, and the five of us assigned to this proposal dive in. We consider factors such as whether the proposer is early in their career and how the COVID pandemic might have affected their laboratory’s productivity. We carefully assess their plan for mentoring trainees, including their previous track record and plans. The excellence of the proposer is evaluated, not by raw bibliometric measures such as H-index, but by substantive contributions to the field. And we take a very close look at the proposal itself—not only in terms of intellectual merit, but also to make sure that it is distinct from the investigator’s other supported science. Is this an NIH study section? Nope. Is this an NSF panel? Again, no. This is a peer review for another G7 nation, to be unnamed in this post.

What struck me wasn’t that this country did peer review differently than NSF or NIH. What struck me was how similar it was. Same careful attention to mentoring. Same suspicion of bibliometrics. Same concern about overlaps with existing funding. I could have been in any panel room I’d sat in over three decades in Washington. And that’s when it hit me: among the wealthy nations that fund science, we’re all running variations on the same basic system. We argue about details – overhead rates, review criteria, funding durations – but we share fundamental assumptions about how science should work.

Or so I thought. Until I stepped outside the world of science funding and began looking at how other countries organize technical knowledge. My second book project examines how Boeing, Airbus, and Embraer design commercial aircraft – and that research has revealed something I’d missed in all my years in government and academia.

Civic Epistemologies

The scholar Sheila Jasanoff has a concept called ‘civic epistemologies’ – the idea that different societies have fundamentally different ways of producing and validating knowledge. It’s not about organizational charts or funding mechanisms. It’s deeper than that. It’s about cultural assumptions: What questions are worth asking? What counts as evidence? Who gets to decide? How do we measure success?

When Americans design an airplane, we assume that technical decisions should be made by engineers based on data, with regulators checking compliance after the fact. Europeans embed social and labor concerns directly into the design process – workers’ councils have a say in production methods, and safety regulators are involved earlier. Brazilians organize around different assumptions entirely, shaped by their position as a developing economy entering a market dominated by established players.

Same engineering principles. Same physics. The same goal of building a safe, efficient aircraft. But fundamentally different answers to the question: Who should decide how this gets done?

I saw the same pattern as a working neuroscientist. American neuroscience tends to bet on fundamental discovery: map the circuits, understand the mechanisms, and the applications will follow. Recording sea slug neurons during my training embodied this approach: study simpler systems, find conserved principles, apply them to humans. Europeans start closer to the clinic, organizing major research programs around disease categories and patient needs. Japanese neuroscience builds unusually tight links between academic labs and industry, with electronics and engineering companies actively embedded in research networks and clear paths toward commercialization. Same neurons, same biology, but different assumptions about how knowledge should flow from laboratory to society.

My new book project

So, where is this taking me? The short answer is that I’m working on a new book about how American, European, and Brazilian cultures (think Boeing, Airbus, and Embraer) shape commercial aviation technology. Why planes? In my lifetime, I experienced the jet revolution firsthand: I started on the Comet, went on to the Pan Am 707s, and these days still enjoy the grandeur of the big twin-aisle giants that connect us across oceans.

In the new book, I’m interested in comparing technical cultures through the lens of those jets (as technical artifacts). But beyond my lifetime fascination with aviation, the same questions apply to science policy itself: why do different countries organize technological knowledge differently? What can we learn from how other G7 nations fund science? And what cultural assumptions shape what gets built (airplanes OR research programs)?

Science Policy Insider Expands Its Scope

This brings me back to Science Policy Insider and where we’re headed. We are broadening our remit. In the future, we’ll expand to include a comparative analysis of research funding systems—both public agencies and private industry—drawing on insights from my aviation research. We’ll examine how different countries handle current challenges: AI governance, climate research, and research security.

On the practical side, we’ll provide insights for American researchers who work internationally or plan to—from navigating different grant systems to understanding why collaborations succeed or fail across cultural boundaries. And above all, we’ll consider what viewing American science policy from the outside reveals about our own system.

We’ll maintain our bi-weekly publishing schedule.

Science Policy Insider started with my promise to explain how American science policy really works from someone who was inside the system. Now we’re also going to explore what it looks like from the outside and what that perspective reveals about our own system.

I continue to invite readers’ questions, now not only about how things work in our own American discovery machine, but also about international science policy.

Grades posted: another semester in the books

Fall semester 2025 is now complete. Both my undergraduates and my grad students performed very well. When we begin this all again in the spring, I’ll be teaching my favorite class on space governance, using Kim Stanley Robinson’s Mars Trilogy as our text. And we’ll be teaching our crisis management class again — this time to a new cohort of Schar School master’s students.

But for now, it’s time for the winter break. I’m 2/3 of the way through my reread of Moby Dick. And James Joyce’s Ulysses is on deck. I’ve got a ton of grant proposals headed my way to review for the Canadian NSERC to relieve the excess literary fiction.

How Will You Know You’ve Succeeded? A BRAIN story

August 2008: a summer day in Mountain View, California. The previous year, the Krasnow Institute for Advanced Study, which I was leading at George Mason University, had developed a proposal to invest tons of money in figuring out how mind emerges from brains, and now I had to make the case that it deserved to be a centerpiece of a new administration’s science agenda. Three billion dollars is not a small ask, especially in the context of the 2008 financial crisis, which was then accelerating.

Before this moment, the project had evolved organically: a kickoff meeting at the Krasnow Institute near D.C., a joint manifesto published in Science Magazine, and then follow-on events in Des Moines, Berlin and Singapore to emphasize the broader aspects of such a large neuroscience collaboration. There even had been a radio interview with Oprah.

When I flew out to Google’s Mountain View headquarters in August 2008 for the SciFoo conference, I didn’t expect to be defending the future of neuroscience over lunch. But the individual running the science transition for the Obama presidential campaign had summoned me for what he described as a “simple” conversation: defend our idea for investing $3 billion over the next decade in neuroscience, with the audacious goal of explaining how “mind” emerges from “brains.” It was not the kind of meeting I was ready for.

I was nervous. As an institute director, I’d pitched for million-dollar checks. This was a whole new scale of fundraising for me. And though California was my native state, I’d never gone beyond being a student body president out there. Google’s headquarters in the summer of 2008 was an altar to Silicon Valley power.

SciFoo itself was still in its infancy then – the whole “unconference” concept felt radical and exciting, a fitting backdrop for pitching transformational science. But the Obama campaign wasn’t there for the unconventional meeting format. Google was a convenient meeting spot. And they wanted conventional answers.

I thought I made a compelling case: this investment could improve the lives of millions of patients with brain diseases. Neuroscience was on the verge of delivering cures. (I was wrong about that, but I believed it at the time.) The tools were ready. The knowledge was accumulating. We just needed the resources to put it all together.

Then I was asked the question that killed my pitch: “How will we know we have succeeded? What’s the equivalent of Kennedy’s moon landing – a clear milestone that tells us we’ve achieved what we set out to do?” You could see those astronauts come down the ladder of the lunar module. You could see that American flag on the moon. No such prospects with a large neuroscience initiative.

I had no answer.

I fumbled through some vague statements about understanding neural circuits and developing new therapies, but even as the words left my mouth, I knew they were inadequate. The moon landing worked as a political and scientific goal because it was binary: either we put a man on the moon or we didn’t. Either the flag was planted or it wasn’t.

But “explaining how mind emerges from brains”? When would we know we’d done that? What would success even look like?

The lunch ended politely. I flew back to DC convinced it had been an utter failure.

But that wasn’t the end of it. Five years later, at the beginning of Obama’s second presidential term, we began to hear news of a large initiative driven by the White House called the Brain Activity Map or BAM for short. The idea was to comprehensively map the functional activity of brains at high spatial and temporal resolution beyond that available at the time. It was like my original pitch both in scale (dollars) and in the notion that it was important to understand how mind emerges from brain function. The goal for the new BAM project was to be able to map between the activity and the brain’s emergent “mind”-like behavior, both in the healthy and pathological cases. But the BAM project trial balloon, even coming from the White House, was not an immediate slam dunk.

There was immediate push-back from large segments of the neuroscience community that felt excluded from BAM, but with a quick top-down recalibration from the White House Office of Science and Technology Policy and a whole of government approach that included multiple science agencies, BRAIN (Brain Research through Advancing Innovative Neurotechnologies) was born in April of 2013.

A year later, in April of 2014, I was approached to head Biological Sciences at the US National Science Foundation. When I took the job that October, I was leading a directorate with a budget of $750 million annually that supported research across the full spectrum of the life sciences – from molecular biology to ecosystems. I would also serve as NSF’s co-lead for the Obama Administration’s BRAIN Initiative—an acknowledgement of the failed pitch in Mountain View, I guess.

October 2014: sworn in and meeting with my senior management team–now here I was, a little more than a year into BRAIN. I had gotten what I’d asked for in Mountain View. Sort of. We had the funding, we had the talent, we had review panels evaluating hundreds of proposals. But I kept thinking about the question—the one I couldn’t answer then and still struggled with now. We had built this entire apparatus for funding transformational research, yet we were asking reviewers to apply the same criteria that would have rejected Einstein’s miracle year. How do you evaluate research when you can’t articulate clear success metrics? How do you fund work that challenges fundamental assumptions when your review criteria reward preliminary data and well-defined hypotheses?

Several months later, testifying before Congress about the BRAIN project, I remember fumbling again at the direct question of when we would deliver cures for dreaded brain diseases like ALS and schizophrenia. I punted: that was an NIH problem (even though the original pitch had been about delivering revolutionary treatments). At NSF, we were about understanding the healthy brain. In fact, how could you ever understand brain disease without a deep comprehension of the non-pathological condition?

It was a reasonable bureaucratic answer. NIH does disease; NSF does basic science. Clean jurisdictional boundaries. But sitting there in that hearing room, I realized I was falling into the same trap that had seemingly doomed our pitch in 2008: asked for a delivery date and a clear criterion for success, I was waffling. Only this time, I was the agent for the funder: the American taxpayer.

The truth was uncomfortable. We had launched an initiative explicitly designed to support transformational research – research that would “show us how individual brain cells and complex neural circuits interact” in ways we couldn’t yet imagine. But when it came time to evaluate proposals, we fell back on the same criteria that favored incrementalism: preliminary data, clear hypotheses, established track records, well-defined deliverables. We were asking Einstein for preliminary data on special relativity.

And we weren’t unique. This was the system. This was how peer review worked across federal science funding. We had built an elaborate apparatus designed to be fair, objective, and accountable to Congress and taxpayers. What we had built was a machine that systematically filtered out the kind of work that might transform neuroscience.

All of this was years before the “neuroscience winter,” when massive scientific misconduct was unearthed in neurodegenerative disease research, including Alzheimer’s. But the modus operandi of BRAIN foreshadowed it.

Starting in 2022, a series of investigations revealed that some of the most influential research on Alzheimer’s disease—work that had shaped the field for nearly two decades and guided billions in research funding—was built on fabricated data. Images had been manipulated. Results had been doctored. And this work had sailed through peer review at top journals, had been cited thousands of times, and had successfully competed for grant funding year after year. The amyloid hypothesis, which this fraudulent research had bolstered, had become scientific orthodoxy not because the evidence was overwhelming, but because it fit neatly into the kind of clear, well-defined research program that review panels knew how to evaluate.

Here was the other side of the Einstein problem that I’ve mentioned in previous posts. The same system that would have rejected Einstein’s 1905 papers for lack of preliminary data and institutional support had enthusiastically funded research that looked rigorous but was fabricated. Because the fraudulent work had all the elements that peer review rewards: clear hypotheses, preliminary data, incremental progress building on established findings, well-defined success metrics. It looked like good science. It checked all the boxes.

Meanwhile, genuinely transformational work—the kind that challenges fundamental assumptions, that crosses disciplinary boundaries, that can’t provide preliminary data because the questions are too new—struggles to get funded. Not because reviewers are incompetent or malicious, but because we’ve built a system that is literally optimized to make these mistakes. We’ve created an apparatus that rewards the appearance of rigor over actual discovery, that favors consensus over challenge, that funds incrementalism and filters out transformation.

So, what’s the real function of peer review? It’s supposed to be about identifying transformative research, but I don’t think that’s its real purpose. To my mind, the real purpose of the peer review panels at NSF and the study sections at NIH is to make inherently flawed funding decisions defensible—both to Congress and the American taxpayer. The criteria (intellectual merit and broader impacts at NSF) exist to make awarding grant dollars auditable and fair-seeming, not to identify breakthrough work.

But honestly, there’s a real dilemma here: if you gave out NSF’s annual budget based on a program officer’s feeling that “this seems promising”, you’d face legitimate questions about cronyism, waste and arbitrary decision-making. The current system’s flaws aren’t bad policy accidents; they are the price we pay for other values we also care about.

So, did the BRAIN Initiative deliver on that pitch I made in Mountain View in 2008? Did we figure out how ‘mind’ emerges from ‘brains’? In retrospect, I remain super impressed by NSF’s NeuroNex program: we got impressive technology – better ways to record from more neurons, new imaging techniques, sophisticated tools. We trained a generation of neuroscientists. But that foundational question – the one that made the political case, the one that justified the investment – we’re not meaningfully closer to answering it. We made incremental progress on questions we already knew how to ask. Which is exactly what peer review is designed to deliver. Oh, and one other thing that was produced: NIH’s parent agency, the Department of Health and Human Services, got a trademark issued on the name of the initiative itself, BRAIN.

I spent four years as NSF’s co-lead on BRAIN trying to make transformational neuroscience happen within this system. I believed in it. I still believe in federal science funding. But I’ve stopped pretending the tension doesn’t exist. The very structure that makes BRAIN funding defensible to Congress made the transformational science we promised nearly impossible to deliver.

That failed pitch at Google’s headquarters in 2008? Turns out the question was spot on; we just never answered it.

Why Transformational Science Can’t Get Funded: The Einstein Problem

Proposal declined. Insufficient institutional support. No preliminary data. Applicant lacks relevant expertise—they work in a patent office, not a research laboratory. The proposed research is too speculative and challenges well-established physical laws without adequate justification. The principal investigator is 26 years old and has no prior experience in physics.

This would have been the fate of Albert Einstein in 1905, had the NSF existed as it does today. Even with grant calls requesting ‘transformative ideas,’ an Einstein proposal would have been rejected outright. And yet 1905 has been called Einstein’s miracle year. Yes, he was a patent clerk working in Bern, Switzerland, without a university affiliation. He had no access to a laboratory or equipment. He worked in isolation on evenings and weekends and was unknown in the physics community. Yet, despite those disadvantages, he produced four revolutionary papers on the Photoelectric Effect, Brownian motion, Special Relativity, and the famous E=mc² energy-mass equivalence.

Taken as a whole, the work was purely theoretical. There were no preliminary data. The papers challenged fundamental assumptions of the field and, as such, were highly speculative and definitively high-risk. There were no broader impacts because there were no immediate practical applications. And the work was inherently multidisciplinary, bridging mechanics, optics, and thermodynamics. Yet, the work was transformative. By modern grant standards, Einstein’s work failed every criterion.

The Modern Grant Application – A Thought Experiment

Let’s imagine Einstein’s 1905 work packaged as a current NSF proposal. What would it look like, and how would it fare in peer review?

Einstein’s Hypothetical NSF Proposal

Project Title: Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light

Principal Investigator: Albert Einstein, Technical Expert Third Class, Swiss Federal Patent Office

Institution: None (individual applicant)

Requested Duration: 3 years

Budget: $150,000 (minimal – just salary support and travel to one conference)

Project Summary

This proposal challenges the fundamental assumptions underlying Newtonian mechanics and Maxwell’s electromagnetic theory. I propose that space and time are not absolute but relative, dependent on the observer’s state of motion. This requires abandoning the concept of the luminiferous ether and reconceptualizing the relationship between matter and energy. The work will be entirely theoretical, relying on thought experiments and mathematical derivation to establish a new framework for understanding physical reality.

How NSF Review Panels Would Evaluate This

Intellectual Merit: Poor

Criterion: Does the proposed activity advance knowledge and understanding?

Panel Assessment: The proposal makes extraordinary claims without adequate preliminary data. The applicant asserts that Newtonian mechanics—the foundation of physics for over 200 years—requires fundamental revision yet provides no experimental evidence supporting this radical departure.

Specific Concerns:

Lack of Preliminary Results: The proposal contains no preliminary data demonstrating the feasibility of the approach. There are no prior publications by the applicant in peer-reviewed physics journals. The applicant references his own unpublished manuscripts, which cannot be evaluated.

Methodology Insufficient: The proposed “thought experiments” do not constitute rigorous scientific methodology. How will hypotheses be tested? What experimental validation is planned? The proposal describes mathematical derivations but provides no pathway to empirical verification. Without experimental confirmation, these remain untestable speculations.

Contradicts Established Science: The proposal challenges Newton’s laws of motion and the existence of the luminiferous ether—concepts supported by centuries of successful physics. While scientific progress requires questioning assumptions, such fundamental challenges require extraordinary evidence. The applicant provides none.

Lack of Expertise: The PI works at a patent office and has no formal research position. He has no advisor supporting this work, no collaborators at research institutions, and no track record in theoretical physics. His biosketch lists a doctorate from the University of Zurich but no subsequent research appointments or publications in relevant areas.

Representative Reviewer Comments:

Reviewer 1: “While the mathematical treatment shows some sophistication, the fundamental premise—that simultaneity is relative—contradicts basic physical intuition and has no experimental support. The proposal reads more like philosophy than physics.”

Reviewer 2: “The applicant’s treatment of the photoelectric effect proposes that light behaves as discrete particles, directly contradicting Maxwell’s well-established wave theory. This is not innovation; it’s contradiction without justification.”

Reviewer 3: “I appreciate the applicant’s ambition, but this proposal is not ready for funding. I recommend the PI establish himself at a research institution, publish preliminary findings, and gather experimental evidence before requesting support for such speculative work. Perhaps a collaboration with experimentalists at a major university would strengthen future submissions.”

Broader Impacts: Very Poor

Criterion: Does the proposed activity benefit society and achieve specific societal outcomes?

Panel Assessment: The proposal fails to articulate any concrete broader impacts. The work is purely theoretical with no clear pathway to societal benefit.

Specific Concerns:

No Clear Applications: The proposal does not explain how reconceptualizing space and time would benefit society. What problems would this solve? What technologies would it enable? The PI suggests the work is “fundamental” but provides no examples of potential applications.

No Educational Component: There is no plan for training students or postdocs. The PI works alone at a patent office, with no access to students and no institutional infrastructure for education and training.

No Outreach Plan: The proposal includes no activities to communicate findings to the public or policymakers. There is no plan for broader dissemination beyond potential publication in physics journals.

Questionable Impact Timeline: Even if the proposed theories are correct, the proposal provides no timeline for practical applications. How long until these ideas translate into societal benefit? The proposal is silent on this critical question.

Representative Reviewer Comments:

Reviewer 1: “The broader impacts section is essentially non-existent. The PI states that ‘fundamental understanding of nature has intrinsic value,’ but this does not meet NSF’s requirement for concrete societal outcomes.”

Reviewer 2: “I cannot envision how this work, even if successful, would lead to practical applications within a reasonable timeframe. The proposal needs to articulate a clear pathway from theory to impact.”

Reviewer 3: “NSF has limited resources and must prioritize research with demonstrable benefits to society. This proposal does not make that case.”

Panel Summary and Recommendation

Intellectual Merit Rating: Poor
Broader Impacts Rating: Very Poor

Overall Assessment: While the panel appreciates the PI’s creativity and mathematical ability, the proposal is highly speculative, lacks preliminary data, contradicts established physical laws without sufficient justification, and fails to articulate broader impacts. The PI’s lack of institutional affiliation and research track record raises concerns about feasibility.

The panel notes that the PI appears talented and encourages resubmission after:

  1. Establishing an independent position at a research institution
  2. Publishing preliminary findings in peer-reviewed journals
  3. Developing collaborations with experimental physicists
  4. Articulating a clearer pathway to practical applications
  5. Demonstrating broader impacts through education and outreach

Recommendation: Decline

Panel Consensus: Not competitive for funding in the current cycle. The proposal would need substantial revision and preliminary results before it could be considered favorably.

The Summary Statement Einstein Would Receive

Dear Dr. Einstein,

Thank you for your submission to the National Science Foundation. Unfortunately, your proposal, “Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light,” was not recommended for funding.

The panel recognized your ambition and mathematical capabilities but identified several concerns that prevented a favorable recommendation:

– Lack of preliminary data supporting the feasibility of your approach
– Insufficient experimental validation of your theoretical claims
– Absence of institutional support and research infrastructure
– Inadequate articulation of broader impacts and societal benefits

We encourage you to address these concerns and consider resubmission in a future cycle. You may wish to establish collaborations with experimentalists and develop a clearer pathway from theory to application.

We appreciate your interest in NSF funding and wish you success in your future endeavors.

Sincerely,
NSF Program Officer

And that would be it. Einstein’s miracle year—four papers that transformed physics and laid the groundwork for quantum mechanics, nuclear energy, GPS satellites, and our modern understanding of the cosmos—would have died in peer review, never funded, never attempted.

The system would have protected us from wasting taxpayer dollars on such speculation. It would have worked exactly as designed.

The Preliminary Data Paradox

Contemporary grant review implicitly expects even foundational, transformative work to arrive with preliminary data, despite the fact that truly groundbreaking ideas often do not begin with tangible evidence but instead develop through thought experiments and mathematical derivations, as Einstein’s did. This expectation stifles innovation at its core: it pushes researchers like Einstein to abandon pure theoretical exploration and confines them to a narrow experimental framework in which they cannot freely challenge existing paradigms, even when their work, though lacking immediate empirical validation, promises to revolutionize our fundamental understanding.

The Risk-Aversion Problem

Often in grant reviews I watch a very junior reviewer criticize work as too risky, dooming the proposal, even as I can sense their admiration for its promise and transformative potential. The risk aversion of modern review panels is rooted in a scientific culture that values incremental advances over speculative leaps, a bias reinforced by careers in which funding decisions can make or break a professional trajectory. Reviewers hesitate to back proposals like Einstein’s because they invite controversy and carry a high risk of failure, a reflection of how science within academic institutions has traditionally advanced through evolution rather than revolution.

The Credentials Catch-22

Securing funding today typically requires an institutional affiliation and a publication record that signals strong research credentials: a catch-22 in which groundbreaking innovators without formal backing or prior track records struggle to earn reviewers’ trust. This requirement screens out fresh perspectives from people like Einstein, who worked outside established institutions and lacked the mentorship usually deemed necessary for academic recognition, even though outsider thinkers with unconventional backgrounds have repeatedly transformed science.

The Short-Term Timeline Problem

Einstein developed special relativity over years with no milestones, no quarterly reports, no renewals. How would he answer, “What will you accomplish in Year 2?” The funding cycles of the major agencies, such as NSF’s typical three to five years for standard grants and NIH’s maximum of five years, do not accommodate the long gestation that foundational theories require. Such timelines impose an artificial constraint on researchers like Einstein, whose transformative ideas did not advance against strict milestones but unfolded without a schedule, exposing how poorly this model fits revolutionary discoveries, for which a linear progression is unrealistic and even counterproductive.

The Impact Statement Trap

Requirements to demonstrate immediate “broader impacts” or societal benefits pose significant obstacles to transformative proposals whose implications reach far beyond any direct application, as Einstein’s foundational work in physics illustrates. The trap springs when reviewers, wary of speculative science or simply unable to foresee its future benefits, force proposals into a mold in which near-term practical impact takes precedence over visionary contributions, further marginalizing the transformative studies that could open entirely new fields.

The Interdisciplinary Gap

Current funding schemes are organized by discipline, yet revolutionary proposals like Einstein’s are inherently interdisciplinary, merging concepts across fields and transcending conventional academic boundaries. Such work gets excluded not only for lack of institutional affiliation but also because it challenges compartmentalized funding models that cannot handle its non-linear, cross-disciplinary character, a significant obstacle for proposals that simply do not fit neatly within any program’s structure or reviewers’ expertise.

The hypothetical funding scenarios for transformational science, as presented through the lens of Albert Einstein’s groundbreaking work, illustrate the inherent challenges faced by revolutionary ideas. To further highlight this problem, let’s take a look at other seminal discoveries that may have been overlooked or deemed unworthy of support under current grant review criteria:

Copernicus’ Heliocentric Model: In a contemporary setting, Copernicus’ heliocentric model might face skepticism because it challenges the widely accepted geocentric view of the universe. Lacking preliminary data and facing resistance from established religious beliefs, his proposal would likely be rejected under modern grant review criteria, despite its ultimate validation through observation and mathematics.

Gregor Mendel’s Pea Plant Experiments: Mendel’s pea plant experiments laid the foundation of modern genetics, yet his work went largely unnoticed for decades after its initial publication. A grant reviewer in the 1860s would likely have dismissed Mendel’s findings as too speculative and without immediate practical application, overlooking the fundamental insights he provided about heredity and genetic inheritance.

mRNA Vaccines: Katalin Karikó spent decades struggling to fund mRNA therapeutic research. Too risky. Too speculative. No clear applications. Penn demoted her. NIH rejected her grants. Reviewers wanted proof that mRNA could work as a therapeutic platform, but without funding, she couldn’t generate that proof. Then COVID-19 hit, and mRNA vaccines saved millions of lives. The technology that couldn’t get funded became one of the most important medical breakthroughs of the century.

Why does all of this matter now? First, the evidence is mounting that American science is at an inflection point. The rate of truly disruptive discoveries—those that reshape fields rather than incrementally advance them—has been declining for decades, even as scientific output has grown. Both NSF and NIH leadership recognize this troubling trend.

This innovation crisis manifests in the problems we cannot solve. Cancer and Alzheimer’s have resisted decades of intensive research. AI alignment and safety remain fundamentally unsolved as we deploy increasingly powerful systems. We haven’t returned to the moon in over 50 years. In my own field of neuroscience, incremental progress has failed to produce treatments for the diseases that devastate millions of families.

These failures point to a deeper problem: we’ve optimized our funding system for incremental advances, not transformational breakthroughs. Making matters worse, we’re losing ground internationally. China’s funding models allow longer timelines and embrace higher risk. European ERC grants support more adventurous research. Many of our best researchers now weigh opportunities overseas or in industry, where they can pursue riskier ideas with greater freedom.

What Needs to Change

Fixing this requires fundamental changes at multiple levels—from how we structure programs to how we evaluate proposals to how we support unconventional researchers.

Create separate funding streams for high-risk research. NSF and NIH need more programs that emulate DARPA’s high-risk, high-reward model. These programs should be insulated from traditional grant review: no preliminary data required, longer timelines (10+ years), and peer review conducted by scientists who have themselves taken major risks and succeeded. I propose that 10 percent of each agency’s budget be set aside for “Einstein Grants”: awards that deliberately run counter to the status quo. Judge proposals on originality and potential impact, not feasibility and preliminary data. Accept that most will fail, but the few that succeed will be transformational.

Protect exploratory research within traditional programs. Even standard grant programs should allow pivots when researchers discover unexpected directions. We should fund people with track records of insight, not just projects with detailed timelines. Judge proposals on the quality of thinking, not the completeness of deliverables.

Reform peer review processes. The current system needs three critical changes. First, separate review tracks for incremental versus transformational proposals—they require fundamentally different evaluation criteria. Second, don’t let a single negative review kill bold ideas; if three reviewers are enthusiastic and one is skeptical, fund it. Third, value originality over feasibility. The most transformational ideas often sound impossible until someone proves otherwise.

Support alternative career paths. We should fund more researchers outside traditional academic institutions and recognize that the best science doesn’t always emerge from R1 universities. Explicitly value interdisciplinary training and create flexible career paths that don’t punish researchers who take time to develop unconventional ideas. Track where our most creative researchers go when they leave academia—if we’re consistently losing them to industry or foreign institutions, that’s a failure signal we must heed.

Acknowledge the challenge ahead. These reforms require sustained political will across multiple administrations and consistent support from Congress. They demand patience—accepting that transformational breakthroughs can’t be scheduled or guaranteed. But the alternative is clear: we continue optimizing for incremental progress while the fundamental problems remain unsolved and our international competitors embrace the risk we’ve abandoned.

The choice before us is stark. We can optimize the current system for productivity—incremental papers, measurable progress—or we can create space for transformative discovery. We cannot have both with the same funding mechanisms.

The cost of inaction is clear: we will miss the next Einstein, fall further behind in fundamental discovery, watch science become a bureaucratic exercise, and lose what made American science into a powerhouse of discovery.

This requires action at every level. Scientists must advocate for reform and be willing to champion risky proposals. Program officers must have the courage to fund work that reviewers call too speculative. Policymakers must create new funding models and resist the temptation to demand near-term results. The public must understand that breakthrough science looks different from incremental progress—it’s messy, unpredictable, and often wrong before it’s right.

In 1905, Einstein changed our understanding of the universe while working in a patent office with no grant funding. Today, our funding system would never have let him try. We need to fix that.

Nothing but tundra: How NEON almost died…

It was 10:30 PM and bright daylight under a blue sky when my helicopter lifted away from Toolik Biological Station, just south of the Brooks Range in northern Alaska. The permafrost tundra was lime green below us; it was the middle of June, and this was the short window for photosynthesis. I had come from NSF headquarters in Arlington, Virginia, because I didn’t believe what I’d been told: that the NEON site here was nearly complete.

The reports said construction was well underway. According to the updates crossing my desk at the National Science Foundation, work at this particular remote monitoring station was progressing on schedule—tower foundation laid, equipment deliveries confirmed, site preparation complete. The National Ecological Observatory Network was building a continental-scale observatory, eighty field sites across America measuring everything from soil microbes to atmospheric carbon, from Alaska to Puerto Rico. Half a billion dollars. A thirty-year mission to understand how ecosystems were changing across an entire continent.

The helicopter banked north. We were close. I pressed against the window.

We were above the site’s GPS coordinates. There was nothing. Just tundra. I pressed the video function on my iPhone and collected the data.

No tower foundation. No equipment. No site preparation. The primeval landscape stretched unbroken to the horizon, exactly as it had for ten thousand years since the last ice age retreated. According to the paperwork, construction was well underway, and millions had been spent. According to reality, no one had broken ground.

That’s when I knew we would have to fire them and find another builder.

The problem started, as these things often do, with the best of intentions.

When the National Science Foundation conceived NEON in the early 2000s, the vision was breathtaking: a network of standardized ecological observatories spanning the entire territory of the United States, collecting identical measurements from tundra to tropics, mountains to prairies. For the first time, ecologists could compare apples to apples—soil microbes in Kansas versus Massachusetts, carbon flux in Alaska versus Alabama, all measured with the same instruments, the same protocols, for thirty years.

It was exactly the kind of transformational infrastructure science needed. And so NSF did what seemed entirely logical: we created a nonprofit corporation, NEON Inc., and populated its board with distinguished ecologists who understood the science.

They were brilliant scientists. They understood ecosystems, biogeochemistry, microbial ecology. They had collectively published thousands of papers, trained generations of graduate students, and spent careers asking profound questions about how nature works.

What they didn’t understand was how to manage a half-billion-dollar construction project.

By the time I arrived at NSF as Assistant Director for Biological Sciences in 2014, NEON was hemorrhaging money. The initial budget had been ambitious but defensible. Now we were looking at an $80 million cost overrun with no clear path to completion. Construction timelines had slipped repeatedly. Some sites that should have been operational were years behind schedule. And as I’d just discovered in Alaska, some sites that were reported as “progressing” didn’t exist at all.

The reporting problem was symptomatic of a deeper issue. NEON Inc.’s board met quarterly, reviewed progress reports, asked questions—but they were asking the wrong questions. They scrutinized scientific protocols: Were the soil samples being collected at the correct depth? Was the CO2 sensor calibration adequate? These were important questions, but they weren’t the questions that would determine whether NEON actually got built.

They should have been asking: Why is the tower foundation delayed by six months? What’s the critical path dependency? Where are the project management controls? Who’s accountable when a milestone slips?

These weren’t their questions because these weren’t their skills. You don’t learn construction project management by studying forest ecology. You don’t learn procurement logistics by measuring carbon flux. The board was doing exactly what they’d been trained to do—think like scientists. The problem was, NEON needed someone thinking like a construction manager.

When I got back to D.C., that video became the smoking gun, and we began to clean house. Not long afterwards, I walked out to the mall parking lot behind our building and dialed the NEON board chair to deliver the news: you’re fired.

The board didn’t go quietly; they felt betrayed. The board chair had himself held a leadership role at NSF. The other members were respected ecologists who had viewed me as an academic colleague on rotation to the agency’s leadership. But the math and the state of play on the ground were unforgiving: at this rate, the project was going to crash.

At the same time, our overseers, including the US National Science Board, Congress, and the Inspector General’s office, all saw disaster written all over that video. The optics for the agency were appallingly bad.

The decision was easy, but institutionally it was tricky. For one thing, we were years into construction, and there was a very real risk that cancellation would be forced upon us notwithstanding the sunk costs. I still believed deeply in the project’s transformational potential. Another problem was finding a new organization to finish construction; there was little precedent for that, not only at NSF but across the government. We had one advantage: the funding vehicle was something called a ‘cooperative agreement’ rather than a contract. Still, once we changed horses we would be going full tilt; this had to be done at speed.

Congress helped. During my testimony to the House Science Committee, I was pushed hard on whether I was willing to fire NEON Inc. After some hemming and hawing on the hot seat, I agreed: yes, that might be the right course. It was all made possible by our general counsel, who found the right emergency mechanism to both gauge interest and award the partial cooperative agreement to Battelle.

Battelle brought decades of experience building big projects for the US government at the scale that NEON required. They approached the rescue with a ‘can do’ attitude, no doubt incentivized by the opportunity to compete for an operations and maintenance award upon completion of construction.

On the in-house side, we changed things just as dramatically. Project management moved to our Division of Biological Infrastructure, where we had the human capital to work productively on the rescue. The new team included some of the most talented project managers I’ve ever worked with.

Today, NEON is fully operational. Commissioned in 2019, it produces open data, nearly 200 data products forming a multidimensional time series of how the United States’ biosphere is changing. Its director and chief scientist is a scientific visionary with her own substantial track record of discovery. Battelle got the job done, and the rescue was a success.

But NEON was not unique. We constantly confuse domain expertise with managerial expertise. The job of an R1 university president has grown too far removed from the traditional academic pipeline that once supplied the farm team for those roles. Similarly, physician leaders responding to public health emergencies must also bring a sophisticated understanding of political nuance and public communication; if they don’t, they risk both public trust and public welfare.

What was the lesson of NEON? Honestly, the hard truth: sometimes firing the leadership is the most respectful option.

What Grant Reviewers Actually Look For (and What They Ignore)

A close colleague of mine at a major US research university begins preparing a grant proposal by creating something he calls a “storyboard.” Growing up in LA, I was very familiar with the concept; many of my high school friends aspired to careers in the locally dominant entertainment industry. The storyboard, pioneered at Walt Disney’s studio, uses pictures to visualize a movie’s plot before production begins, often even before a screenplay is complete. In the LA movie business, you could look at a storyboard and pretty much get, right away, what a movie was about.

Back to my colleague who storyboards his grant proposals: his key idea is that the storyboard is done when someone outside the group can come in, look at it, and come away with a good understanding of what the grant is all about. If the storyboard is coherent, it’s easy to make the proposal coherent as well. And the storyboard often gets reused, in modified form, as the grant’s central graphic. Yes, a picture is worth several thousand words.

My colleague is onto something profound about how grant review works across all funders, including those in the private sector. For this issue of Science Policy Insider, though, we’ll consider the agency where I headed up Biological Sciences: the NSF. What about NIH, you may ask? Many of the principles here apply to both agencies, but we’re going to focus, laser-like, on the National Science Foundation, even as it undergoes drastic changes.

The Brutal Reality of NSF Panel Review

After sitting through too many grant panels at NSF, I can tell you this: most proposals get 15-20 minutes of discussion time in a panel that’s reviewing 30-50 proposals over three days. Your carefully crafted 15-page research plan? The primary reviewer read it thoroughly. The other two panelists skimmed it. Everyone else glanced at the summary.

This isn’t because reviewers are lazy. They’re exhausted, brilliant researchers who read proposals outside their immediate expertise, often late at night, while also worrying about their own grants, their trainees, and the manuscript reviews they owe.

The storyboard approach works because it acknowledges this reality: reviewers are looking for a straightforward narrative they can grasp quickly and defend to the panel.

What Actually Happens in Review Panels

Here’s how it typically unfolds:

9:00 AM, Day Two of the panel: The primary reviewer presents your proposal. They have five minutes to summarize your aims, your approach, and why the work matters. If they struggle to articulate your story coherently, you’re in trouble, not because your proposed science is bad, but because they can’t effectively advocate for you.

The secondary and tertiary reviewers add their perspectives. Then the panel discusses. The program officers watch for enthusiasm, coherence of the argument, and whether anyone is deeply opposed.

The proposals that succeed have champions: reviewers who “get it” immediately and can explain to others why the work matters. The storyboard method is built to make a proposal easy to champion.

What Reviewers Actually Look For

After watching this process play out thousands of times, here’s what I learned reviewers truly care about:

1. Can I explain this to the panel in 3 minutes?

If your research plan requires a flowchart to understand, the primary reviewer will simplify it—possibly incorrectly. Better to give them the simplified version yourself.

2. Is the question worth answering?

Not “is this interesting?” but “will anyone care about the answer?” Reviewers need to justify spending taxpayer money. Give them that justification explicitly.

3. Can this person actually do this?

No matter what is written down in the solicitation, preliminary data matters enormously, but not for the reason applicants think. It’s not about proving the hypothesis—it’s about proving you have the technical capability and haven’t missed an obvious problem.

4. Is this the right approach?

Reviewers are surprisingly forgiving about whether your specific hypothesis is correct. They’re much less forgiving about whether you’re using appropriate methods or have thought through alternatives.

5. Will this move the field forward?

Notice: not “revolutionize” or “transform,” just move forward. Incremental progress from a well-designed study beats a transformative idea with unclear methods. But doesn’t the solicitation say the proposed work should change the world? Sure, but from a practical standpoint, what counts for the reviewers is steady progress. And here’s the tricky part: while steady progress is key for the reviewers, transformative potential really does matter to the program officers, who make the penultimate decision. So a balance is necessary.

What Reviewers Ignore (Even Though You Spent Weeks on It)

The extensive literature review: They skim it to see if you know the field. The 47 citations demonstrating your comprehensive knowledge? They checked that you cited the key papers and moved on.

Your detailed budget justification: Unless something looks wildly off, reviewers assume you know what your research costs. The line-by-line explanation of why you need that particular microscope? Skimmed.

Your publication list: They look at: Do you publish in good journals? Are you productive? Have you published on this topic before? That’s it. The distinction between your 47th and 52nd paper doesn’t matter.

The broader impacts section that you agonized over: I feel guilty about this because I’ve often harped on broader impacts as a central criterion. Truth: most reviewers read it quickly to verify you addressed it competently. Unless it’s either exceptional or terrible, it rarely drives funding decisions. And these days, broader impacts means how the work will benefit all American citizens (think public health) or US national security.

The Elements That Actually Drive Decisions

Clarity of the research goals: Can the reviewer recite your three main questions without looking at the proposal? If not, rewrite.

Logical flow: Does each aim build on the previous one? Or are they three unrelated projects stapled together? Reviewers can tell.

Feasibility signals: Preliminary data, established collaborations, access to necessary resources, realistic timeline. These say, “this person will actually complete this work.”

Positioning: Is this filling a real gap, or are you slightly tweaking someone else’s approach? Reviewers want to fund work that moves us somewhere new, even if incrementally.

The writing quality: Clear, direct prose suggests clear thinking. Dense, jargon-heavy writing suggests unclear thinking (even if that’s unfair).

The Most Common Mistake

Applicants try to impress reviewers with complexity and comprehensiveness. They want to show they’ve thought of everything, considered every alternative, read every paper.

But reviewers are looking for clarity and confidence. They want to understand quickly what you’re proposing and why it matters. They want to feel confident you’ll succeed.

The storyboard method works because it forces simplicity. If you can’t draw a simple picture of your proposal that an outsider immediately understands, you don’t have a fundable story yet.

But Wait, There’s More

As hinted above, at NSF that panel review is strictly advisory. I’ve personally seen proposals with excellent reviews get declined, and the reverse. The key decision-maker? The cognizant program officer for the solicitation. These days there’s an additional vetting step to check for alignment with the Administration’s political goals, but that’s a topic for a future newsletter.

What This Means for Your Proposal

Before you write a single word:

  • Can you explain your project in three sentences?
  • Can someone outside your subfield understand why it matters?
  • Do you have a clear narrative arc from question to approach to impact?

If not, you’re not ready to write. You’re ready to storyboard.

Build the simple, clear story first. Then elaborate carefully, making sure every detail serves that core narrative.

Reviewers are smart, busy people trying to identify good science under time pressure. Don’t make them work to understand your brilliance. Give them a story they can grasp, defend, and champion.

That’s what my colleague understood. And based on his funding success rate, the reviewers appreciate it.

From Andy Kessler at WSJ: Beyond the R1 University

My undergraduate class has been considering this problem as part of their midterm paper. My reading for some time has been that the authors of Project 2025 are aware of Science: The Endless Frontier and reject its usefulness for today. Kessler’s op-ed in today’s WSJ opens a window into an alternative: a scaled-up version of what existed before the Second World War in institutions such as Bell Labs and the Carnegie Institution of Washington.