Why I’m Taking Science Policy Insider International

A View from Abroad

Mid-competition week for a panel reviewing proposals on genes and cells: the fifteen-minute clock starts, and the five of us assigned to this proposal dive in. We consider factors such as whether the proposer is early in their career and how the COVID pandemic might have affected their laboratory’s productivity. We carefully assess their approach to mentoring trainees, including their track record and future plans. We evaluate the proposer’s excellence not by raw bibliometric measures such as the h-index, but by substantive contributions to the field. And we take a very close look at the proposal itself—not only in terms of intellectual merit, but also to make sure that it is distinct from the investigator’s other supported science. Is this an NIH study section? Nope. Is this an NSF panel? Again, no. This is a peer review for another G7 nation, to be unnamed in this post.

What struck me wasn’t that this country did peer review differently than NSF or NIH. What struck me was how similar it was. Same careful attention to mentoring. Same suspicion of bibliometrics. Same concern about overlaps with existing funding. I could have been in any panel room I’d sat in over three decades in Washington. And that’s when it hit me: among the wealthy nations that fund science, we’re all running variations on the same basic system. We argue about details – overhead rates, review criteria, funding durations – but we share fundamental assumptions about how science should work.

Or so I thought. Until I stepped outside the world of science funding and began looking at how other countries organize technical knowledge. My second book project examines how Boeing, Airbus, and Embraer design commercial aircraft – and that research has revealed something I’d missed in all my years in government and academia.

Civic Epistemologies

The scholar Sheila Jasanoff has a concept called ‘civic epistemologies’ – the idea that different societies have fundamentally different ways of producing and validating knowledge. It’s not about organizational charts or funding mechanisms. It’s deeper than that. It’s about cultural assumptions: What questions are worth asking? What counts as evidence? Who gets to decide? How do we measure success?

When Americans design an airplane, we assume that technical decisions should be made by engineers based on data, with regulators checking compliance after the fact. Europeans embed social and labor concerns directly into the design process – workers’ councils have a say in production methods, and safety regulators are involved earlier. Brazilians organize around different assumptions entirely, shaped by their position as a developing economy entering a market dominated by established players.

Same engineering principles. Same physics. The same goal of building a safe, efficient aircraft. But fundamentally different answers to the question: Who should decide how this gets done?

I saw the same pattern as a working neuroscientist. American neuroscience tends to bet on fundamental discovery—map the circuits, understand the mechanisms, and trust that applications will follow. Recording sea slug neurons during my training embodied this approach: study simpler systems, find conserved principles, apply them to humans. Europeans start closer to the clinic, organizing major research programs around disease categories and patient needs. Japanese neuroscience builds unusually tight links between academic labs and industry, with electronics and engineering companies actively embedded in research networks and clear paths toward commercialization. Same neurons, same biology, but different assumptions about how knowledge should flow from laboratory to society.

My new book project

So, where is this taking me? The short answer: I’m working on a new book about how American, European, and Brazilian cultures (think Boeing, Airbus, and Embraer) shape commercial aviation technology. Why planes? In my lifetime, I experienced the jet revolution firsthand: I started on the Comet, moved on to Pan Am’s 707s, and these days still enjoy the grandeur of the big twin-aisle giants that connect us across oceans.

In the new book, I’m interested in comparing technical cultures through the lens of those jets (as technical artifacts). But beyond my lifetime fascination with aviation, the same questions apply to science policy itself: why do different countries organize technological knowledge differently? What can we learn from how other G7 nations fund science? And what cultural assumptions shape what gets built (airplanes OR research programs)?

Science Policy Insider Expands Its Scope

This brings me back to Science Policy Insider and where we’re headed. We are broadening our remit to include comparative analysis of research funding systems—both public agencies and private industry—drawing on insights from my aviation research. We’ll examine how different countries handle current challenges: AI governance, climate research, and research security.

On the practical side, we’ll provide insights for American researchers who work internationally or plan to—from navigating different grant systems to understanding why collaborations succeed or fail across cultural boundaries. And above all, we’ll consider what viewing American science policy from the outside reveals about our own system.

We’ll maintain our bi-weekly publishing schedule.

Science Policy Insider started with my promise to explain how American science policy really works from someone who was inside the system. Now we’re also going to explore what it looks like from the outside and what that perspective reveals about our own system.

I continue to invite readers’ questions, now not only about how things work in our own American discovery machine, but also about international science policy.

Why Transformational Science Can’t Get Funded: The Einstein Problem

Proposal declined. Insufficient institutional support. No preliminary data. Applicant lacks relevant expertise—they work in a patent office, not a research laboratory. The proposed research is too speculative and challenges well-established physical laws without adequate justification. The principal investigator is 26 years old and has no prior experience in physics.

This would have been the fate of Albert Einstein in 1905, had the NSF existed as it does today. Even with grant calls requesting ‘transformative ideas,’ an Einstein proposal would have been rejected outright. And yet 1905 has been called Einstein’s miracle year. Yes, he was a patent clerk working in Bern, Switzerland, without a university affiliation. He had neither access to a laboratory nor equipment. He worked in isolation on evenings and weekends and was unknown in the physics community. Yet, despite those disadvantages, he produced four revolutionary papers: on the photoelectric effect, Brownian motion, special relativity, and the famous E = mc² mass–energy equivalence.

Taken as a whole, the work was purely theoretical. There were no preliminary data. The papers challenged fundamental assumptions of the field and, as such, were highly speculative and definitively high-risk. There were no broader impacts because there were no immediate practical applications. And the work was inherently multidisciplinary, bridging mechanics, optics, and thermodynamics. Yet, the work was transformative. By modern grant standards, Einstein’s work failed every criterion.

The Modern Grant Application – A Thought Experiment

Let’s imagine Einstein’s 1905 work packaged as a current NSF proposal. What would it look like, and how would it fare in peer review?

Einstein’s Hypothetical NSF Proposal

Project Title: Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light

Principal Investigator: Albert Einstein, Technical Expert Third Class, Swiss Federal Patent Office

Institution: None (individual applicant)

Requested Duration: 3 years

Budget: $150,000 (minimal – just salary support and travel to one conference)

Project Summary

This proposal challenges the fundamental assumptions underlying Newtonian mechanics and Maxwell’s electromagnetic theory. I propose that space and time are not absolute but relative, dependent on the observer’s state of motion. This requires abandoning the concept of the luminiferous ether and reconceptualizing the relationship between matter and energy. The work will be entirely theoretical, relying on thought experiments and mathematical derivation to establish a new framework for understanding physical reality.

How NSF Review Panels Would Evaluate This

Intellectual Merit: Poor

Criterion: Does the proposed activity advance knowledge and understanding?

Panel Assessment: The proposal makes extraordinary claims without adequate preliminary data. The applicant asserts that Newtonian mechanics—the foundation of physics for over 200 years—requires fundamental revision yet provides no experimental evidence supporting this radical departure.

Specific Concerns:

Lack of Preliminary Results: The proposal contains no preliminary data demonstrating the feasibility of the approach. There are no prior publications by the applicant in peer-reviewed physics journals. The applicant references his own unpublished manuscripts, which cannot be evaluated.

Methodology Insufficient: The proposed “thought experiments” do not constitute rigorous scientific methodology. How will hypotheses be tested? What experimental validation is planned? The proposal describes mathematical derivations but provides no pathway to empirical verification. Without experimental confirmation, these remain untestable speculations.

Contradicts Established Science: The proposal challenges Newton’s laws of motion and the existence of the luminiferous ether—concepts supported by centuries of successful physics. While scientific progress requires questioning assumptions, such fundamental challenges require extraordinary evidence. The applicant provides none.

Lack of Expertise: The PI works at a patent office and has no formal research position. He has no advisor supporting this work, no collaborators at research institutions, and no track record in theoretical physics. His biosketch lists a doctorate from the University of Zurich but no subsequent research appointments or publications in relevant areas.

Representative Reviewer Comments:

Reviewer 1: “While the mathematical treatment shows some sophistication, the fundamental premise—that simultaneity is relative—contradicts basic physical intuition and has no experimental support. The proposal reads more like philosophy than physics.”

Reviewer 2: “The applicant’s treatment of the photoelectric effect proposes that light behaves as discrete particles, directly contradicting Maxwell’s well-established wave theory. This is not innovation; it’s contradiction without justification.”

Reviewer 3: “I appreciate the applicant’s ambition, but this proposal is not ready for funding. I recommend the PI establish himself at a research institution, publish preliminary findings, and gather experimental evidence before requesting support for such speculative work. Perhaps a collaboration with experimentalists at a major university would strengthen future submissions.”

Broader Impacts: Very Poor

Criterion: Does the proposed activity benefit society and achieve specific societal outcomes?

Panel Assessment: The proposal fails to articulate any concrete broader impacts. The work is purely theoretical with no clear pathway to societal benefit.

Specific Concerns:

No Clear Applications: The proposal does not explain how reconceptualizing space and time would benefit society. What problems would this solve? What technologies would it enable? The PI suggests the work is “fundamental” but provides no examples of potential applications.

No Educational Component: There is no plan for training students or postdocs. The PI works alone at a patent office, with no access to students and no institutional infrastructure for education and training.

No Outreach Plan: The proposal includes no activities to communicate findings to the public or policymakers. There is no plan for broader dissemination beyond potential publication in physics journals.

Questionable Impact Timeline: Even if the proposed theories are correct, the proposal provides no timeline for practical applications. How long until these ideas translate into societal benefit? The proposal is silent on this critical question.

Representative Reviewer Comments:

Reviewer 1: “The broader impacts section is essentially non-existent. The PI states that ‘fundamental understanding of nature has intrinsic value,’ but this does not meet NSF’s requirement for concrete societal outcomes.”

Reviewer 2: “I cannot envision how this work, even if successful, would lead to practical applications within a reasonable timeframe. The proposal needs to articulate a clear pathway from theory to impact.”

Reviewer 3: “NSF has limited resources and must prioritize research with demonstrable benefits to society. This proposal does not make that case.”

Panel Summary and Recommendation

Intellectual Merit Rating: Poor
Broader Impacts Rating: Very Poor

Overall Assessment: While the panel appreciates the PI’s creativity and mathematical ability, the proposal is highly speculative, lacks preliminary data, contradicts established physical laws without sufficient justification, and fails to articulate broader impacts. The PI’s lack of institutional affiliation and research track record raises concerns about feasibility.

The panel notes that the PI appears talented and encourages resubmission after:

  1. Establishing an independent position at a research institution
  2. Publishing preliminary findings in peer-reviewed journals
  3. Developing collaborations with experimental physicists
  4. Articulating a clearer pathway to practical applications
  5. Demonstrating broader impacts through education and outreach

Recommendation: Decline

Panel Consensus: Not competitive for funding in the current cycle. The proposal would need substantial revision and preliminary results before it could be considered favorably.

The Summary Statement Einstein Would Receive

Dear Dr. Einstein,

Thank you for your submission to the National Science Foundation. Unfortunately, your proposal, “Reconceptualizing the Fundamental Nature of Space, Time, and the Propagation of Light,” was not recommended for funding.

The panel recognized your ambition and mathematical capabilities but identified several concerns that prevented a favorable recommendation:

– Lack of preliminary data supporting the feasibility of your approach
– Insufficient experimental validation of your theoretical claims
– Absence of institutional support and research infrastructure
– Inadequate articulation of broader impacts and societal benefits

We encourage you to address these concerns and consider resubmission in a future cycle. You may wish to establish collaborations with experimentalists and develop a clearer pathway from theory to application.

We appreciate your interest in NSF funding and wish you success in your future endeavors.

Sincerely,
NSF Program Officer

And that would be it. Einstein’s miracle year—four papers that transformed physics and laid the groundwork for quantum mechanics, nuclear energy, GPS satellites, and our modern understanding of the cosmos—would have died in peer review, never funded, never attempted.

The system would have protected us from wasting taxpayer dollars on such speculation. It would have worked exactly as designed.

The Preliminary Data Paradox

The contemporary grant review process expects even foundational, transformative work to arrive with preliminary data, despite ample evidence that truly groundbreaking ideas often do not originate from tangible results at all but evolve through thought experiments and mathematical derivation, as Einstein’s did. This unrealistic expectation stifles innovation at its core: it pushes researchers away from pure theoretical exploration and confines them to a narrow experimental framework, where they cannot freely challenge existing paradigms even when work that lacks immediate empirical validation promises to fundamentally revolutionize our understanding.

The Risk-Aversion Problem

Often in grant reviews I watch a junior reviewer criticize work as too risky, dooming the proposal, even while their admiration for its promise and transformative potential is plain. The conservatism and risk aversion of modern review panels are deeply rooted in a scientific culture that values incremental advances over speculative leaps, a bias born of career incentives: funding decisions can make or break a professional trajectory. Reviewers hesitate to back proposals like Einstein’s because such work invites controversy, may not align with their own research interests, and carries a real risk of failure, a reflection of how science within academic institutions has traditionally advanced through evolution rather than revolution.

The Credentials Catch-22

To secure funding in today’s scientific landscape, one typically needs an institutional affiliation and an impressive publication record, a catch-22 in which groundbreaking innovators without formal backing or prior experience struggle to earn reviewers’ trust. The requirement discriminates against fresh perspectives from people like Einstein, who worked outside established institutions and lacked the mentorship usually deemed necessary for academic recognition, and it sits at odds with how transformative outsiders with unconventional backgrounds have historically nurtured science.

The Short-Term Timeline Problem

Einstein developed special relativity over years with no milestones, no quarterly reports, no renewals. How would he answer ‘What will you accomplish in Year 2?’ The funding durations set by major agencies, typically three to five years for regular NSF grants and a maximum of five years at NIH, do not accommodate the long gestation that foundational theories require. Such timelines impose an unfair constraint on researchers like Einstein, whose transformative ideas did not march through milestones but unfolded in an unconstrained fashion, exposing the incompatibility of this model with truly revolutionary discoveries, for which a linear progression is unrealistic and even counterproductive.

The Impact Statement Trap

Requirements to demonstrate immediate “broader impacts” or societal benefits pose a significant obstacle for transformative proposals, whose implications often reach far beyond any direct application; Einstein’s work, foundational to so much of modern physics, exemplifies this best. The trap springs when reviewers, unable to perceive future benefits or wary of speculative science, force proposals into a mold where immediate practical impact takes precedence over visionary contribution, further marginalizing the studies most likely to open new dimensions across fields.

The Interdisciplinary Gap

Current funding schemes are organized by discipline, while revolutionary proposals like Einstein’s frequently transcend conventional academic boundaries by merging concepts across multiple fields. Such work can be excluded not only for lack of institutional affiliation but also because it challenges compartmentalized funding models; a proposal that is inherently interdisciplinary, yet unable to fit neatly within any single program’s structure or reviewers’ expertise, faces a significant structural obstacle.

The hypothetical funding scenarios for transformational science, as presented through the lens of Albert Einstein’s groundbreaking work, illustrate the inherent challenges faced by revolutionary ideas. To further highlight this problem, let’s take a look at other seminal discoveries that may have been overlooked or deemed unworthy of support under current grant review criteria:

Copernicus’ Heliocentric Model: In a contemporary setting, Copernicus’ heliocentric model might face skepticism for challenging the widely accepted geocentric view of the universe. Lacking preliminary data and contradicting the entrenched consensus of his day, his proposal would likely be rejected under modern grant review criteria, despite its ultimate validation through observation and mathematical proof.

Gregor Mendel’s Pea Plant Experiments: The foundation of modern genetics was laid by Mendel’s pea plant experiments, yet his work remained largely unnoticed for decades after its initial publication. A grant reviewer in 1863 would likely have dismissed Mendel’s findings as too speculative and without immediate practical applications, thereby overlooking the fundamental insights he provided about heredity and genetic inheritance.

mRNA Vaccines: Katalin Karikó spent decades struggling to fund mRNA therapeutic research. Too risky. Too speculative. No clear applications. Penn demoted her. NIH rejected her grants. Reviewers wanted proof that mRNA could work as a therapeutic platform, but without funding, she couldn’t generate that proof. Then COVID-19 hit, and mRNA vaccines saved millions of lives. The technology that couldn’t get funded became one of the most important medical breakthroughs of the century.

Why does all of this matter now? The evidence is mounting that American science is at an inflection point. The rate of truly disruptive discoveries—those that reshape fields rather than incrementally advance them—has been declining for decades, even as scientific output has grown. Both NSF and NIH leadership recognize this troubling trend.

This innovation crisis manifests in the problems we cannot solve. Cancer and Alzheimer’s have resisted decades of intensive research. AI alignment and safety remain fundamentally unsolved as we deploy increasingly powerful systems. We haven’t returned to the moon in over 50 years. In my own field of neuroscience, incremental progress has failed to produce treatments for the diseases that devastate millions of families.

These failures point to a deeper problem: we’ve optimized our funding system for incremental advances, not transformational breakthroughs. Making matters worse, we’re losing ground internationally. China’s funding models allow longer timelines and embrace higher risk. European ERC grants support more adventurous research. Many of our best researchers now weigh opportunities overseas or in industry, where they can pursue riskier ideas with greater freedom.

What Needs to Change

Fixing this requires fundamental changes at multiple levels—from how we structure programs to how we evaluate proposals to how we support unconventional researchers.

Create separate funding streams for high-risk research. NSF and NIH need more programs that emulate DARPA’s high-risk, high-reward model. These programs should be insulated from traditional grant review: no preliminary data required, longer timelines (10+ years), and peer review conducted by scientists who have themselves taken major risks and succeeded. I propose that 10 percent of each agency’s budget be set aside for “Einstein Grants”—awards designed to bet against the status quo. Judge proposals on originality and potential impact, not feasibility and preliminary data. Accept that most will fail, but the few that succeed will be transformational.

Protect exploratory research within traditional programs. Even standard grant programs should allow pivots when researchers discover unexpected directions. We should fund people with track records of insight, not just projects with detailed timelines. Judge proposals on the quality of thinking, not the completeness of deliverables.

Reform peer review processes. The current system needs three critical changes. First, separate review tracks for incremental versus transformational proposals—they require fundamentally different evaluation criteria. Second, don’t let a single negative review kill bold ideas; if three reviewers are enthusiastic and one is skeptical, fund it. Third, value originality over feasibility. The most transformational ideas often sound impossible until someone proves otherwise.
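
The second change is easy to make concrete. Below is a minimal sketch, in Python, of how a veto-resistant funding rule could differ from simple score averaging; the 1–5 scale, the 4.0 threshold, and the three-enthusiastic-reviewers cutoff are illustrative assumptions, not any agency’s actual procedure.

```python
from statistics import mean

def mean_rule(scores, threshold=4.0):
    """Conventional aggregation: one low score drags the mean below the bar."""
    return mean(scores) >= threshold

def veto_resistant_rule(scores, enthusiastic=4, needed=3):
    """Fund if enough reviewers are enthusiastic, regardless of one dissent."""
    return sum(s >= enthusiastic for s in scores) >= needed

# A bold proposal: three strong advocates, one skeptic.
scores = [5, 5, 4, 1]
print(mean_rule(scores))            # False: the lone 1 sinks the mean to 3.75
print(veto_resistant_rule(scores))  # True: three enthusiastic reviews carry it
```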

Support alternative career paths. We should fund more researchers outside traditional academic institutions and recognize that the best science doesn’t always emerge from R1 universities. Explicitly value interdisciplinary training and create flexible career paths that don’t punish researchers who take time to develop unconventional ideas. Track where our most creative researchers go when they leave academia—if we’re consistently losing them to industry or foreign institutions, that’s a failure signal we must heed.

Acknowledge the challenge ahead. These reforms require sustained political will across multiple administrations and consistent support from Congress. They demand patience—accepting that transformational breakthroughs can’t be scheduled or guaranteed. But the alternative is clear: we continue optimizing for incremental progress while the fundamental problems remain unsolved and our international competitors embrace the risk we’ve abandoned.

The choice before us is stark. We can optimize the current system for productivity—incremental papers, measurable progress—or we can create space for transformative discovery. We cannot have both with the same funding mechanisms.

The cost of inaction is clear: we will miss the next Einstein, fall further behind in fundamental discovery, watch science become a bureaucratic exercise, and lose what made American science into a powerhouse of discovery.

This requires action at every level. Scientists must advocate for reform and be willing to champion risky proposals. Program officers must have the courage to fund work that reviewers call too speculative. Policymakers must create new funding models and resist the temptation to demand near-term results. The public must understand that breakthrough science looks different from incremental progress—it’s messy, unpredictable, and often wrong before it’s right.

In 1905, Einstein changed our understanding of the universe while working in a patent office with no grant funding. Today, our funding system would never have let him try. We need to fix that.

The Replication Crisis Is a Market Failure (And We Designed It That Way)

The replication crisis isn’t a mystery. After presiding over the review of thousands of grants at NSF’s Biological Sciences Directorate, I can tell you exactly why science struggles to reproduce its own findings: we built incentives that reward novelty and punish verification.

A 2016 Nature survey found that over 70% of scientists have failed to reproduce another researcher’s experiments. But this isn’t about sloppy science or bad actors. It’s straightforward economics.

The Researcher’s Optimization Problem

You have limited time and resources. You can either:

  1. Pursue novel findings → potential Nature paper, grant funding, tenure
  2. Replicate someone’s work → maybe a minor publication, minimal funding, colleagues questioning your creativity

The expected value calculation is obvious. Replication is a public good with privatized costs.
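
To see the economics in miniature, here is a toy expected-value calculation in Python. Every probability and payoff below is an invented illustration, not measured data; the point is that almost any plausible parameterization tips the same way.

```python
# Toy expected-value comparison for a year of research effort.
# All probabilities and payoffs are illustrative assumptions.

def expected_value(p_success, payoff_success, payoff_failure):
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Novel finding: a small chance at a high-prestige paper that unlocks grants and tenure.
novelty = expected_value(p_success=0.15, payoff_success=100, payoff_failure=5)

# Replication: near-certain completion, but modest career credit either way.
replication = expected_value(p_success=0.85, payoff_success=10, payoff_failure=2)

print(f"novelty: {novelty:.2f}, replication: {replication:.2f}")
# novelty: 19.25, replication: 8.80 -- the rational researcher chases novelty
```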

How NSF Review Panels Work

At NSF, I watched this play out in every review panel. Proposals to replicate existing work faced an uphill battle. Reviewers—themselves successful researchers who got there by publishing novel findings—naturally favor creative, untested ideas over verification work.

We tried various fixes. Some programs explicitly funded replication studies. Some review criteria emphasized robustness over novelty. But the core incentive remained: breakthrough science gets you the next grant; careful verification doesn’t.

The problem runs deeper than any single agency. Universities want prestigious publications. Journals want citations. Researchers want tenure. Nobody’s optimization function includes “produces reliable knowledge that someone else can build on.”

The Information Market Is Broken

Even when researchers try to replicate, they’re working with incomplete information. Methods sections in papers are sanitized versions of what actually happened in the lab. “Cells were cultured under standard conditions” means something different in every lab. One researcher’s gentle mixing is another’s vigorous shaking.

This information asymmetry makes replication attempts inherently inefficient. You’re trying to reproduce a result while missing critical details that the original researcher might not even realize mattered.

The Time Horizon Problem

NSF grants run 3-5 years. Tenure clocks run 6-7 years. But scientific truth emerges over decades. We’re optimizing for the wrong timescale.

During my time at NSF, I saw brilliant researchers make pragmatic choices: publish something surprising now (even if it might not hold up) rather than spend two years carefully verifying it. That’s not a moral failing—it’s responding rationally to the incentives we created.

What Would Actually Fix This

Make replication profitable:

  • Count verification studies equally with novel findings in grant review and tenure decisions
  • Fund researchers whose job is rigorous replication—make it a legitimate career path
  • Require data and detailed methods sharing as a funding condition, not an afterthought
  • Make failed replications as publishable as successful ones

The challenge isn’t technical. It’s institutional. We designed a market that overproduces flashy results and underproduces reliable knowledge. Until we fix the incentives, we’ll keep getting exactly what we’re paying for.

On Reproducibility: Physics versus Life Sciences

Scientific reproducibility—the ability of researchers to obtain consistent results when repeating an experiment—sits at the heart of the scientific method. During my years at the bench and later as the director of a research institute, it became clear that not all sciences struggle equally with this fundamental principle. Physics experiments tend to be more reproducible than those in life sciences, where researchers grapple with what many call a “reproducibility crisis.” Understanding why reveals something profound about the nature of these disciplines.

The State of Reproducibility Across Sciences

A 2016 Nature survey of over 1,500 researchers revealed the scope of the challenge: more than 70% of scientists have failed to reproduce another researcher’s experiments. The rates varied by field—87% of chemists, 77% of biologists, and 69% of physicists and engineers reported such failures. Notably, 52% of respondents agreed that a significant reproducibility crisis exists.

These numbers tell us something important: reproducibility challenges exist across all scientific disciplines, but they manifest with different severity. Physics hasn’t been immune to these issues, but it has been affected less severely than fields like psychology, clinical medicine, and biology. This isn’t a story of success versus failure—it’s a story of different sciences confronting different kinds of complexity.

The Physics Advantage

When a physicist measures the speed of light or the charge of an electron, they’re studying fundamental constants of nature. These values don’t change based on the lab, the researcher, or the day of the week. A particle accelerator in Geneva produces the same collision energies as one in Illinois. The laws governing pendulum motion work identically whether you’re in Cambridge or Kyoto.

This consistency extends beyond fundamental constants. Physics experiments typically involve controlled, isolated systems where researchers can eliminate or account for confounding variables. A physics experiment might study a single particle in a vacuum, far removed from the messy complexity of the real world. Precise measurement tools, refined over centuries, allow astonishing accuracy. NSF’s LIGO, for instance, can detect gravitational waves by measuring changes smaller than one ten-thousandth the width of a proton—equivalent to noticing a hair’s width change in the distance to the nearest star. The centuries of theoretical understanding that physics has developed make the field less susceptible to reproducibility failures.
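
The hair-and-star analogy survives a back-of-the-envelope check. The quick Python sketch below uses round textbook values (a proton charge radius of about 0.84 fm, a hair width of about 100 µm, Proxima Centauri at about 4.25 light-years), so treat the outputs as order-of-magnitude only.

```python
# Order-of-magnitude check on the LIGO analogy (all inputs approximate).
proton_radius = 0.84e-15                     # meters (proton charge radius)
ligo_displacement = proton_radius / 10_000   # "one ten-thousandth of a proton"
arm_length = 4e3                             # meters (a LIGO interferometer arm)

hair_width = 100e-6                          # meters (~0.1 mm human hair)
light_year = 9.46e15                         # meters
nearest_star = 4.25 * light_year             # Proxima Centauri, roughly

print(f"LIGO fractional change: {ligo_displacement / arm_length:.1e}")  # ~2e-23
print(f"hair / star distance:   {hair_width / nearest_star:.1e}")       # ~2e-21
# Both ratios are vanishingly small and within a couple of orders of
# magnitude of each other -- which is the point of the analogy.
```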

The Life Sciences Challenge

Life sciences researchers face a fundamentally different landscape. They’re not studying isolated particles obeying immutable laws; they’re investigating complex, adaptive systems shaped by evolution, environment, and chance.

Consider a seemingly simple experiment: testing how a drug affects cancer cells. Those cells aren’t uniform entities like electrons. Research has revealed extensive genetic variation across supposedly identical cancer cell lines. The same cell line obtained from different sources can show staggering differences—studies have found that at least 75% of compounds that strongly inhibit some strains of a cell line are completely inactive in others. Each cell line has accumulated unique mutations through genetic drift as they’re independently passaged in different laboratories.

The cells’ behavior changes based on how many times they’ve been cultured, what nutrients they receive, even the material of the culture dish. Research has documented profound variability even in highly standardized experiments, with factors like cell density, passage number, temperature, and medium composition all significantly affecting results. The researcher’s technique in handling the cells matters. Countless variables play roles that are difficult or impossible to fully control.

This complexity manifests in several ways:

Biological variability is the norm, not the exception. No two mice are identical, even if they’re genetically similar. Human patients are wildly variable. A treatment that works brilliantly for one person may fail completely for another with the “same” disease.

Emergent properties mean that biological systems exhibit behaviors that can’t be predicted simply by understanding their components. You can’t predict consciousness by studying individual neurons, just as you can’t predict ecosystem dynamics by studying single organisms.

Context dependence is paramount. A gene doesn’t have a single function—its effects depend on the organism, developmental stage, tissue type, and environmental conditions. The same protein can play entirely different roles in different contexts.

Reframing the “Crisis”

It’s worth questioning whether “crisis” is the right word for what’s happening in life sciences. Some researchers argue that the apparent reproducibility problem may be partly a statistical phenomenon. When fields explore bold, uncertain hypotheses—as life sciences often do—a certain rate of non-replication is expected and even healthy. A hypothesis that’s unlikely to be true a priori may still test positive, and subsequent studies revealing the truth represent science’s self-correcting mechanisms at work rather than a failure.
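
A quick calculation makes this statistical point concrete. Using standard positive-predictive-value arithmetic (the logic behind Ioannidis’s famous argument), even flawless methodology yields a sizable share of false positives when bold hypotheses have low prior odds; the 10% prior, 80% power, and 5% significance level below are illustrative assumptions.

```python
# Positive predictive value of a "significant" finding (illustrative numbers).
prior = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80   # probability a real effect is detected
alpha = 0.05   # false-positive rate when there is no effect

true_positives = power * prior
false_positives = alpha * (1 - prior)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.2f}")  # 0.64: roughly a third of positive results are
                           # false even with perfect execution, so some
                           # non-replication is baked in from the start
```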

The complexity of biological systems means that two experiments may differ in ways researchers don’t fully understand, leading to different results not because of poor methodology but because of hidden variables or context sensitivity. This doesn’t excuse sloppy work, but it does suggest we should expect life sciences to have inherently lower replication rates than physics due to the nature of what’s being studied.

The Methodological Gap

These fundamental differences create practical challenges. Physics papers often provide enough detail for precise replication: “We used a 532 nm laser with 10 mW power at normal incidence…” Life sciences papers might say “cells were cultured under standard conditions”—but what’s “standard” varies between labs. One lab’s “gentle mixing” is another’s vigorous shaking.

The statistical approaches differ too. Physics can often work with small sample sizes because measurement precision is high and variability is low. Life sciences need larger samples to overcome biological variability, yet often work with small sample sizes due to cost, time, or ethical constraints. This makes studies underpowered and results less reliable.
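
A standard power calculation shows how quickly variability inflates sample-size requirements. The sketch below uses the power module from statsmodels; the effect sizes (Cohen’s d) are illustrative stand-ins for “clean” versus “noisy” measurement regimes, not field-measured values.

```python
# Subjects needed per group for a two-sample t-test at 80% power, alpha = 0.05.
# Effect sizes (Cohen's d) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d, label in [(1.5, "low-noise, large effect (physics-like)"),
                 (0.5, "noisy, moderate effect (typical biology)"),
                 (0.2, "noisy, small effect (clinical-like)")]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: ~{n:.0f} per group ({label})")
# Roughly 8 per group at d = 1.5, ~64 at d = 0.5, ~394 at d = 0.2.
```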

Moving Forward

Recognition of reproducibility challenges has sparked essential reforms. Pre-registration of studies, open data sharing, more rigorous statistical practices, and standardized protocols all help. Some fields are developing reference cell lines and model organisms to reduce variability between labs. Journals are implementing checklists to ensure critical details are reported. These efforts are making a real difference.

Yet we must also accept that perfect reproducibility may be neither achievable nor always desirable in life sciences. Biological variability is a feature, not a bug—it’s the raw material of evolution and the reason life adapts to changing environments. The goal shouldn’t be to make biology as reproducible as physics, but to develop methods appropriate for studying complex, variable systems and to be transparent about the limitations and uncertainties inherent in this work.

Understanding the Divide

The reproducibility divide between physics and life sciences doesn’t reflect a failure in the life sciences. It reflects the reality that living systems are profoundly different from the physical systems that physicists study. Both approaches to science are valid and necessary; they’re simply tackling different kinds of problems with appropriately different tools.

Even physics, with all its advantages, sees nearly 70% of researchers unable to reproduce some experiments. The difference is one of degree, not kind. All science involves uncertainty, iteration, and gradual convergence on truth through many studies rather than single definitive experiments.

Understanding these differences helps us appreciate both the elegant precision of physics and the challenging complexity of life. And perhaps most importantly, it reminds us that the scientific method must be flexible enough to accommodate the full diversity of natural phenomena we seek to understand—from the fundamental particles that never change to the living systems that are constantly evolving.

The Unsung Hero: Why Exploratory Science Deserves Equal Billing with Hypothesis-Driven Research

For decades, the scientific method taught in classrooms has followed a neat, linear path: observe, hypothesize, test, conclude. This hypothesis-driven approach has become so deeply embedded in our understanding of “real science” that research proposals without clear hypotheses often struggle to secure funding. Yet some of the most transformative discoveries in history emerged not from testing predictions, but from simply looking carefully at what nature had to show us.

It’s time we recognize exploratory science—sometimes called discovery science or descriptive science—as equally valuable to its hypothesis-testing counterpart.

What Makes Exploratory Science Different?

Hypothesis-driven science starts with a specific question and a predicted answer. You think protein X causes disease Y, so you design experiments to prove or disprove that relationship. It’s focused, efficient, and satisfyingly definitive when it works.

Exploratory science takes a different approach. It asks “what’s out there?” rather than “is this specific thing true?” Researchers might sequence every gene in an organism, catalog every species in an ecosystem, or map every neuron in a brain region. They’re generating data and looking for patterns without knowing exactly what they’ll find.

The Case for Exploration

The history of science is filled with examples where exploration led to revolutionary breakthroughs. One of my lab chiefs at NIH was Craig Venter, famous for his exploratory project: sequencing the human genome. The Human Genome Project didn’t test a hypothesis—it mapped our entire genetic code, creating a foundation for countless subsequent discoveries. Darwin’s theory of evolution emerged from years of cataloging specimens and observing patterns, not from testing a pre-formed hypothesis. The periodic table organized elements based on exploratory observations before anyone understood atomic structure.

More recently, large-scale exploratory efforts have transformed entire fields. The Sloan Digital Sky Survey mapped millions of galaxies, revealing unexpected structures in the universe. CRISPR technology was discovered through exploratory studies of bacterial immune systems, not because anyone was looking for a gene-editing tool. The explosive growth of machine learning has been fueled by massive exploratory datasets that revealed patterns no human could have hypothesized in advance.

Why Exploration Matters Now More Than Ever

We’re living in an era of unprecedented technological capability. We can sequence genomes for hundreds of dollars, image living brains in real time, and collect environmental data from every corner of the planet. These tools make exploration more powerful and more necessary than ever.

Exploratory science excels at revealing what we don’t know we don’t know. When you’re testing a hypothesis, you’re limited by your current understanding. You can only ask questions you’re smart enough to think of. Exploratory approaches let the data surprise you, pointing toward phenomena you never imagined.

This is particularly crucial in complex systems—ecosystems, brains, economies, climate—where interactions are so intricate that predicting specific outcomes is nearly impossible. In these domains, careful observation and pattern recognition often outperform narrow hypothesis testing.

The Complementary Relationship

None of this diminishes the importance of hypothesis-driven science. Testing specific predictions remains essential for establishing causation, validating mechanisms, and building reliable knowledge. The most powerful scientific progress often comes from the interplay between exploration and hypothesis testing.

Exploratory work generates observations and patterns that inspire hypotheses. Hypothesis testing validates or refutes these ideas, often raising new questions that require more exploration. It’s a virtuous cycle, not a competition.

Overcoming the Bias

Despite its value, exploratory science often faces skepticism. It’s sometimes dismissed as “fishing expeditions” or “stamp collecting”—mere data gathering without intellectual rigor. This bias shows up in grant reviews, promotion decisions, and journal publications.

This prejudice is both unfair and counterproductive. Good exploratory science requires tremendous rigor in experimental design, data quality, and analysis. It demands sophisticated statistical approaches to avoid false patterns and careful validation of findings. The difference isn’t in rigor but in starting point.

We need funding mechanisms that support high-quality exploratory work without forcing researchers to shoehorn discovery-oriented projects into hypothesis-testing frameworks. We need to train scientists who can move fluidly between both modes. And we need to celebrate exploratory breakthroughs with the same enthusiasm we reserve for hypothesis confirmation.

Looking Forward

As science tackles increasingly complex challenges—understanding consciousness, predicting climate change, curing cancer—we’ll need every tool in our methodological toolkit. Exploratory science helps us map unknown territory, revealing features of reality we didn’t know existed. Hypothesis-driven science helps us understand the mechanisms behind what we’ve discovered.

Both approaches are essential. Both require creativity, rigor, and insight. And both deserve recognition as legitimate, valuable paths to understanding our world.

The next time you hear about a massive dataset, a comprehensive catalog, or a systematic survey, don’t dismiss it as “just descriptive.” Remember that today’s exploration creates the foundation for tomorrow’s breakthroughs. In science, as in geography, you can’t know where you’re going until you know where you are.

How America Built Its Science Foundation Before the War Changed Everything

Most people think America’s scientific dominance began with the Manhattan Project or the space race. That’s not wrong, but it misses the real story. By the time World War II arrived, we’d already spent decades quietly building the infrastructure that would make those massive wartime projects possible.

The foundation was laid much earlier, and in ways that might surprise you. What’s more surprising is how close that foundation came to crumbling—and what we nearly lost along the way.

The Land-Grant Revolution

The story really starts in 1862 with the Morrill Act—arguably the most important piece of science policy legislation most Americans have never heard of. While the Civil War was tearing the country apart, Congress was simultaneously creating a network of universities designed to teach “agriculture and the mechanic arts.”

This wasn’t just about farming. The land-grant universities were America’s first systematic attempt to connect higher education with practical problem-solving. Schools like Cornell, Penn State, and the University of California weren’t just teaching Latin and philosophy—they were training engineers, studying crop diseases, and developing new manufacturing techniques.

But here’s what’s remarkable: this almost didn’t happen. The 1857 version of Morrill’s bill faced heavy opposition from Southern legislators, who viewed it as federal overreach, and from Western states, which objected to the population-based allocation formula. It passed both houses by narrow margins, only to be vetoed by President Buchanan. The legislation succeeded in 1862 primarily because Southern opponents had left Congress to join the Confederacy.

Private Money Fills a Critical Gap

What’s fascinating—and telling—is how much of early American scientific investment came from private philanthropy rather than government funding. The industrial fortunes of the late 1800s flowed into research, but this created a system entirely dependent on individual wealth and personal interest.

The Carnegie Institution of Washington, established in 1902, essentially functioned as America’s first NSF decades before the actual NSF existed. Andrew Carnegie’s $10 million endowment was enormous—equal to Harvard’s entire endowment and vastly more than what all American universities spent on basic research combined. The Rockefeller Foundation transformed medical education and research on a similar scale.

But imagine if Carnegie had been less interested in science, or if the robber baron fortunes had flowed entirely into art collections and European estates instead. This mixed ecosystem worked, but it was inherently unstable. When economic conditions tightened, private funding could vanish. When wealthy patrons died, research priorities shifted with their successors’ interests.

Corporate Labs: Innovation with Built-In Vulnerabilities

By the 1920s, major corporations were establishing research laboratories. General Electric’s lab, founded in 1900 as the first industrial research facility in America, became the model. Bell Labs, created in 1925 through the consolidation of AT&T and Western Electric research, would later become legendary for discoveries that shaped the modern world.

These corporate labs solved an important problem, bridging the gap between scientific discovery and commercial application. But they also created troubling dependencies. Research priorities followed profit potential, not necessarily national needs. Breakthrough discoveries in fundamental physics might be abandoned if they didn’t promise immediate commercial returns.

More concerning, these labs were vulnerable to economic cycles. During the Great Depression, even well-established research programs faced significant budget cuts and staffing reductions.

Government Stays Reluctantly on the Sidelines

Through all of this, the federal government remained a hesitant, minor player. The National Institute of Health, created in 1930 with a modest $750,000 for building construction, was one of the few exceptions—and even then, the federal government rarely funded medical research outside its own laboratories before 1938.

Most university science departments survived on whatever they could patch together from donors, industry partnerships, and minimal federal grants. The system worked, but precariously. During the Depression, university budgets were slashed, enrollment dropped, and research programs had to be scaled back or eliminated. The National Academy of Sciences saw its operating and maintenance funds drop by more than 15 percent each year during the early 1930s.

The Foundation That Held—Barely

By 1940, America had assembled what looked like a robust scientific infrastructure, but it was actually a precarious arrangement held together by fortunate timing and individual initiative. Strong universities teaching practical skills, generous private funding that could shift with economic conditions, corporate labs vulnerable to business cycles, and minimal federal involvement.

When the war suddenly demanded massive scientific mobilization, the infrastructure held together long enough to support the Manhattan Project, radar development, and other crucial innovations. But it was a closer thing than most people realize. The Depression had already demonstrated the system’s vulnerabilities—funding cuts, program reductions, and the constant uncertainty that came with depending on private largesse.

What We Nearly Lost

Looking back, what’s remarkable isn’t just how much America invested in science before 1940, but how easily much of it could have been lost to economic downturns, shifting private interests, or political opposition. That decentralized mix of public and private initiatives created innovation capacity, but it also created significant vulnerabilities.

The war didn’t just expand American science—it revealed how unstable our previous funding system had been and demonstrated what sustained, coordinated investment could accomplish. The scientific breakthroughs that defined the next half-century emerged not from the patchwork system of the 1930s, but from the sustained federal commitment that followed.

Today’s scientific leadership isn’t an accident of American ingenuity. It’s the direct result of lessons learned from a system that worked despite its fragility—and the decision to build something more reliable in its place. The question is whether we remember why that change was necessary, and what we might lose if we return to depending on unstable, decentralized funding for our most critical research needs.

Post lunch conversation with a colleague: trust in science

Yesterday, I had lunch with a colleague at a favorite BBQ spot in Arlington. Both of us work in science communication, so naturally our conversation drifted to the question that’s been nagging at many of us: why has public trust in scientific institutions declined in recent years? By the time we finished our (surprisingly healthy) food, we’d both come to the same conclusion—the current way scientists communicate with the public might be contributing to the problem.

From vaccine hesitancy to questions about research reliability, the relationship between science and society has grown more complex. To understand this dynamic, we need to examine not only what people think about science but also how different cultures approach the validation of knowledge itself.

Harvard scholar Sheila Jasanoff offers valuable insights through her concept of “civic epistemologies”—the cultural practices societies use to test and apply knowledge in public decision-making. These practices vary significantly across nations and help explain why scientific controversies unfold differently in different places.

American Approaches to Knowledge Validation

Jasanoff’s research identifies distinctive features of how Americans evaluate scientific claims:

Public Challenge: Americans tend to trust knowledge that has withstood open debate and questioning. This reflects legal traditions where competing arguments help reveal the truth.

Community Voice: There’s a strong expectation that affected groups should participate in discussions about scientific evidence that impacts them, particularly in policy contexts.

Open Access: Citizens expect transparency in how conclusions are reached, including access to underlying data and reasoning processes.

Multiple Perspectives: Rather than relying on single authoritative sources, Americans prefer hearing from various independent institutions and experts.

How This Shapes Science Communication

These cultural expectations help explain some recent communication challenges. When public health recommendations changed during the COVID-19 pandemic, this appeared to violate expectations for thorough prior testing of ideas. Similarly, when social platforms restricted specific discussions, this conflicted with preferences for open debate over gatekeeping.

In scientific fields like neuroscience, these dynamics have actually driven positive reforms. When research reliability issues emerged, the American response emphasized transparency solutions: open data sharing, study preregistration, and public peer review platforms. Major funding agencies now require data management plans that promote accountability.

Interestingly, other countries have addressed similar scientific quality concerns in different ways. European approaches have relied more on institutional reforms and expert committees, while American solutions have emphasized broader participation and transparent processes.

Digital Platforms and Knowledge

Online platforms have both satisfied and complicated American expectations. They provide the transparency and diverse voices people want, but the sheer volume of information makes careful evaluation difficult. Platforms like PubPeer enable post-publication scientific review that aligns with cultural preferences for ongoing scrutiny; however, the same openness can also amplify misleading information.

Building Better Science Communication

Understanding these cultural patterns suggests more effective approaches:

Acknowledge Uncertainty: Present science as an evolving process rather than a collection of final answers. This matches realistic expectations about how knowledge develops.

Create Meaningful Participation: Include affected communities in research priority-setting and policy discussions, following successful models in patient advocacy and environmental research.

Increase Transparency: Share reasoning processes and data openly. Open science practices align well with cultural expectations for accountability.

Recognize Broader Concerns: Understand that skepticism often reflects deeper questions about who participates in knowledge creation and whose interests are served.

Moving Forward

Public skepticism toward science isn’t simply a matter of misunderstanding—it often reflects tensions between scientific institutions and cultural expectations about legitimate authority. Rather than dismissing these expectations, we might develop communication approaches that honor both scientific rigor and democratic values.

The goal isn’t eliminating all skepticism, which serves essential functions in healthy societies. Instead, the goal is to channel critical thinking in ways that strengthen our collective ability to address complex challenges that require scientific insight.

Zero-based budgeting experiment: US STEM

At research universities, zero-based budgeting is rare. It means starting from zero expenditures and justifying each budget line to build the annual budget. It is frowned upon for long-term R&D projects for the obvious reason that it is hard to predict which discoveries will arrive, let alone which could be exploited to produce a measurable outcome.
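
For readers who have never seen the mechanics, here is a minimal sketch contrasting the two budgeting modes. The line items and numbers are invented purely for illustration.

```python
# Minimal contrast between incremental and zero-based budgeting.
# Line items and amounts are invented for illustration.

last_year = {"salaries": 900_000, "equipment": 250_000, "travel": 40_000}

# Incremental budgeting: carry last year's lines forward with a growth factor.
incremental = {line: amount * 1.03 for line, amount in last_year.items()}

# Zero-based budgeting: every line starts at zero and survives only
# with a fresh justification for the coming year.
justified = {
    "salaries": 870_000,          # re-justified against current staffing
    "equipment": 0,               # no approved justification this cycle
    "core_facility_fee": 60_000,  # new line, argued from scratch
}
zero_based = {line: amt for line, amt in justified.items() if amt > 0}

print(f"incremental total: {sum(incremental.values()):,.0f}")  # 1,225,700
print(f"zero-based total:  {sum(zero_based.values()):,.0f}")   # 930,000
```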

Nevertheless, the process is worth considering as a way to rethink the entire US STEM/biomedical enterprise from scratch.

Why Research Resists Zero-Based Budgeting

The resistance to zero-based budgeting in research environments stems from legitimate concerns. Academic institutions seldom adopt the model because, as noted above, scientific discovery is inherently unpredictable; because zero-based budgets demand significant time and labor from units and university administrators to prepare; and because the model can seriously encumber long-term planning.

Research requires substantial upfront investments in equipment, facilities, and human capital that only pay dividends over extended periods. The peer review system, while imperfect, has evolved as a way to allocate resources based on scientific merit rather than easily quantifiable metrics.

The Case for a National Reset

Despite these concerns, there’s a compelling argument for applying zero-based budgeting principles to the broader American STEM enterprise. Not at the individual project level, but at the systemic level—questioning fundamental assumptions about how we organize, fund, and conduct research.

Addressing Systemic Inefficiencies

Our current research ecosystem has evolved organically over decades, creating layers of bureaucracy, redundant administrative structures, and misaligned incentives. Universities compete for the same federal funding while maintaining parallel administrative infrastructures. A zero-based approach would force examination of whether these patterns serve our ultimate goals of scientific progress and national competitiveness.

Responding to Global Competition

The US still retains a healthy lead, having spent $806 billion on R&D, both public and private, in 2021, but China is rapidly closing the gap. The Chinese government recently announced a massive $52 billion investment in research and development for 2024, a 10% surge over the previous year, while the U.S. cut its total investment in research and development for fiscal 2024 by 2.7%.

China has significantly increased its R&D investment, now contributing over 24 percent of total global funding according to the Congressional Research Service. And while the U.S. total remains strong, CRS data show that the American share of total global expenditure dropped to just under 31 percent in 2020, down from nearly 40 percent in 2000.

Realigning with National Priorities

Priorities such as AI, pandemic preparedness, cybersecurity, and advanced manufacturing require coordinated, interdisciplinary approaches that don’t always fit neatly into existing departmental structures or funding categories. Starting from zero would allow us to design funding mechanisms that better align with strategic priorities while preserving fundamental research.

A Practical Framework

Implementing zero-based budgeting for the STEM enterprise could be approached systematically:

Phase 1: Comprehensive Mapping. Begin by mapping the current research ecosystem—funding flows, personnel, infrastructure, outputs, and outcomes (a minimal data sketch follows this list). This alone would be valuable, as we currently lack a complete picture of resource allocation.

Phase 2: Goal Setting. Involve stakeholders in defining desired outcomes. What should American STEM research accomplish in the next 10-20 years? How do we balance basic research with applied research?

Phase 3: Pilot Implementation. Rather than overhauling everything at once, implement zero-based approaches in specific domains or regions to identify what works while minimizing disruption.
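
As promised above, here is a minimal sketch of what a Phase 1 ecosystem map could look like as a data structure. It is a thought experiment only: the funders, recipients, categories, and dollar amounts are invented placeholders, not real data.

```python
# Hypothetical sketch of a Phase 1 research-ecosystem map.
# Funders, recipients, categories, and amounts are invented placeholders.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class FundingFlow:
    funder: str         # e.g., a federal agency
    recipient: str      # e.g., a university or national lab
    category: str       # e.g., "basic", "applied", "infrastructure"
    amount_musd: float  # millions of dollars

flows = [
    FundingFlow("Agency A", "University X", "basic", 12.0),
    FundingFlow("Agency A", "University Y", "infrastructure", 30.0),
    FundingFlow("Agency B", "University X", "applied", 8.5),
]

# Aggregating by category is the kind of complete picture of resource
# allocation that the mapping phase would make possible at national scale.
totals = defaultdict(float)
for flow in flows:
    totals[flow.category] += flow.amount_musd

for category, total in sorted(totals.items()):
    print(f"{category:>15}: ${total:.1f}M")
```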

Potential Benefits and Risks

A thoughtful application could yield improved efficiency by eliminating redundant processes, better alignment with national priorities, enhanced collaboration across institutional silos, and increased agility to respond to emerging threats.

However, any major reform involves significant risks. There is a danger of disrupting productive research programs, alienating talented researchers, or creating unintended bureaucratic complications. The political and logistical challenges would be immense.

Moreover, China has now surpassed the US in “STEM talent production, research publications, patents, and knowledge- and technology-intensive manufacturing,” suggesting that while spending matters, other factors are equally important.

Preserving What Works

Zero-based budgeting shouldn’t mean discarding what has made American research successful. The peer review system has generally identified quality research. The tradition of investigator-initiated research has fostered creativity and serendipitous discoveries. The partnership between universities, government, and industry has created a dynamic innovation ecosystem.

The goal isn’t elimination but examination of whether these elements are being implemented most effectively.

Conclusion

The idea of applying zero-based budgeting to American STEM research deserves serious consideration. By questioning assumptions, eliminating inefficiencies, and realigning priorities, we can create a research enterprise better positioned to tackle 21st-century challenges.

The process itself—careful examination of how we conduct and fund research—could be as valuable as specific reforms. In an era when China, based on current enrollment patterns, is projected to produce more than 77,000 STEM PhD graduates per year by 2025, nearly double the approximately 40,000 in the United States, the ability to thoughtfully reimagine our institutions may be our greatest asset.

The question isn’t whether we can afford to undertake such a comprehensive review. The question is whether we can afford not to.

Bold Ventures in Science: NSF’s NEON and NIH’s BRAIN Initiative

My favorite projects…

As loyal readers know, these are my two favorite science initiatives. They stand out as beacons of progress: the National Science Foundation’s National Ecological Observatory Network (NEON) and the National Institutes of Health’s Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. These groundbreaking endeavors showcase the commitment of U.S. science agencies to tackling complex, large-scale challenges that could revolutionize our understanding of the world around us and within us.

NSF’s NEON: A Continental-Scale View of Ecology

Imagine having a window into the ecological processes of an entire continent. That’s precisely what NEON aims to provide. Initiated in 2011, this audacious project is creating a network of ecological observatories spanning the United States, including Alaska, Hawaii, and Puerto Rico.

Yes, NEON has faced its share of challenges. The project’s timeline and budget have been adjusted since its inception, growing from an initial estimate of $434 million to around $469 million, with completion delayed from 2016 to 2019. But let’s be honest: when did you last try to build a comprehensive ecological monitoring system covering an entire continent? These adjustments reflected the project’s complexity and the learning curve in such a pioneering endeavor.

The payoff? NEON is now collecting standardized ecological data across 81 field sites from Hawaii to Puerto Rico and in between. This massive time series in some 200 dimensions will allow scientists to analyze and forecast ecological changes over decades. From tracking the impacts of climate change to understanding biodiversity shifts, NEON provides invaluable insights that could shape environmental policy and conservation efforts for future generations.
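
For readers who want to poke at the data themselves, NEON exposes its catalog through a public data API. Here is a minimal sketch in Python; I am assuming the v0 sites endpoint and the response fields shown, so treat those names as assumptions and check https://data.neonscience.org/data-api for the authoritative spec.

```python
# Minimal sketch: list NEON field sites via the public data API.
# The endpoint and JSON fields below are assumptions based on the v0 API;
# consult https://data.neonscience.org/data-api before relying on them.
import requests

resp = requests.get("https://data.neonscience.org/api/v0/sites", timeout=30)
resp.raise_for_status()
sites = resp.json()["data"]

print(f"{len(sites)} NEON field sites")
for site in sites[:5]:  # peek at the first few
    print(site.get("siteCode"), "-", site.get("siteDescription"))
```

Every site in that listing produces the same standardized measurements, which is exactly what makes a continental-scale time series possible.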

NIH’s BRAIN Initiative: Decoding Our Most Complex Organ

Meanwhile, the NIH’s BRAIN Initiative is taking on an equally monumental task: mapping the human brain. Launched in 2013, this project is aptly named, as it requires a lot of brains to understand… well, brains.

With annual funding that has grown from an initial $100 million to over $500 million, the BRAIN Initiative is a testament to the NIH’s commitment to unraveling the mysteries of neuroscience. Mapping all 86 billion neurons in the human brain by 2026 might seem a tad optimistic. But I’m increasingly impressed with our progress, and I am hopeful we’ll be able to get some meaningful statistics about variability across individuals.

The initiative has already led to the development of new technologies for studying brain activity, potential treatments for conditions like Parkinson’s disease, and insights into how our brains process information. It’s like a real-life adventure into the final frontier, except instead of outer space, we’re exploring the inner space of our skulls.

The Challenges: More Feature Than Bug

Both NEON and the BRAIN Initiative have faced obstacles, from budget adjustments to timeline extensions. But in the world of cutting-edge science, these challenges are often where the real learning happens. They’ve pushed scientists to innovate, collaborate, and think outside the box (or skull, in the case of BRAIN).

These projects have also created unique opportunities for researchers to develop new skills. Grant writing for these initiatives isn’t just an administrative hurdle; it’s a chance to think big and connect individual research to grand, overarching goals. It’s turning scientists into visionaries, and isn’t that worth a few late nights and extra cups of coffee?

Conclusion: Big Science, Bigger Possibilities

NEON and the BRAIN Initiative represent more than just large-scale scientific projects. They’re bold statements about the value of basic research and the importance of tackling complex, long-term challenges. They remind us that some questions are too big for any single lab or institution to answer alone.

As these projects evolve and produce data, they’re not just advancing our understanding of ecology and neuroscience. They’re also creating models for conducting science at a grand scale, paving the way for future ambitious endeavors.

So here’s to the scientists, administrators, and visionaries behind NEON and the BRAIN Initiative. They’re showing us that with enough creativity, persistence, and, yes, funding, we can tackle some of the biggest questions in science. And who knows? The next breakthrough in saving our planet or understanding consciousness could be hidden in the data they’re collecting right now.

How to reform NIH…

Recently I’ve mostly written in this vein about the NSF, but I also spent six years at the NIH as a staff fellow in the intramural program (the biomedical research center in Bethesda, Maryland). When most folks think about the NIH, they are not really focusing on the intramural program. Rather, it’s the extramural program, which gives out grant awards to biomedical researchers at US colleges and medical centers, that gets the attention. And I guess that’s fine, because the extramural program represents about 90% of the NIH budget.

But if I were going to magically reform the agency, I would focus on the intramural program, because it has so much potential: an annual budget north of $4B, America’s largest research medical center, and thousands of young researchers from all over the world. If Woods Hole is a summer nexus for the life sciences, the NIH Bethesda campus is that thing on steroids, year round.

The special sauce of the intramural program is that ideas can become experiments and then discoveries without the usual intermediate step of writing a proposal and waiting to see if it gets funded. When I was at NIH, I could literally conceive of a new experiment, order the equipment and reagents, and publish the results several months later. Hence, the intramural program has the structure in place to be a major science accelerator.

But for some reason, when we think of such science accelerators, we generally consider private institutions like HHMI, the Allen Institutes, and perhaps the Institute for Advanced Study in Princeton. What about NIH? On the criterion of critical mass, it dwarfs those places.

To my mind, the problem lies in NIH’s ‘articles of confederation’ nature: it’s really 27 (or so) different Institutes and other units that are largely independent (especially the NCI), with relatively weak central leadership. This loose confederation plays out not only on the Hill or in the awarding of extramural grants, but crucially also on the Bethesda campus, where intramural program directors rule fiefdoms more insular than academic units on a college campus. And this weak organizational architecture works directly against the science-accelerator advantage I described above.

So here’s a big idea: let’s make the intramural program, in effect, its own NIH institute, and have Congress authorize and fund it separately as a high-risk, high-payoff biomedical research program for the country. Does that sound like ARPA-H? Oops. Well, then maybe we should just give the Bethesda campus to ARPA-H.