“What Grant Reviewers Actually Look For (and What They Ignore)”

A close colleague of mine at a major US research university begins the process of preparing a grant proposal by creating something he calls a “storyboard”. When I was growing up in LA, the concept of a storyboard was very familiar to me. Many of my high school friends at the time aspired to careers in the locally dominant entertainment industry. The storyboard, developed at the Walt Disney studio, uses pictures to visualize a movie’s plot flow before production—often even before a screenplay is complete. In the LA movie business, you could look at a storyboard and grasp right away what a movie was about.

Back to my colleague who uses storyboards to create grant proposals—his key idea is that you’re done making the storyboard when someone outside the group can come in, look at it, and come away with a good understanding of what the grant is all about. If the storyboard is coherent, then it’s easy to make the proposal coherent as well. Further, the storyboard often gets reused, in modified form, as the grant’s central graphic. Yes, a picture is worth several thousand words.

My colleague is onto something profound about how grant review works across all funders, including those in the private sector. But for this issue of Science Policy Insider, we’re going to consider the agency where I headed up Biological Sciences: the NSF. What about NIH, you may ask? Many of the principles here apply to both agencies. But here, we’re going to focus, laser-like, on the National Science Foundation, even as it undergoes drastic changes.

The Brutal Reality of NSF Panel Review

After sitting through too many grant panels at NSF, I can tell you this: most proposals get 15-20 minutes of discussion time in a panel that’s reviewing 30-50 proposals over three days. Your carefully crafted 15-page research plan? The primary reviewer read it thoroughly. The other two panelists skimmed it. Everyone else glanced at the summary.

This isn’t because reviewers are lazy. They’re exhausted, brilliant researchers reading proposals outside their immediate expertise, often late at night, while also worrying about their own grants, their trainees, and the referee reports they owe on other people’s papers.

The storyboard approach works because it acknowledges this reality: reviewers are looking for a straightforward narrative they can grasp quickly and defend to the panel.

What Actually Happens in Review Panels

Here’s how it typically unfolds:

9:00 AM, Day Two of panel: The primary reviewer presents your proposal. They have 5 minutes to summarize your aims, approach, and why it matters. If they struggle to articulate your story coherently, you’re in trouble—not because your proposed science is bad, but because they can’t effectively advocate for you.

The secondary and tertiary reviewers add their perspectives. Then the panel discusses. The program officers watch for enthusiasm, coherence of the argument, and whether anyone is deeply opposed.

The proposals that succeed have champions—reviewers who “get it” immediately and can explain why it matters to others. The storyboard method is designed to make that kind of championing easy.

What Reviewers Actually Look For

After watching this process play out thousands of times, here’s what I learned reviewers truly care about:

1. Can I explain this to the panel in 3 minutes?

If your research plan requires a flowchart to understand, the primary reviewer will simplify it—possibly incorrectly. Better to give them the simplified version yourself.

2. Is the question worth answering?

Not “is this interesting?” but “will anyone care about the answer?” Reviewers need to justify spending taxpayer money. Give them that justification explicitly.

3. Can this person actually do this?

No matter what is written down in the solicitation, preliminary data matters enormously, but not for the reason applicants think. It’s not about proving the hypothesis—it’s about proving you have the technical capability and haven’t missed an obvious problem.

4. Is this the right approach?

Reviewers are surprisingly forgiving about whether your specific hypothesis is correct. They’re much less forgiving about whether you’re using appropriate methods or have thought through alternatives.

5. Will this move the field forward?

Notice: not “revolutionize” or “transform”—just move forward. Incremental progress from a well-designed study beats a transformative idea with unclear methods. But doesn’t the call state that the proposed work should change the world? Sure, but from a practical standpoint, what counts for the reviewers is steady progress. And here’s the tricky part: while steady progress is key for the reviewers, transformative potential really does matter to the program officers who make the penultimate decision. So, a balance is necessary.

What Reviewers Ignore (Even Though You Spent Weeks on It)

The extensive literature review: They skim it to see if you know the field. The 47 citations demonstrating your comprehensive knowledge? They checked that you cited the key papers and moved on.

Your detailed budget justification: Unless something looks wildly off, reviewers assume you know what your research costs. The line-by-line explanation of why you need that particular microscope? Skimmed.

Your publication list: They look at: Do you publish in good journals? Are you productive? Have you published on this topic before? That’s it. The distinction between your 47th and 52nd paper doesn’t matter.

The broader impacts section that you agonized over: I feel guilty about this because I’ve often harped on broader impacts as a central criterion. Truth: most reviewers read this quickly to verify you addressed it competently. Unless it’s either exceptional or terrible, it rarely drives funding decisions. And these days, broader impacts means how the work will benefit all American citizens (think public health) or US national security.

The Elements That Actually Drive Decisions

Clarity of the research goals: Can the reviewer recite your three main questions without looking at the proposal? If not, rewrite.

Logical flow: Does each aim build on the previous one? Or are they three unrelated projects stapled together? Reviewers can tell.

Feasibility signals: Preliminary data, established collaborations, access to necessary resources, realistic timeline. These say, “this person will actually complete this work.”

Positioning: Is this filling a real gap, or are you slightly tweaking someone else’s approach? Reviewers want to fund work that moves us somewhere new, even if incrementally.

The writing quality: Clear, direct prose suggests clear thinking. Dense, jargon-heavy writing suggests unclear thinking (even if that’s unfair).

The Most Common Mistake

Applicants try to impress reviewers with complexity and comprehensiveness. They want to show they’ve thought of everything, considered every alternative, read every paper.

But reviewers are looking for clarity and confidence. They want to understand quickly what you’re proposing and why it matters. They want to feel confident you’ll succeed.

The storyboard method works because it forces simplicity. If you can’t draw a simple picture of your proposal that an outsider immediately understands, you don’t have a fundable story yet.

But Wait, There’s More

As hinted at above, at NSF that panel review is strictly advisory. I’ve personally seen proposals with excellent reviews get declined—and the reverse. The key decisional person? That’s the cognizant program officer for the solicitation. These days, there’s an additional vetting step that looks for alignment with the Administration’s political goals, but that’s a topic for a future newsletter.

What This Means for Your Proposal

Before you write a single word:

  • Can you explain your project in three sentences?
  • Can someone outside your subfield understand why it matters?
  • Do you have a clear narrative arc from question to approach to impact?

If not, you’re not ready to write. You’re ready to storyboard.

Build the simple, clear story first. Then elaborate carefully, making sure every detail serves that core narrative.

Reviewers are smart, busy people trying to identify good science under time pressure. Don’t make them work to understand your brilliance. Give them a story they can grasp, defend, and champion.

That’s what my colleague understood. And based on his funding success rate, the reviewers appreciate it.

From Andy Kessler at WSJ: Beyond the R1 University

My undergraduate class has been considering this problem as part of their midterm paper, here. My reading for some time has been that the authors of Project 2025 are aware of Science: The Endless Frontier and reject its usefulness for today. Kessler’s op-ed in today’s WSJ opens a window into an alternative that represents a scaled-up version of what existed before the Second World War in institutions such as Bell Labs and the Carnegie Institution of Washington.

Great analysis of the current financial outlook for US academic institutions…

It’s here, on Substack.

About the authors:

“Finding Equilibrium” is coauthored by Jay Akridge, Professor of Agricultural Economics, Trustee Chair in Teaching and Learning Excellence, and Provost Emeritus at Purdue University and David Hummels, Distinguished Professor of Economics and Dean Emeritus at the Daniels School of Business at Purdue. Research assistance on this post was provided by Yixuan Liu.

The Replication Crisis Is a Market Failure (And We Designed It That Way)

Also published on my newsletter

The replication crisis isn’t a mystery. After presiding over the review of thousands of grants at NSF’s Biological Sciences Directorate, I can tell you exactly why science struggles to reproduce its own findings: we built incentives that reward novelty and punish verification.

A 2016 Nature survey found that over 70% of scientists have failed to reproduce another researcher’s experiments. But this isn’t about sloppy science or bad actors. It’s straightforward economics.


The Researcher’s Optimization Problem

You have limited time and resources. You can either:

  1. Pursue novel findings → potential Nature paper, grant funding, tenure
  2. Replicate someone’s work → maybe a minor publication, minimal funding, colleagues questioning your creativity

The expected value calculation is obvious. Replication is a public good with privatized costs.
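To put rough numbers on that calculation, here is a toy sketch; every probability and payoff below is an invented illustration, not data from any real funding portfolio.

```python
# Toy expected-value comparison for a researcher's next project.
# Every number below is an illustrative assumption, not measured data.

def expected_payoff(p_success, career_value, cost):
    """Expected career payoff: chance of success times its value, minus the effort cost."""
    return p_success * career_value - cost

# Novel project: lower odds of a striking result, but a big payoff if it lands.
novel = expected_payoff(p_success=0.2, career_value=100, cost=10)

# Replication: high odds of a publishable answer, but the field rewards it weakly.
replication = expected_payoff(p_success=0.8, career_value=10, cost=10)

print(f"Novel finding: {novel:+.1f}")       # +10.0
print(f"Replication:   {replication:+.1f}")  # -2.0
```

Under these made-up numbers, the replication project has negative expected career value even though it is far more likely to succeed—which is exactly the point about privatized costs and public benefits.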

How NSF Review Panels Work

At NSF, I watched this play out in every review panel. Proposals to replicate existing work faced an uphill battle. Reviewers—themselves successful researchers who got there by publishing novel findings—naturally favor creative, untested ideas over verification work.

We tried various fixes. Some programs explicitly funded replication studies. Some review criteria emphasized robustness over novelty. But the core incentive remained: breakthrough science gets you the next grant; careful verification doesn’t.

The problem runs deeper than any single agency. Universities want prestigious publications. Journals want citations. Researchers want tenure. Nobody’s optimization function includes “produces reliable knowledge that someone else can build on.”

The Information Market Is Broken

Even when researchers try to replicate, they’re working with incomplete information. Methods sections in papers are sanitized versions of what actually happened in the lab. “Cells were cultured under standard conditions” means something different in every lab. One researcher’s gentle mixing is another’s vigorous shaking.

This information asymmetry makes replication attempts inherently inefficient. You’re trying to reproduce a result while missing critical details that the original researcher might not even realize mattered.

The Time Horizon Problem

NSF grants run 3-5 years. Tenure clocks run 6-7 years. But scientific truth emerges over decades. We’re optimizing for the wrong timescale.

During my time at NSF, I saw brilliant researchers make pragmatic choices: publish something surprising now (even if it might not hold up) rather than spend two years carefully verifying it. That’s not a moral failing—it’s responding rationally to the incentives we created.

What Would Actually Fix This

Make replication profitable:

  • Count verification studies equally with novel findings in grant review and tenure decisions
  • Fund researchers whose job is rigorous replication—make it a legitimate career path
  • Require data and detailed methods sharing as a funding condition, not an afterthought
  • Make failed replications as publishable as successful ones

The challenge isn’t technical. It’s institutional. We designed a market that overproduces flashy results and underproduces reliable knowledge. Until we fix the incentives, we’ll keep getting exactly what we’re paying for.

On Reproducibility: Physics versus Life Sciences


Scientific reproducibility—the ability of researchers to obtain consistent results when repeating an experiment—sits at the heart of the scientific method. During my years at the bench and later as the leader of an Institute, it became clear that not all sciences struggle equally with this fundamental principle. Physics experiments tend to be more reproducible than those in life sciences, where researchers grapple with what many call a “reproducibility crisis.” Understanding why reveals something profound about the nature of these disciplines.

The State of Reproducibility Across Sciences

A 2016 Nature survey of over 1,500 researchers revealed the scope of the challenge: more than 70% of scientists have failed to reproduce another researcher’s experiments. The rates varied by field—87% of chemists, 77% of biologists, and 69% of physicists and engineers reported such failures. Notably, 52% of respondents agreed that a significant reproducibility crisis exists.

These numbers tell us something important: reproducibility challenges exist across all scientific disciplines, but they manifest with different severity. Physics hasn’t been immune to these issues, but it has been affected less severely than fields like psychology, clinical medicine, and biology. This isn’t a story of success versus failure—it’s a story of different sciences confronting different kinds of complexity.

The Physics Advantage

When a physicist measures the speed of light or the charge of an electron, they’re studying fundamental constants of nature. These values don’t change based on the lab, the researcher, or the day of the week. A particle accelerator in Geneva produces the same collision energies as one in Illinois. The laws governing pendulum motion work identically whether you’re in Cambridge or Kyoto.

This consistency extends beyond fundamental constants. Physics experiments typically involve controlled, isolated systems where researchers can eliminate or account for confounding variables. A physics experiment might study a single particle in a vacuum, far removed from the messy complexity of the real world. Precise measurement tools, refined over centuries, allow astonishing accuracy. NSF’s LIGO, for instance, can detect gravitational waves by measuring changes smaller than one ten-thousandth the width of a proton—equivalent to noticing a hair’s-width change in the distance to the nearest star. The centuries of theoretical understanding that physics has developed make the field less susceptible to reproducibility failures.

The Life Sciences Challenge

Life sciences researchers face a fundamentally different landscape. They’re not studying isolated particles obeying immutable laws; they’re investigating complex, adaptive systems shaped by evolution, environment, and chance.

Consider a seemingly simple experiment: testing how a drug affects cancer cells. Those cells aren’t uniform entities like electrons. Research has revealed extensive genetic variation across supposedly identical cancer cell lines. The same cell line obtained from different sources can show staggering differences—studies have found that at least 75% of compounds that strongly inhibit some strains of a cell line are completely inactive in others. Each cell line has accumulated unique mutations through genetic drift as they’re independently passaged in different laboratories.

The cells’ behavior changes based on how many times they’ve been cultured, what nutrients they receive, even the material of the culture dish. Research has documented profound variability even in highly standardized experiments, with factors like cell density, passage number, temperature, and medium composition all significantly affecting results. The researcher’s technique in handling the cells matters. Countless variables play roles that are difficult or impossible to fully control.

This complexity manifests in several ways:

Biological variability is the norm, not the exception. No two mice are identical, even if they’re genetically similar. Human patients are wildly variable. A treatment that works brilliantly for one person may fail completely for another with the “same” disease.

Emergent properties mean that biological systems exhibit behaviors that can’t be predicted simply by understanding their components. You can’t predict consciousness by studying individual neurons, just as you can’t predict ecosystem dynamics by studying single organisms.

Context dependence is paramount. A gene doesn’t have a single function—its effects depend on the organism, developmental stage, tissue type, and environmental conditions. The same protein can play entirely different roles in different contexts.

Reframing the “Crisis”

It’s worth questioning whether “crisis” is the right word for what’s happening in life sciences. Some researchers argue that the apparent reproducibility problem may be partly a statistical phenomenon. When fields explore bold, uncertain hypotheses—as life sciences often do—a certain rate of non-replication is expected and even healthy. A hypothesis that’s unlikely to be true a priori may still test positive, and subsequent studies revealing the truth represent science’s self-correcting mechanisms at work rather than a failure.
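To see why, here is a minimal back-of-the-envelope calculation, assuming an illustrative 10% prior that a bold hypothesis is true, 80% statistical power, and the conventional 5% false-positive rate—none of these are measured values for any particular field.

```python
# How many "positive" findings are actually true when bold hypotheses are rarely correct?
# Illustrative assumptions: 10% prior, 80% power, 5% false-positive rate.

prior = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80   # chance a true effect yields a positive result
alpha = 0.05   # chance a null effect yields a (false) positive result

true_positives = prior * power
false_positives = (1 - prior) * alpha
ppv = true_positives / (true_positives + false_positives)

print(f"Positive predictive value: {ppv:.0%}")  # about 64%
```

Under those assumptions, roughly a third of positive results would be false leads even if every experiment were executed flawlessly—non-replication as a statistical expectation, not evidence of misconduct.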

The complexity of biological systems means that two experiments may differ in ways researchers don’t fully understand, leading to different results not because of poor methodology but because of hidden variables or context sensitivity. This doesn’t excuse sloppy work, but it does suggest we should expect life sciences to have inherently lower replication rates than physics due to the nature of what’s being studied.

The Methodological Gap

These fundamental differences create practical challenges. Physics papers often provide enough detail for precise replication: “We used a 532nm laser with 10mW power at normal incidence…” Life sciences papers might say “cells were cultured under standard conditions”—but what’s “standard” varies between labs. One lab’s “gentle mixing” is another’s vigorous shaking.

The statistical approaches differ too. Physics can often work with small sample sizes because measurement precision is high and variability is low. Life sciences need larger samples to overcome biological variability, yet often work with small sample sizes due to cost, time, or ethical constraints. This makes studies underpowered and results less reliable.
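A standard sample-size rule makes the gap concrete. The sketch below uses the usual normal-approximation formula for comparing two group means at 5% significance and 80% power; the effect sizes are illustrative, not drawn from any particular study.

```python
# Required sample size per group to compare two means (normal approximation):
#   n ≈ 2 * (z_alpha/2 + z_beta)^2 / d^2,  where d = effect size in units of the standard deviation.

def n_per_group(d, z_alpha=1.96, z_beta=0.84):  # two-sided 5% test, 80% power
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

# "Physics-like" measurement: the effect is twice the size of the noise.
print(round(n_per_group(d=2.0)))   # about 4 samples per group

# "Biology-like" measurement: the effect is a fifth of the natural variability.
print(round(n_per_group(d=0.2)))   # about 392 samples per group
```

When the effect dwarfs the noise, a handful of measurements suffices; when it is a fraction of the natural variability, hundreds of samples are needed—and real studies often cannot afford them.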

Moving Forward

Recognition of reproducibility challenges has sparked essential reforms. Pre-registration of studies, open data sharing, more rigorous statistical practices, and standardized protocols all help. Some fields are developing reference cell lines and model organisms to reduce variability between labs. Journals are implementing checklists to ensure critical details are reported. These efforts are making a real difference.

Yet we must also accept that perfect reproducibility may be neither achievable nor always desirable in life sciences. Biological variability is a feature, not a bug—it’s the raw material of evolution and the reason life adapts to changing environments. The goal shouldn’t be to make biology as reproducible as physics, but to develop methods appropriate for studying complex, variable systems and to be transparent about the limitations and uncertainties inherent in this work.

Understanding the Divide

The reproducibility divide between physics and life sciences doesn’t reflect a failure in the life sciences. It reflects the reality that living systems are profoundly different from the physical systems that physicists study. Both approaches to science are valid and necessary; they’re simply tackling different kinds of problems with appropriately different tools.

Even physics, with all its advantages, sees nearly 70% of researchers unable to reproduce some experiments. The difference is one of degree, not kind. All science involves uncertainty, iteration, and gradual convergence on truth through many studies rather than single definitive experiments.

Understanding these differences helps us appreciate both the elegant precision of physics and the challenging complexity of life. And perhaps most importantly, it reminds us that the scientific method must be flexible enough to accommodate the full diversity of natural phenomena we seek to understand—from the fundamental particles that never change to the living systems that are constantly evolving.

Dan Reed on printing out his thesis (among other things)

A perfect blog entry for those of us who entered the computer era about the same time, here. As an addendum, that thesis format check was also a nightmare at Michigan. One of my fellow graduate students was reduced to printing out the same page of content multiple times in the middle of the night on our advisor’s printer so that he could pass the check before completing the content.

The Unsung Hero: Why Exploratory Science Deserves Equal Billing with Hypothesis-Driven Research

For decades, the scientific method taught in classrooms has followed a neat, linear path: observe, hypothesize, test, conclude. This hypothesis-driven approach has become so deeply embedded in our understanding of “real science” that research proposals without clear hypotheses often struggle to secure funding. Yet some of the most transformative discoveries in history emerged not from testing predictions, but from simply looking carefully at what nature had to show us.

It’s time we recognize exploratory science—sometimes called discovery science or descriptive science—as equally valuable to its hypothesis-testing counterpart.

What Makes Exploratory Science Different?

Hypothesis-driven science starts with a specific question and a predicted answer. You think protein X causes disease Y, so you design experiments to prove or disprove that relationship. It’s focused, efficient, and satisfyingly definitive when it works.

Exploratory science takes a different approach. It asks “what’s out there?” rather than “is this specific thing true?” Researchers might sequence every gene in an organism, catalog every species in an ecosystem, or map every neuron in a brain region. They’re generating data and looking for patterns without knowing exactly what they’ll find.

The Case for Exploration

The history of science is filled with examples where exploration led to revolutionary breakthroughs. One of my lab chiefs at NIH was Craig Venter, famous for his exploratory project: sequencing the human genome. The Human Genome Project didn’t test a hypothesis—it mapped our entire genetic code, creating a foundation for countless subsequent discoveries. Darwin’s theory of evolution emerged from years of cataloging specimens and observing patterns, not from testing a pre-formed hypothesis. The periodic table organized elements based on exploratory observations before anyone understood atomic structure.

More recently, large-scale exploratory efforts have transformed entire fields. The Sloan Digital Sky Survey mapped millions of galaxies, revealing unexpected structures in the universe. CRISPR technology was discovered through exploratory studies of bacterial immune systems, not because anyone was looking for a gene-editing tool. The explosive growth of machine learning has been fueled by massive exploratory datasets that revealed patterns no human could have hypothesized in advance.

Why Exploration Matters Now More Than Ever

We’re living in an era of unprecedented technological capability. We can sequence genomes for hundreds of dollars, image living brains in real time, and collect environmental data from every corner of the planet. These tools make exploration more powerful and more necessary than ever.

Exploratory science excels at revealing what we don’t know we don’t know. When you’re testing a hypothesis, you’re limited by your current understanding. You can only ask questions you’re smart enough to think of. Exploratory approaches let the data surprise you, pointing toward phenomena you never imagined.

This is particularly crucial in complex systems—ecosystems, brains, economies, climate—where interactions are so intricate that predicting specific outcomes is nearly impossible. In these domains, careful observation and pattern recognition often outperform narrow hypothesis testing.

The Complementary Relationship

None of this diminishes the importance of hypothesis-driven science. Testing specific predictions remains essential for establishing causation, validating mechanisms, and building reliable knowledge. The most powerful scientific progress often comes from the interplay between exploration and hypothesis testing.

Exploratory work generates observations and patterns that inspire hypotheses. Hypothesis testing validates or refutes these ideas, often raising new questions that require more exploration. It’s a virtuous cycle, not a competition.

Overcoming the Bias

Despite its value, exploratory science often faces skepticism. It’s sometimes dismissed as “fishing expeditions” or “stamp collecting”—mere data gathering without intellectual rigor. This bias shows up in grant reviews, promotion decisions, and journal publications.

This prejudice is both unfair and counterproductive. Good exploratory science requires tremendous rigor in experimental design, data quality, and analysis. It demands sophisticated statistical approaches to avoid false patterns and careful validation of findings. The difference isn’t in rigor but in starting point.

We need funding mechanisms that support high-quality exploratory work without forcing researchers to shoehorn discovery-oriented projects into hypothesis-testing frameworks. We need to train scientists who can move fluidly between both modes. And we need to celebrate exploratory breakthroughs with the same enthusiasm we reserve for hypothesis confirmation.

Looking Forward

As science tackles increasingly complex challenges—understanding consciousness, predicting climate change, curing cancer—we’ll need every tool in our methodological toolkit. Exploratory science helps us map unknown territory, revealing features of reality we didn’t know existed. Hypothesis-driven science helps us understand the mechanisms behind what we’ve discovered.

Both approaches are essential. Both require creativity, rigor, and insight. And both deserve recognition as legitimate, valuable paths to understanding our world.

The next time you hear about a massive dataset, a comprehensive catalog, or a systematic survey, don’t dismiss it as “just descriptive.” Remember that today’s exploration creates the foundation for tomorrow’s breakthroughs. In science, as in geography, you can’t know where you’re going until you know where you are.

Cellular Digital Twins…

I’ve been intrigued by this technology for some time from the standpoint of cell biology. When a healthy cell undergoes cancer transformation or metastasis, we are looking at a phase-shift type of change where massive complexity comes into play. Cellular digital twins that incorporate the vast amounts of data from technologies such as RNA-seq, multiphoton imaging, and proteomics can now be quite high-fidelity. Simulating such disease-related phenotypic changes may be incredibly useful for providing insights into the cell as a complex adaptive system, while also generating hypotheses for future experiments.

Beyond digital twins as they currently exist is the idea of AI world models, where the worlds are individual cells or a cell network. In that case, I could imagine a cell biologist using natural language to create an experimental initial condition and then simulate the time evolution of the world as an in silico experiment — how cool!
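For flavor, here is a deliberately toy sketch of what such an in silico experiment might look like—a two-gene toggle switch rather than a genuine digital twin, with every equation and parameter invented for illustration.

```python
# Toy "in silico experiment": set an initial condition, then watch a tiny
# two-gene circuit evolve in time. Everything here is invented for illustration;
# a real cellular digital twin would be fit to RNA-seq, imaging, and proteomic data.

def step(x, y, dt=0.01):
    """One Euler step of a mutually repressive two-gene toggle switch."""
    dx = 2.0 / (1.0 + y ** 4) - x  # gene X: repressed by Y, degrades at unit rate
    dy = 2.0 / (1.0 + x ** 4) - y  # gene Y: repressed by X, degrades at unit rate
    return x + dt * dx, y + dt * dy

def simulate(x0, y0, steps=5000):
    """Run the circuit forward from a chosen initial condition."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = step(x, y)
    return x, y

# Two different starting states settle into two different stable states—
# a cartoon of a phenotypic switch.
print(simulate(x0=1.8, y0=0.2))  # ends with X high, Y low
print(simulate(x0=0.2, y0=1.8))  # ends with X low, Y high
```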

Although, as with all digital twins, we need to experimentally test in the real world. Trust, but verify.

How America Built Its Science Foundation Before the War Changed Everything


Most people think America’s scientific dominance began with the Manhattan Project or the space race. That’s not wrong, but it misses the real story. By the time World War II arrived, we’d already spent decades quietly building the infrastructure that would make those massive wartime projects possible.

The foundation was laid much earlier, and in ways that might surprise you. What’s more surprising is how close that foundation came to crumbling—and what we nearly lost along the way.

The Land-Grant Revolution

The story really starts in 1862 with the Morrill Act—arguably the most important piece of science policy legislation most Americans have never heard of. While the Civil War was tearing the country apart, Congress was simultaneously creating a network of universities designed to teach “agriculture and the mechanic arts.”

This wasn’t just about farming. The land-grant universities were America’s first systematic attempt to connect higher education with practical problem-solving. Schools like Cornell, Penn State, and the University of California weren’t just teaching Latin and philosophy—they were training engineers, studying crop diseases, and developing new manufacturing techniques.

But here’s what’s remarkable: this almost didn’t happen. The 1857 version of Morrill’s bill faced heavy opposition from Southern legislators, who viewed it as federal overreach, and from Western states that objected to the population-based allocation formula. It passed both houses by narrow margins, only to be vetoed by President Buchanan. The legislation succeeded in 1862 primarily because Southern opponents had left Congress to join the Confederacy.

Private Money Fills a Critical Gap

What’s fascinating—and telling—is how much of early American scientific investment came from private philanthropy rather than government funding. The industrial fortunes of the late 1800s flowed into research, but this created a system entirely dependent on individual wealth and personal interest.

The Carnegie Institution of Washington, established in 1902, essentially functioned as America’s first NSF decades before the actual NSF existed. Andrew Carnegie’s $10 million endowment was enormous—equal to Harvard’s entire endowment and vastly more than what all American universities spent on basic research combined. The Rockefeller Foundation transformed medical education and research on a similar scale.

But imagine if Carnegie had been less interested in science, or if the robber baron fortunes had flowed entirely into art collections and European estates instead. This mixed ecosystem worked, but it was inherently unstable. When economic conditions tightened, private funding could vanish. When wealthy patrons died, research priorities shifted with their successors’ interests.

Corporate Labs: Innovation with Built-In Vulnerabilities

By the 1920s, major corporations were establishing research laboratories. General Electric’s lab, founded in 1900 as the first industrial research facility in America, became the model. Bell Labs, created in 1925 through the consolidation of AT&T and Western Electric research, would later become legendary for discoveries that shaped the modern world.

These corporate labs solved an important problem, bridging the gap between scientific discovery and commercial application. But they also created troubling dependencies. Research priorities followed profit potential, not necessarily national needs. Breakthrough discoveries in fundamental physics might be abandoned if they didn’t promise immediate commercial returns.

More concerning, these labs were vulnerable to economic cycles. During the Great Depression, even well-established research programs faced significant budget cuts and staffing reductions.

Government Stays Reluctantly on the Sidelines

Through all of this, the federal government remained a hesitant, minor player. The National Institute of Health, created in 1930 with a modest $750,000 for building construction, was one of the few exceptions—and even then, the federal government rarely funded medical research outside its own laboratories before 1938.

Most university science departments survived on whatever they could patch together from donors, industry partnerships, and minimal federal grants. The system worked, but precariously. During the Depression, university budgets were slashed, enrollment dropped, and research programs had to be scaled back or eliminated. The National Academy of Sciences saw its operating and maintenance funds drop by more than 15 percent each year during the early 1930s.

The Foundation That Held—Barely

By 1940, America had assembled what looked like a robust scientific infrastructure, but it was actually a precarious arrangement held together by fortunate timing and individual initiative: strong universities teaching practical skills, generous private funding that could shift with economic conditions, corporate labs vulnerable to business cycles, and minimal federal involvement.

When the war suddenly demanded massive scientific mobilization, the infrastructure held together long enough to support the Manhattan Project, radar development, and other crucial innovations. But it was a closer thing than most people realize. The Depression had already demonstrated the system’s vulnerabilities—funding cuts, program reductions, and the constant uncertainty that came with depending on private largesse.

What We Nearly Lost

Looking back, what’s remarkable isn’t just how much America invested in science before 1940, but how easily much of it could have been lost to economic downturns, shifting private interests, or political opposition. That decentralized mix of public and private initiatives created innovation capacity, but it also created significant vulnerabilities.

The war didn’t just expand American science—it revealed how unstable our previous funding system had been and demonstrated what sustained, coordinated investment could accomplish. The scientific breakthroughs that defined the next half-century emerged not from the patchwork system of the 1930s, but from the sustained federal commitment that followed.

Today’s scientific leadership isn’t an accident of American ingenuity. It’s the direct result of lessons learned from a system that worked despite its fragility—and the decision to build something more reliable in its place. The question is whether we remember why that change was necessary, and what we might lose if we return to depending on unstable, decentralized funding for our most critical research needs.

Transformer models don’t reason: group think

I’ve been enjoying the pushback against the idea that generative AI models have human-like smarts. While I agree that they shouldn’t be flying a plane or even driving an EV, I do think cognitive neuroscience has something to learn from the success of this technology. Here is a link to Friston et al.’s fun paper on the subject.

The main thing is that we (humans) can do this stuff using only 20 watts of electricity. Even inference on the latest AIs is vastly more costly.
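As a back-of-the-envelope comparison—the hardware figures below are rough assumptions, not measurements:

```python
# Rough power comparison, using round-number assumptions rather than measurements.
brain_watts = 20            # often-quoted estimate for the human brain
gpu_watts = 700             # rough board power of one modern datacenter accelerator
gpus_serving_one_model = 8  # assume a large model is served from one multi-GPU node

ratio = (gpu_watts * gpus_serving_one_model) / brain_watts
print(f"One serving node draws roughly {ratio:.0f}x the brain's power budget.")  # ~280x
```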