A perfect blog entry for those of us who entered the computer era at about the same time, here. As an addendum, that thesis format check was also a nightmare at Michigan. One of my fellow graduate students was reduced to printing out the same page multiple times in the middle of the night on our advisor’s printer, just so he could pass the format check before the content was even finished.
The Unsung Hero: Why Exploratory Science Deserves Equal Billing with Hypothesis-Driven Research

For decades, the scientific method taught in classrooms has followed a neat, linear path: observe, hypothesize, test, conclude. This hypothesis-driven approach has become so deeply embedded in our understanding of “real science” that research proposals without clear hypotheses often struggle to secure funding. Yet some of the most transformative discoveries in history emerged not from testing predictions, but from simply looking carefully at what nature had to show us.
It’s time we recognize exploratory science—sometimes called discovery science or descriptive science—as equally valuable to its hypothesis-testing counterpart.
What Makes Exploratory Science Different?
Hypothesis-driven science starts with a specific question and a predicted answer. You think protein X causes disease Y, so you design experiments to prove or disprove that relationship. It’s focused, efficient, and satisfyingly definitive when it works.
Exploratory science takes a different approach. It asks “what’s out there?” rather than “is this specific thing true?” Researchers might sequence every gene in an organism, catalog every species in an ecosystem, or map every neuron in a brain region. They’re generating data and looking for patterns without knowing exactly what they’ll find.
The Case for Exploration
The history of science is filled with examples where exploration led to revolutionary breakthroughs. One of my lab chiefs at NIH was Craig Venter, famous for his exploratory project: sequencing the human genome. The Human Genome Project didn’t test a hypothesis—it mapped our entire genetic code, creating a foundation for countless subsequent discoveries. Darwin’s theory of evolution emerged from years of cataloging specimens and observing patterns, not from testing a pre-formed hypothesis. The periodic table organized elements based on exploratory observations before anyone understood atomic structure.
More recently, large-scale exploratory efforts have transformed entire fields. The Sloan Digital Sky Survey mapped millions of galaxies, revealing unexpected structures in the universe. CRISPR technology was discovered through exploratory studies of bacterial immune systems, not because anyone was looking for a gene-editing tool. The explosive growth of machine learning has been fueled by massive exploratory datasets that revealed patterns no human could have hypothesized in advance.
Why Exploration Matters Now More Than Ever
We’re living in an era of unprecedented technological capability. We can sequence genomes for hundreds of dollars, image living brains in real time, and collect environmental data from every corner of the planet. These tools make exploration more powerful and more necessary than ever.
Exploratory science excels at revealing what we don’t know we don’t know. When you’re testing a hypothesis, you’re limited by your current understanding. You can only ask questions you’re smart enough to think of. Exploratory approaches let the data surprise you, pointing toward phenomena you never imagined.
This is particularly crucial in complex systems—ecosystems, brains, economies, climate—where interactions are so intricate that predicting specific outcomes is nearly impossible. In these domains, careful observation and pattern recognition often outperform narrow hypothesis testing.
The Complementary Relationship
None of this diminishes the importance of hypothesis-driven science. Testing specific predictions remains essential for establishing causation, validating mechanisms, and building reliable knowledge. The most powerful scientific progress often comes from the interplay between exploration and hypothesis testing.
Exploratory work generates observations and patterns that inspire hypotheses. Hypothesis testing validates or refutes these ideas, often raising new questions that require more exploration. It’s a virtuous cycle, not a competition.
Overcoming the Bias
Despite its value, exploratory science often faces skepticism. It’s sometimes dismissed as “fishing expeditions” or “stamp collecting”—mere data gathering without intellectual rigor. This bias shows up in grant reviews, promotion decisions, and journal publications.
This prejudice is both unfair and counterproductive. Good exploratory science requires tremendous rigor in experimental design, data quality, and analysis. It demands sophisticated statistical approaches to avoid false patterns and careful validation of findings. The difference isn’t in rigor but in starting point.
We need funding mechanisms that support high-quality exploratory work without forcing researchers to shoehorn discovery-oriented projects into hypothesis-testing frameworks. We need to train scientists who can move fluidly between both modes. And we need to celebrate exploratory breakthroughs with the same enthusiasm we reserve for hypothesis confirmation.
Looking Forward
As science tackles increasingly complex challenges—understanding consciousness, predicting climate change, curing cancer—we’ll need every tool in our methodological toolkit. Exploratory science helps us map unknown territory, revealing features of reality we didn’t know existed. Hypothesis-driven science helps us understand the mechanisms behind what we’ve discovered.
Both approaches are essential. Both require creativity, rigor, and insight. And both deserve recognition as legitimate, valuable paths to understanding our world.
The next time you hear about a massive dataset, a comprehensive catalog, or a systematic survey, don’t dismiss it as “just descriptive.” Remember that today’s exploration creates the foundation for tomorrow’s breakthroughs. In science, as in geography, you can’t know where you’re going until you know where you are.
Cellular Digital Twins…

I’ve been intrigued with this technology for some time from the standpoint of cell biology. When a healthy cell undergoes cancer transformation or metastasis, we are looking at a phase-shift type of change in which massive complexity comes into play. Cellular digital twins that incorporate the vast amounts of data from technologies such as RNA-seq, multiphoton imaging, and proteomics can now be quite high-fidelity. Simulating such disease-related phenotypic changes may be incredibly useful for providing insights into the cell as a complex adaptive system, while also generating hypotheses for future experiments.
Beyond digital twins as they currently exist is the idea of AI world models, where the worlds are individual cells or a cell network. In that case, I could imagine a cell biologist using natural language to create an experimental initial condition and then simulate the time evolution of the world as an in silico experiment — how cool!
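To make the idea concrete, here is a minimal toy sketch, not a real digital twin, of what such an in silico experiment might look like: a user-specified initial condition for a small cell population is evolved forward under a made-up table of phenotype transition probabilities. The states, rates, and cell counts are illustrative assumptions, not measurements.

```python
# Toy "in silico experiment": evolve a cell population through coarse phenotypic
# states. Everything here (states, rates, counts) is an illustrative assumption.
import random

STATES = ["healthy", "transformed", "metastatic"]

# Hypothetical per-step transition probabilities between coarse phenotypes.
TRANSITIONS = {
    "healthy":     {"healthy": 0.97, "transformed": 0.03, "metastatic": 0.00},
    "transformed": {"healthy": 0.01, "transformed": 0.94, "metastatic": 0.05},
    "metastatic":  {"healthy": 0.00, "transformed": 0.00, "metastatic": 1.00},
}

def step(state):
    """Advance one cell by a single time step using the toy transition table."""
    r = random.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return state

def simulate(initial_condition, steps):
    """Run the experiment from a user-specified initial condition (state -> count)."""
    cells = [s for s, n in initial_condition.items() for _ in range(n)]
    for _ in range(steps):
        cells = [step(s) for s in cells]
    return {s: cells.count(s) for s in STATES}

if __name__ == "__main__":
    # "Experimental initial condition": 1,000 healthy cells plus 10 transformed cells.
    print(simulate({"healthy": 1000, "transformed": 10, "metastatic": 0}, steps=50))
```

A real cellular digital twin would of course replace this toy table with models fit to RNA-seq, imaging, and proteomic data, but the workflow, specify an initial condition and then watch the world evolve, is the same in spirit.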
Although, as with all digital twins, we need to experimentally test in the real world. Trust, but verify.
How America Built Its Science Foundation Before the War Changed Everything

Most people think America’s scientific dominance began with the Manhattan Project or the space race. That’s not wrong, but it misses the real story. By the time World War II arrived, we’d already spent decades quietly building the infrastructure that would make those massive wartime projects possible.
The foundation was laid much earlier, and in ways that might surprise you. What’s more surprising is how close that foundation came to crumbling—and what we nearly lost along the way.
The Land-Grant Revolution
The story really starts in 1862 with the Morrill Act—arguably the most important piece of science policy legislation most Americans have never heard of. While the Civil War was tearing the country apart, Congress was simultaneously creating a network of universities designed to teach “agriculture and the mechanic arts.”
This wasn’t just about farming. The land-grant universities were America’s first systematic attempt to connect higher education with practical problem-solving. Schools like Cornell, Penn State, and the University of California weren’t just teaching Latin and philosophy—they were training engineers, studying crop diseases, and developing new manufacturing techniques.
But here’s what’s remarkable: this almost didn’t happen. The 1857 version of Morrill’s bill faced heavy opposition from Southern legislators who viewed it as federal overreach and Western states who objected to the population-based allocation formula. It passed both houses by narrow margins, only to be vetoed by President Buchanan. The legislation succeeded in 1862 primarily because Southern opponents had left Congress to join the Confederacy.
Private Money Fills a Critical Gap
What’s fascinating—and telling—is how much of early American scientific investment came from private philanthropy rather than government funding. The industrial fortunes of the late 1800s flowed into research, but this created a system entirely dependent on individual wealth and personal interest.
The Carnegie Institution of Washington, established in 1902, essentially functioned as America’s first NSF decades before the actual NSF existed. Andrew Carnegie’s $10 million endowment was enormous—equal to Harvard’s entire endowment and vastly more than what all American universities spent on basic research combined. The Rockefeller Foundation transformed medical education and research on a similar scale.
But imagine if Carnegie had been less interested in science, or if the robber baron fortunes had flowed entirely into art collections and European estates instead. This mixed ecosystem worked, but it was inherently unstable. When economic conditions tightened, private funding could vanish. When wealthy patrons died, research priorities shifted with their successors’ interests.
Corporate Labs: Innovation with Built-In Vulnerabilities
By the 1920s, major corporations were establishing research laboratories. General Electric’s lab, founded in 1900 as the first industrial research facility in America, became the model. Bell Labs, created in 1925 through the consolidation of AT&T and Western Electric research, would later become legendary for discoveries that shaped the modern world.
These corporate labs solved an important problem, bridging the gap between scientific discovery and commercial application. But they also created troubling dependencies. Research priorities followed profit potential, not necessarily national needs. Breakthrough discoveries in fundamental physics might be abandoned if they didn’t promise immediate commercial returns.
More concerning, these labs were vulnerable to economic cycles. During the Great Depression, even well-established research programs faced significant budget cuts and staffing reductions.
Government Stays Reluctantly on the Sidelines
Through all of this, the federal government remained a hesitant, minor player. The National Institute of Health, created in 1930 with a modest $750,000 for building construction, was one of the few exceptions—and even then, the federal government rarely funded medical research outside its own laboratories before 1938.
Most university science departments survived on whatever they could patch together from donors, industry partnerships, and minimal federal grants. The system worked, but precariously. During the Depression, university budgets were slashed, enrollment dropped, and research programs had to be scaled back or eliminated. The National Academy of Sciences saw its operating and maintenance funds drop by more than 15 percent each year during the early 1930s.
The Foundation That Held—Barely
By 1940, America had assembled what looked like a robust scientific infrastructure, but it was actually a precarious arrangement held together by fortunate timing and individual initiative: strong universities teaching practical skills, generous private funding that could shift with economic conditions, corporate labs vulnerable to business cycles, and minimal federal involvement.
When the war suddenly demanded massive scientific mobilization, the infrastructure held together long enough to support the Manhattan Project, radar development, and other crucial innovations. But it was a closer thing than most people realize. The Depression had already demonstrated the system’s vulnerabilities—funding cuts, program reductions, and the constant uncertainty that came with depending on private largesse.
What We Nearly Lost
Looking back, what’s remarkable isn’t just how much America invested in science before 1940, but how easily much of it could have been lost to economic downturns, shifting private interests, or political opposition. That decentralized mix of public and private initiatives created innovation capacity, but it also created significant vulnerabilities.
The war didn’t just expand American science—it revealed how unstable our previous funding system had been and demonstrated what sustained, coordinated investment could accomplish. The scientific breakthroughs that defined the next half-century emerged not from the patchwork system of the 1930s, but from the sustained federal commitment that followed.
Today’s scientific leadership isn’t an accident of American ingenuity. It’s the direct result of lessons learned from a system that worked despite its fragility—and the decision to build something more reliable in its place. The question is whether we remember why that change was necessary, and what we might lose if we return to depending on unstable, decentralized funding for our most critical research needs.
Transformer models don’t reason: group think
I’ve been enjoying the pushback against the idea that generative AI models have human-like smarts. While I agree that they shouldn’t be flying a plane or even driving an EV, I do think cognitive neuroscience has something to learn from the success of this technology. Here is a link to Friston et al.’s fun paper on the subject.
The main thing is that we (humans) can do this stuff using only about 20 watts of electricity. Even inference on the latest AIs is vastly more costly.
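A rough back-of-envelope comparison of that gap is below; the GPU count and per-device wattage are purely illustrative assumptions, not figures for any particular model or deployment.

```python
# Back-of-envelope comparison of brain power draw vs. a hypothetical GPU-based
# inference setup. The GPU count and wattage are illustrative assumptions only.
BRAIN_WATTS = 20          # commonly cited estimate for the human brain

GPUS_PER_REPLICA = 8      # assumed: one inference replica spread across 8 accelerators
WATTS_PER_GPU = 700       # assumed: roughly the rated power of a datacenter GPU

inference_watts = GPUS_PER_REPLICA * WATTS_PER_GPU
print(f"Hypothetical inference replica: ~{inference_watts} W")
print(f"Ratio to a 20 W brain: ~{inference_watts / BRAIN_WATTS:.0f}x")
```

Even this sketch ignores cooling and networking overhead; the point is only the order of magnitude.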
Post lunch conversation with a colleague: trust in science

Yesterday, I had lunch with a colleague at a favorite BBQ spot in Arlington. Both of us work in science communication, so naturally our conversation drifted to the question that’s been nagging at many of us: why has public trust in scientific institutions declined in recent years? By the time we finished our (actually healthy) food, we’d both come to the same conclusion: the current way scientists communicate with the public might be contributing to the problem.
From vaccine hesitancy to questions about research reliability, the relationship between science and society has grown more complex. To understand this dynamic, we need to examine not only what people think about science but also how different cultures approach the validation of knowledge itself.
Harvard scholar Sheila Jasanoff offers valuable insights through her concept of “civic epistemologies”—the cultural practices societies use to test and apply knowledge in public decision-making. These practices vary significantly across nations and help explain why scientific controversies unfold differently in different places.
American Approaches to Knowledge Validation
Jasanoff’s research identifies distinctive features of how Americans evaluate scientific claims:
Public Challenge: Americans tend to trust knowledge that has withstood open debate and questioning. This reflects legal traditions where competing arguments help reveal the truth.
Community Voice: There’s a strong expectation that affected groups should participate in discussions about scientific evidence that impacts them, particularly in policy contexts.
Open Access: Citizens expect transparency in how conclusions are reached, including access to underlying data and reasoning processes.
Multiple Perspectives: Rather than relying on single authoritative sources, Americans prefer hearing from various independent institutions and experts.
How This Shapes Science Communication
These cultural expectations help explain some recent communication challenges. When public health recommendations changed during the COVID-19 pandemic, this appeared to violate expectations for thorough prior testing of ideas. Similarly, when social platforms restricted specific discussions, this conflicted with preferences for open debate over gatekeeping.
In scientific fields like neuroscience, these dynamics have actually driven positive reforms. When research reliability issues emerged, the American response emphasized transparency solutions: open data sharing, study preregistration, and public peer review platforms. Major funding agencies now require data management plans that promote accountability.
Interestingly, other countries have addressed similar scientific quality concerns in different ways. European approaches have relied more on institutional reforms and expert committees, while American solutions have emphasized broader participation and transparent processes.
Digital Platforms and Knowledge
Online platforms have both satisfied and complicated American expectations. They provide the transparency and diverse voices people want, but the sheer volume of information makes careful evaluation difficult. Platforms like PubPeer enable post-publication scientific review that aligns with cultural preferences for ongoing scrutiny; however, the same openness can also amplify misleading information.
Building Better Science Communication
Understanding these cultural patterns suggests more effective approaches:
Acknowledge Uncertainty: Present science as an evolving process rather than a collection of final answers. This matches realistic expectations about how knowledge develops.
Create Meaningful Participation: Include affected communities in research priority-setting and policy discussions, following successful models in patient advocacy and environmental research.
Increase Transparency: Share reasoning processes and data openly. Open science practices align well with cultural expectations for accountability.
Recognize Broader Concerns: Understand that skepticism often reflects deeper questions about who participates in knowledge creation and whose interests are served.
Moving Forward
Public skepticism toward science isn’t simply a matter of misunderstanding—it often reflects tensions between scientific institutions and cultural expectations about legitimate authority. Rather than dismissing these expectations, we might develop communication approaches that honor both scientific rigor and democratic values.
The goal isn’t eliminating all skepticism, which serves essential functions in healthy societies. Instead, it is to channel critical thinking in ways that strengthen our collective ability to address complex challenges that require scientific insight.
Blogpost for Elgar on my new book…
Is here. Enjoy.
Fund the person or the project?

Of course, that’s the shorthand for the policy debate that has been ongoing for years in the science funding world. Should we fund top-notch scientists (subject to some sort of regular post hoc review) and trust them to come up with the ideas? Alternatively, should we fund each project as an idea, while still taking into account the quality of the investigator? And how do we decide (fund or decline) anyway?
The answer to this question is an excellent example of the need for more evidence-based policy making. There are natural experiments out there in the US funding world: NIH’s R35 grants are largely person-based, and Howard Hughes Medical Institute funding (especially in the case of its extramural program) is another example. And of course, on the other side is the long track record from NIH R01s and NSF’s standard grants. However, I’ve never seen the results of those experiments, despite my former government position, which would have given me access to them. So here’s a ‘bleg,’ as my colleague and friend Tyler would put it: does anyone have the evidence? Please send it along, and I’ll blog about it.
Zero-based budgeting experiment: US STEM

At research universities, zero-based budgeting is pretty rare. It means starting from zero expenditures and justifying every budget line from scratch to build the annual budget. It is frowned upon for long-term R&D projects for the obvious reason that it’s pretty challenging to predict a discovery that could be exploited to produce a measurable outcome.
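For concreteness, here is a toy numerical sketch of how a zero-based budget differs from the usual incremental one; every line item, dollar amount, and justification below is invented purely for illustration.

```python
# Toy contrast between incremental and zero-based budgeting.
# All line items, dollar amounts, and "justified" flags are invented.
last_year = {"core facility": 2.0, "legacy program": 1.5}  # prior-year budget, $M

# Incremental budgeting: start from last year's lines and adjust at the margin.
incremental = {item: round(amount * 1.03, 2) for item, amount in last_year.items()}

# Zero-based budgeting: start from zero; every line must be re-justified this cycle.
requests = [
    ("core facility", 2.2, True),    # re-justified against current goals
    ("legacy program", 1.5, False),  # could not be re-justified this cycle
    ("new initiative", 1.0, True),   # justified as a new strategic priority
]
zero_based = {item: amount for item, amount, justified in requests if justified}

print("Incremental total ($M):", round(sum(incremental.values()), 2))
print("Zero-based total  ($M):", round(sum(zero_based.values()), 2))
```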
Nevertheless, it’s worth considering using the process to optimize the entire US STEM/Biomedical enterprise from scratch.
Why Research Resists Zero-Based Budgeting
The resistance to zero-based budgeting in research environments stems from legitimate concerns. Academic institutions seldom adhere to a zero-based budget model because, as I noted above, scientific discovery is inherently unpredictable. Zero-based budgets also require a significant amount of time and labor from units and university administrators to prepare, and the model can seriously encumber long-term planning.
Research requires substantial upfront investments in equipment, facilities, and human capital that only pay dividends over extended periods. The peer review system, while imperfect, has evolved as a way to allocate resources based on scientific merit rather than easily quantifiable metrics.
The Case for a National Reset
Despite these concerns, there’s a compelling argument for applying zero-based budgeting principles to the broader American STEM enterprise. Not at the individual project level, but at the systemic level—questioning fundamental assumptions about how we organize, fund, and conduct research.
Addressing Systemic Inefficiencies
Our current research ecosystem has evolved organically over decades, creating layers of bureaucracy, redundant administrative structures, and misaligned incentives. Universities compete for the same federal funding while maintaining parallel administrative infrastructures. A zero-based approach would force examination of whether these patterns serve our ultimate goals of scientific progress and national competitiveness.
Responding to Global Competition
The US still retains a healthy lead, spending $806 billion on R&D, both public and private, in 2021, but China is rapidly closing the gap. The Chinese government recently announced a $52 billion central-government science and technology budget for 2024, a 10% surge over the previous year, while the U.S. cut total federal investment in research and development for fiscal 2024 by 2.7%.
China has significantly increased its R&D investment, contributing over 24 percent of total global funding according to data from the Congressional Research Service. While the U.S. total remains strong, CRS data show that the American share of total global expenditure dropped to just under 31 percent in 2020, down from nearly 40 percent in 2000.
Realigning with National Priorities
AI, pandemic preparedness, cybersecurity, and advanced manufacturing require coordinated, interdisciplinary approaches that don’t always fit neatly into existing departmental structures or funding categories. Starting from zero would allow us to design funding mechanisms that better align with strategic priorities while preserving fundamental research.
A Practical Framework
Implementing zero-based budgeting for the STEM enterprise could be approached systematically:
Phase 1: Comprehensive Mapping. Begin by mapping the current research ecosystem—funding flows, personnel, infrastructure, outputs, and outcomes. This alone would be valuable, as we currently lack a complete picture of resource allocation.
Phase 2: Goal Setting. Involve stakeholders in defining desired outcomes. What should American STEM research accomplish in the next 10-20 years? How do we balance basic research with applied research?
Phase 3: Pilot Implementation. Rather than overhauling everything at once, implement zero-based approaches in specific domains or regions to identify what works while minimizing disruption.
Potential Benefits and Risks
A thoughtful application could yield improved efficiency by eliminating redundant processes, better alignment with national priorities, enhanced collaboration across institutional silos, and increased agility to respond to emerging threats.
However, any major reform involves significant risks. There’s danger of disrupting productive research programs, alienating talented researchers, or creating unintended bureaucratic complications. The political and logistical challenges would be immense.
Moreover, China has now surpassed the US in “STEM talent production, research publications, patents, and knowledge- and technology-intensive manufacturing,” suggesting that while spending matters, other factors are equally important.
Preserving What Works
Zero-based budgeting shouldn’t mean discarding what has made American research successful. The peer review system has generally identified quality research. The tradition of investigator-initiated research has fostered creativity and serendipitous discoveries. The partnership between universities, government, and industry has created a dynamic innovation ecosystem.
The goal isn’t elimination but examination of whether these elements are being implemented most effectively.
Conclusion
The idea of applying zero-based budgeting to American STEM research deserves serious consideration. By questioning assumptions, eliminating inefficiencies, and realigning priorities, we can create a research enterprise better positioned to tackle 21st-century challenges.
The process itself—careful examination of how we conduct and fund research—could be as valuable as specific reforms. In an era when China, based on current enrollment patterns, is projected to produce more than 77,000 STEM PhD graduates per year by 2025, nearly double the approximately 40,000 expected in the United States, the ability to thoughtfully reimagine our institutions may be our greatest asset.
The question isn’t whether we can afford to undertake such a comprehensive review. The question is whether we can afford not to.
Why I’m rereading Moby Dick

One thing I’ve noticed about my years working in the policy arena here in Washington, D.C., is that I mainly read nonfiction. I think that’s unfortunate, because a great novel allows one to peer into an alternate universe in a way where only the structure of the prose constrains the world of the book. For each reader of a novel, that created universe is unique. Probably the same is true for each reread of a great story, even by the same reader.
Moby-Dick takes place in the Whaling World of the 19th century, which was centered in New England, specifically Nantucket, a small island off the coast of Massachusetts. The novel is deeply symbolic, as we might recall from our school days past, but the main characters are a malevolent sperm whale and a crazed, one-legged whaler captain obsessed with revenge for his lost limb. The action takes place on the vast oceans and is witnessed by a narrator, Ishmael, who might be every American, at least of the De Tocqueville era in which the book was written.
I once lived on Nantucket and, for many years after, spent time in Woods Hole, 25 ocean miles away on the mainland, where I cut my teeth as a working scientist. So the world of Moby Dick is one I can relate to.
But more interesting, to me, is that the stuff of the fictional universe, with its catastrophic battle between man and leviathan, leavened by rich human-to-human relationships, is so much richer the second time around, after four decades.