Post-lunch conversation with a colleague: trust in science

Yesterday, I had lunch with a colleague at a favorite BBQ spot in Arlington. Both of us work in science communication, so naturally our conversation drifted to the question that's been nagging at many of us: why has public trust in scientific institutions declined in recent years? By the time we finished our (actually quite healthy) food, we'd both come to the same conclusion—the current way scientists communicate with the public might be contributing to the problem.

From vaccine hesitancy to questions about research reliability, the relationship between science and society has grown more complex. To understand this dynamic, we need to examine not only what people think about science but also how different cultures approach the validation of knowledge itself.

Harvard scholar Sheila Jasanoff offers valuable insights through her concept of “civic epistemologies”—the cultural practices societies use to test and apply knowledge in public decision-making. These practices vary significantly across nations and help explain why scientific controversies unfold differently in different places.

American Approaches to Knowledge Validation

Jasanoff’s research identifies distinctive features of how Americans evaluate scientific claims:

Public Challenge: Americans tend to trust knowledge that has withstood open debate and questioning. This reflects legal traditions where competing arguments help reveal the truth.

Community Voice: There’s a strong expectation that affected groups should participate in discussions about scientific evidence that impacts them, particularly in policy contexts.

Open Access: Citizens expect transparency in how conclusions are reached, including access to underlying data and reasoning processes.

Multiple Perspectives: Rather than relying on single authoritative sources, Americans prefer hearing from various independent institutions and experts.

How This Shapes Science Communication

These cultural expectations help explain some recent communication challenges. When public health recommendations changed during the COVID-19 pandemic, the reversals appeared to violate expectations that ideas be thoroughly tested before adoption. Similarly, when social platforms restricted specific discussions, the moderation conflicted with preferences for open debate over gatekeeping.

In scientific fields like neuroscience, these dynamics have actually driven positive reforms. When research reliability issues emerged, the American response emphasized transparency solutions: open data sharing, study preregistration, and public peer review platforms. Major funding agencies now require data management plans that promote accountability.

Interestingly, other countries have addressed similar scientific quality concerns in different ways. European approaches have relied more on institutional reforms and expert committees, while American solutions have emphasized broader participation and transparent processes.

Digital Platforms and Knowledge

Online platforms have both satisfied and complicated American expectations. They provide the transparency and diverse voices people want, but the sheer volume of information makes careful evaluation difficult. Platforms like PubPeer enable post-publication scientific review that aligns with cultural preferences for ongoing scrutiny; however, the same openness can also amplify misleading information.

Building Better Science Communication

Understanding these cultural patterns suggests more effective approaches:

Acknowledge Uncertainty: Present science as an evolving process rather than a collection of final answers. This matches realistic expectations about how knowledge develops.

Create Meaningful Participation: Include affected communities in research priority-setting and policy discussions, following successful models in patient advocacy and environmental research.

Increase Transparency: Share reasoning processes and data openly. Open science practices align well with cultural expectations for accountability.

Recognize Broader Concerns: Understand that skepticism often reflects deeper questions about who participates in knowledge creation and whose interests are served.

Moving Forward

Public skepticism toward science isn’t simply a matter of misunderstanding—it often reflects tensions between scientific institutions and cultural expectations about legitimate authority. Rather than dismissing these expectations, we might develop communication approaches that honor both scientific rigor and democratic values.

The goal isn't eliminating all skepticism, which serves essential functions in healthy societies. Instead, it is channeling critical thinking in ways that strengthen our collective ability to address complex challenges that require scientific insight.

Fund the person or the project?

Of course, that's shorthand for the policy debate that has been running for years in the science funding world. Should we fund top-notch scientists (subject to some sort of regular post hoc review) and trust them to come up with the ideas? Or should we fund each project as an idea, while still taking the quality of the investigator into account? And how do we decide (fund or decline) anyway?

The answer to this question is an excellent example of the need for more evidence-based policymaking. There are natural experiments out there in the US funding world: NIH's R35 grants are largely person-based. Howard Hughes Medical Institute funding (especially its extramural program) is another example. And of course, on the other side is the long track record of NIH R01s and NSF's standard grants. However, I've never seen the results of those experiments — despite my former government position, which would have given me access to them. So here's a 'bleg,' as my colleague and friend Tyler would use the word: does anyone have the evidence? Please send it along, and I'll blog about it.

Zero-based budgeting experiment: US STEM


At research universities, zero-based budgeting is pretty rare. It means starting from zero expenditures and justifying each budget line to build the annual budget. It is frowned upon for long-term R&D projects for the obvious reason that it's hard to predict a discovery that could be exploited to produce a measurable outcome.
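To make the contrast concrete, here is a toy sketch in Python, with invented numbers, of how incremental budgeting and zero-based budgeting treat the same unit:

```python
# Toy contrast between incremental and zero-based budgeting.
# All numbers are invented for illustration.

last_year = {"salaries": 900_000, "equipment": 250_000, "travel": 50_000}

# Incremental budgeting: start from last year's lines and adjust upward.
incremental = {line: amount * 1.03 for line, amount in last_year.items()}

# Zero-based budgeting: every line starts at zero and enters the budget
# only with a fresh justification for the coming year.
justified = {
    "salaries": (880_000, "staffing plan for currently funded projects"),
    "equipment": (150_000, "replace one aging sequencer"),
    # no justification submitted for "travel", so it stays at zero
}
zero_based = {line: amount for line, (amount, _reason) in justified.items()}

print(f"Incremental total: ${sum(incremental.values()):,.0f}")  # $1,236,000
print(f"Zero-based total:  ${sum(zero_based.values()):,.0f}")   # $1,030,000
```

The zero-based version comes out leaner here, but note the cost: every line demands a fresh written justification every cycle, which is precisely the administrative burden that makes universities wary.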

Nevertheless, it’s worth considering using the process to optimize the entire US STEM/Biomedical enterprise from scratch.

Why Research Resists Zero-Based Budgeting

The resistance to zero-based budgeting in research environments stems from legitimate concerns. Academic institutions seldom adopt a zero-based budget model for three reasons: as I noted above, scientific discovery is inherently unpredictable; zero-based budgets demand significant time and labor from units and university administrators to prepare; and the model can seriously encumber long-term planning.

Research requires substantial upfront investments in equipment, facilities, and human capital that only pay dividends over extended periods. The peer review system, while imperfect, has evolved as a way to allocate resources based on scientific merit rather than easily quantifiable metrics.

The Case for a National Reset

Despite these concerns, there’s a compelling argument for applying zero-based budgeting principles to the broader American STEM enterprise. Not at the individual project level, but at the systemic level—questioning fundamental assumptions about how we organize, fund, and conduct research.

Addressing Systemic Inefficiencies

Our current research ecosystem has evolved organically over decades, creating layers of bureaucracy, redundant administrative structures, and misaligned incentives. Universities compete for the same federal funding while maintaining parallel administrative infrastructures. A zero-based approach would force examination of whether these patterns serve our ultimate goals of scientific progress and national competitiveness.

Responding to Global Competition

The US still retains a healthy lead, spending $806 billion on R&D, both public and private, in 2021, but China is rapidly closing the gap. The Chinese government recently announced a massive $52 billion investment in research and development for 2024 — a 10% surge over the previous year — while the U.S. cut total investment in research and development for fiscal 2024 by 2.7%.

China has significantly increased its R&D investment, contributing over 24 percent of total global funding, according to data from the Congressional Research Service. And while the U.S. total remains strong, CRS data show that the American share of total global expenditure dropped to just under 31 percent in 2020, down from nearly 40 percent in 2000.
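As a quick sanity check on those figures, combining the 2021 US spending number with the CRS shares implies a global total and a Chinese total. The years and definitions don't line up exactly, so this is strictly back-of-the-envelope:

```python
# Back-of-the-envelope check on the R&D figures cited above.
us_rd_2021 = 806e9   # US public + private R&D spending, 2021 (USD)
us_share = 0.31      # "just under 31 percent" of global R&D (CRS, 2020)
china_share = 0.24   # "over 24 percent" of global R&D (CRS)

implied_global = us_rd_2021 / us_share
implied_china = implied_global * china_share

print(f"Implied global R&D: ${implied_global / 1e12:.1f} trillion")  # ~$2.6T
print(f"Implied China R&D:  ${implied_china / 1e9:.0f} billion")     # ~$624B
```

Even allowing for the mismatched years, a gap of roughly $800 billion versus roughly $620 billion is far narrower than the 2000-era shares would suggest.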

Realigning with National Priorities

AI, pandemic preparedness, cybersecurity, and advanced manufacturing require coordinated, interdisciplinary approaches that don’t always fit neatly into existing departmental structures or funding categories. Starting from zero would allow us to design funding mechanisms that better align with strategic priorities while preserving fundamental research.

A Practical Framework

Implementing zero-based budgeting for the STEM enterprise could be approached systematically:

Phase 1: Comprehensive Mapping. Begin by mapping the current research ecosystem—funding flows, personnel, infrastructure, outputs, and outcomes. This alone would be valuable, as we currently lack a complete picture of resource allocation (a toy sketch of what such a map might record follows Phase 3).

Phase 2: Goal Setting. Involve stakeholders in defining desired outcomes. What should American STEM research accomplish in the next 10-20 years? How do we balance basic research with applied research?

Phase 3: Pilot Implementation. Rather than overhauling everything at once, implement zero-based approaches in specific domains or regions to identify what works while minimizing disruption.
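On that Phase 1 point, here is a hypothetical sketch of the kind of structured record an ecosystem map might collect. The schema is mine, invented for illustration; it is not an existing federal data standard.

```python
# Hypothetical schema for one entry in a Phase 1 "ecosystem map".
# Illustrative only; not an existing federal data standard.

from dataclasses import dataclass, field

@dataclass
class FundingFlow:
    agency: str           # e.g., "NSF", "NIH"
    mechanism: str        # e.g., "R01", "R35", "standard grant"
    annual_amount: float  # dollars per year

@dataclass
class ResearchUnit:
    institution: str
    personnel_fte: float  # research staff, in full-time equivalents
    facilities: list[str] = field(default_factory=list)
    inflows: list[FundingFlow] = field(default_factory=list)
    outputs: dict[str, int] = field(default_factory=dict)  # e.g., {"papers": 120}

# One mapped unit, with invented numbers:
unit = ResearchUnit(
    institution="Example State University",
    personnel_fte=42.5,
    facilities=["genomics core", "HPC cluster"],
    inflows=[FundingFlow("NIH", "R01", 3_200_000)],
    outputs={"papers": 120, "patents": 3},
)
print(unit.institution, sum(f.annual_amount for f in unit.inflows))
```

Even a registry of records like this, aggregated across agencies and institutions, would answer questions we currently cannot: who funds what, where the redundancies are, and what each dollar produces.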

Potential Benefits and Risks

A thoughtful application could yield improved efficiency by eliminating redundant processes, better alignment with national priorities, enhanced collaboration across institutional silos, and increased agility to respond to emerging threats.

However, any major reform involves significant risks. There’s danger of disrupting productive research programs, alienating talented researchers, or creating unintended bureaucratic complications. The political and logistical challenges would be immense.

Moreover, China has now surpassed the US in "STEM talent production, research publications, patents, and knowledge- and technology-intensive manufacturing," suggesting that while spending matters, other factors are equally important.

Preserving What Works

Zero-based budgeting shouldn’t mean discarding what has made American research successful. The peer review system has generally identified quality research. The tradition of investigator-initiated research has fostered creativity and serendipitous discoveries. The partnership between universities, government, and industry has created a dynamic innovation ecosystem.

The goal isn't elimination but an examination of whether these elements are being implemented as effectively as possible.

Conclusion

The idea of applying zero-based budgeting to American STEM research deserves serious consideration. By questioning assumptions, eliminating inefficiencies, and realigning priorities, we can create a research enterprise better positioned to tackle 21st-century challenges.

The process itself—careful examination of how we conduct and fund research—could be as valuable as specific reforms. In an era when China, based on current enrollment patterns, is projected to produce more than 77,000 STEM PhD graduates per year by 2025 (nearly double the approximately 40,000 in the United States), the ability to thoughtfully reimagine our institutions may be our greatest asset.

The question isn’t whether we can afford to undertake such a comprehensive review. The question is whether we can afford not to.

Why I'm rereading Moby-Dick


One thing I've noticed over my years working in the policy arena here in Washington, D.C., is that I read mainly nonfiction. I think that's unfortunate, because a great novel allows one to peer into an alternate universe constrained only by the structure of the prose. For each reader of a novel, that created universe is unique. Probably the same is true for each reread of a great story, even by the same reader.

Moby-Dick takes place in the whaling world of the 19th century, which was centered in New England, specifically Nantucket, a small island off the coast of Massachusetts. The novel is deeply symbolic, as we might recall from our school days, but the main characters are a malevolent sperm whale and a crazed, one-legged whaling captain obsessed with revenge for his lost limb. The action takes place on the vast oceans and is witnessed by a narrator, Ishmael, who might be every American, at least of the Tocqueville era in which the book was written.

I once lived on Nantucket and, for many years after, spent time in Woods Hole, 25 ocean miles away on the mainland, where I cut my teeth as a working scientist. So the world of Moby-Dick is one I can relate to.

But more interesting to me is that the stuff of the fictional universe, with its catastrophic battle between man and leviathan, leavened by rich human-to-human relationships, is so much richer the second time around, after four decades.

Happy Labor Day…


A day to honor those who work for a living and to mark the end of meteorological summer.

Even though our semester begins in late August, for me, this day marks the symbolic start of the academic year, with all its potential and promise. This term, my undergraduate class surveys Vannevar Bush’s vision in Science: The Endless Frontier — how federal investment in basic research could jump-start both the US economy and public health. My graduate class, as always, focuses on managing significant US government crises. Both classes are at their respective enrollment targets, so I’m really pleased.

The research agenda continues with NSF’s Sage Grande AI Testbed and the just-blogged new book project on commercial aviation, where I’m currently revising a book proposal and completing a sample chapter on flight envelope protection philosophies.

And I'm rereading Moby-Dick! It's been many decades since I first dashed through the pages under deadline from my English professor at Amherst College. This time, I'm hoping that going slowly will give me a better appreciation for Melville.

The new project: commercial aviation and culture

I’m embarking on what may be the most intellectually stimulating research project I’ve ever undertaken: a new book that explores the fascinating divergence between European, American, and Brazilian approaches to aviation technology. The initial insight came from studying supersonic transport development—Europe succeeded with Concorde while America’s programs were cancelled despite technical success, and Brazil’s Embraer took yet another path entirely, building a global powerhouse by focusing on regional jets. What started as curiosity about why aviation developed so differently across continents has evolved into a comprehensive examination of how culture, politics, and history shape our technological choices in the skies.

One of the most compelling discoveries has been tracing "systemic safety" approaches in aviation back to 19th-century Continental Europe, where systematic, preventative frameworks emerged that still influence modern European aviation through EASA standards and Airbus consortium models. American approaches emphasize market solutions and competitive development. Meanwhile, Brazil's Embraer represents a fascinating hybrid—originating as a government-sponsored entity in the 1970s but evolving into a nimble, market-focused competitor that combines elements of both European industrial policy and American entrepreneurial agility. Every case study reveals cultural DNA embedded in technological choices, from NASA's technically successful but commercially abandoned High-Speed Research Program to Europe's patient consortium-based development philosophy.

The detective work energizes me most—tracing aviation ideas across centuries, connecting aircraft design choices to deeper cultural patterns, and discovering how 19th-century Prussian technical standards influenced modern airworthiness certification. I’m spending months in aviation archives, interviewing aerospace engineers, visiting manufacturers across Europe and the Americas, and exploring the organizational cultures of institutions ranging from NASA and Boeing to startups like JetZero and Boom Supersonic. Understanding these different approaches isn’t merely academic curiosity; it’s essential for navigating challenges such as sustainable aviation, electric aircraft, and the revival of supersonic flight through entirely new players.

This isn’t about declaring winners—each approach has produced remarkable innovations from the Wright Brothers to Concorde, from the 747 to Embraer’s revolutionary regional jets, and now to startups like JetZero’s radical blended wing designs and Boom’s quest to bring back supersonic passenger flight. Instead, it’s about understanding how culture shapes aviation technology in ways often invisible until we step back and see the bigger picture. The story of aviation’s divergent development turns out to be about democracy, capitalism, geography, and human values—all expressed through our concrete choices about how we design and deploy our flying machines, from legacy manufacturers to Silicon Valley upstarts. It’s an adventure in ideas, and I can’t wait to share what I discover.

Bold Ventures in Science: NSF’s NEON and NIH’s BRAIN Initiative

My favorite projects…

As loyal readers know, these are my two favorite science initiatives, and they stand out as beacons of progress: the National Science Foundation's National Ecological Observatory Network (NEON) and the National Institutes of Health's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. These groundbreaking endeavors showcase the commitment of U.S. science agencies to tackling complex, large-scale challenges that could revolutionize our understanding of the world around us and within us.

NSF’s NEON: A Continental-Scale View of Ecology

Imagine having a window into the ecological processes of an entire continent. That’s precisely what NEON aims to provide. Initiated in 2011, this audacious project is creating a network of ecological observatories spanning the United States, including Alaska, Hawaii, and Puerto Rico.

Yes, NEON has faced its share of challenges. The project’s timeline and budget have been adjusted since its inception, growing from an initial estimate of $434 million to around $469 million, with completion delayed from 2016 to 2019. But let’s be honest: when did you last try to build a comprehensive ecological monitoring system covering an entire continent? These adjustments reflected the project’s complexity and the learning curve in such a pioneering endeavor.

The payoff? NEON is now collecting standardized ecological data across 81 field sites from Hawaii to Puerto Rico and everywhere in between. This massive time series in some 200 dimensions will allow scientists to analyze and forecast ecological changes over decades. From tracking the impacts of climate change to understanding biodiversity shifts, NEON provides invaluable insights that could shape environmental policy and conservation efforts for future generations.
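For readers who want to poke at the data themselves, NEON exposes it through a public API at data.neonscience.org. Here is a minimal sketch of listing the field sites in Python; the endpoint and field names reflect my understanding of the current API, so check NEON's documentation before building on this:

```python
# Minimal sketch: list NEON field sites via the public data API.
# Endpoint and field names are my understanding of the current API;
# consult https://data.neonscience.org/data-api for authoritative docs.

import requests

resp = requests.get("https://data.neonscience.org/api/v0/sites", timeout=30)
resp.raise_for_status()
sites = resp.json()["data"]

print(f"{len(sites)} NEON field sites")  # the network spans 81 sites
for site in sites[:5]:
    print(site.get("siteCode"), "-", site.get("siteDescription"))
```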

NIH’s BRAIN Initiative: Decoding Our Most Complex Organ

Meanwhile, the NIH’s BRAIN Initiative is taking on an equally monumental task: mapping the human brain. Launched in 2013, this project is aptly named, as it requires a lot of brains to understand… well, brains.

With annual funding that has grown from an initial $100 million to over $500 million, the BRAIN Initiative is a testament to the NIH’s commitment to unraveling the mysteries of neuroscience. Mapping all 86 billion neurons in the human brain by 2026 might seem a tad optimistic. But I’m increasingly impressed with our progress, and I am hopeful we’ll be able to get some meaningful statistics about variability across individuals.

The initiative has already led to the development of new technologies for studying brain activity, potential treatments for conditions like Parkinson’s disease, and insights into how our brains process information. It’s like a real-life adventure into the final frontier, except instead of outer space, we’re exploring the inner space of our skulls.

The Challenges: More Feature Than Bug

Both NEON and the BRAIN Initiative have faced obstacles, from budget adjustments to timeline extensions. But in the world of cutting-edge science, these challenges are often where the real learning happens. They’ve pushed scientists to innovate, collaborate, and think outside the box (or skull, in the case of BRAIN).

These projects have also created unique opportunities for researchers to develop new skills. Grant writing for these initiatives isn’t just an administrative hurdle; it’s a chance to think big and connect individual research to grand, overarching goals. It’s turning scientists into visionaries, and isn’t that worth a few late nights and extra cups of coffee?

Conclusion: Big Science, Bigger Possibilities

NEON and the BRAIN Initiative represent more than just large-scale scientific projects. They’re bold statements about the value of basic research and the importance of tackling complex, long-term challenges. They remind us that some questions are too big for any single lab or institution to answer alone.

As these projects evolve and produce data, they’re not just advancing our understanding of ecology and neuroscience. They’re also creating models for conducting science at a grand scale, paving the way for future ambitious endeavors.

So here’s to the scientists, administrators, and visionaries behind NEON and the BRAIN Initiative. They’re showing us that with enough creativity, persistence, and, yes, funding, we can tackle some of the biggest questions in science. And who knows? The next breakthrough in saving our planet or understanding consciousness could be hidden in the data they’re collecting right now.