How Will You Know You’ve Succeeded? A BRAIN story

August 2008: a summer day in Mountain View, California. The previous year, the Krasnow Institute for Advanced Study, which I was leading at George Mason University, had developed a proposal to invest an enormous sum in figuring out how mind emerges from brains, and now I had to make the case that it deserved to be a centerpiece of a new administration’s science agenda. Three billion dollars is not a small ask, especially in the context of the accelerating 2008 financial crisis.

Before this moment, the project had evolved organically: a kickoff meeting at the Krasnow Institute near D.C., a joint manifesto published in Science magazine, and then follow-on events in Des Moines, Berlin, and Singapore to emphasize the broader aspects of such a large neuroscience collaboration. There had even been a radio interview with Oprah.

When I flew out to Google’s Mountain View headquarters in August 2008 for the SciFoo conference, I didn’t expect to be defending the future of neuroscience over lunch. But the individual running the science transition for the Obama presidential campaign had summoned me for what he described as a “simple” conversation: defend our idea of investing $3 billion over the next decade in neuroscience, with the audacious goal of explaining how “mind” emerges from “brains.” It was not the kind of meeting I was ready for.

I was nervous. As an institute director, I’d pitched for million-dollar checks. This was a whole new scale of fundraising for me. And though California was my native state, I’d never gone beyond being a student body president out there. Google’s headquarters in the summer of 2008 was an altar to Silicon Valley power.

SciFoo itself was still in its infancy then – the whole “unconference” concept felt radical and exciting, a fitting backdrop for pitching transformational science. But the Obama campaign wasn’t there for the unconventional meeting format. Google was a convenient meeting spot. And they wanted conventional answers.

I thought I made a compelling case: this investment could improve the lives of millions of patients with brain diseases. Neuroscience was on the verge of delivering cures. (I was wrong about that, but I believed it at the time.) The tools were ready. The knowledge was accumulating. We just needed the resources to put it all together.

Then I was asked the question that killed my pitch: “How will we know we have succeeded? What’s the equivalent of Kennedy’s moon landing – a clear milestone that tells us we’ve achieved what we set out to do?” You could see those astronauts come down the ladder of the lunar module. You could see that American flag on the moon. No such prospects with a large neuroscience initiative.

I had no answer.

I fumbled through some vague statements about understanding neural circuits and developing new therapies, but even as the words left my mouth, I knew they were inadequate. The moon landing worked as a political and scientific goal because it was binary: either we put a man on the moon or we didn’t. Either the flag was planted or it wasn’t.

But “explaining how mind emerges from brains”? When would we know we’d done that? What would success even look like?

The lunch ended politely. I flew back to DC convinced it had been an utter failure.

But that wasn’t the end of it. Five years later, at the beginning of Obama’s second presidential term, we began to hear news of a large initiative driven by the White House called the Brain Activity Map, or BAM for short. The idea was to comprehensively map the functional activity of brains at spatial and temporal resolutions beyond those available at the time. It resembled my original pitch both in scale (dollars) and in the conviction that it was important to understand how mind emerges from brain function. The goal of the new BAM project was to be able to map between that activity and the brain’s emergent “mind”-like behavior, in both healthy and pathological cases. But the BAM trial balloon, even coming from the White House, was not an immediate slam dunk.

There was immediate push-back from large segments of the neuroscience community that felt excluded from BAM, but after a quick top-down recalibration from the White House Office of Science and Technology Policy and a whole-of-government approach that included multiple science agencies, BRAIN (Brain Research through Advancing Innovative Neurotechnologies) was born in April of 2013.

A year later, in April of 2014, I was approached to head Biological Sciences at the US National Science Foundation. When I took the job that October, I was leading a directorate with a budget of $750 million annually that supported research across the full spectrum of the life sciences – from molecular biology to ecosystems. I would also serve as NSF’s co-lead for the Obama Administration’s BRAIN Initiative—an acknowledgement of the failed pitch in Mountain View, I guess.

October 2014: newly sworn in and meeting with my senior management team, a little more than a year into BRAIN. I had gotten what I’d asked for in Mountain View. Sort of. We had the funding, we had the talent, we had review panels evaluating hundreds of proposals. But I kept thinking about the question—the one I couldn’t answer then and still struggled with now. We had built this entire apparatus for funding transformational research, yet we were asking reviewers to apply the same criteria that would have rejected Einstein’s miracle year. How do you evaluate research when you can’t articulate clear success metrics? How do you fund work that challenges fundamental assumptions when your review criteria reward preliminary data and well-defined hypotheses?

Several months later, testifying before Congress about the BRAIN project, I remember fumbling again at the direct question of when we would deliver cures for dreaded brain diseases like ALS and schizophrenia. I punted: that was an NIH problem (even though the original pitch had been about delivering revolutionary treatments). At NSF, we were about understanding the healthy brain. And in fact, how could you ever understand brain disease without a deep comprehension of the non-pathological condition?

It was a reasonable bureaucratic answer. NIH does disease; NSF does basic science. Clean jurisdictional boundaries. But sitting there in that hearing room, I realized I was falling into the same trap that had seemingly doomed our pitch in 2008: asked for a clear criterion of success and a date for delivering it, I was waffling. Only this time, I was the agent for the funder: the American taxpayer.

The truth was uncomfortable. We had launched an initiative explicitly designed to support transformational research – research that would “show us how individual brain cells and complex neural circuits interact” in ways we couldn’t yet imagine. But when it came time to evaluate proposals, we fell back on the same criteria that favored incrementalism: preliminary data, clear hypotheses, established track records, well-defined deliverables. We were asking Einstein for preliminary data on special relativity.

And we weren’t unique. This was the system. This was how peer review worked across federal science funding. We had built an elaborate apparatus designed to be fair, objective, and accountable to Congress and taxpayers. What we had built was a machine that systematically filtered out the kind of work that might transform neuroscience.

All of this was years before the “neuroscience winter,” when massive scientific misconduct was unearthed in neurodegenerative disease research, including research on Alzheimer’s. But the modus operandi of BRAIN foreshadowed it.

Starting in 2022, a series of investigations revealed that some of the most influential research on Alzheimer’s disease—work that had shaped the field for nearly two decades and guided billions in research funding—was built on fabricated data. Images had been manipulated. Results had been doctored. And this work had sailed through peer review at top journals, had been cited thousands of times, and had successfully competed for grant funding year after year. The amyloid hypothesis, which this fraudulent research had bolstered, had become scientific orthodoxy not because the evidence was overwhelming, but because it fit neatly into the kind of clear, well-defined research program that review panels knew how to evaluate.

Here was the other side of the Einstein problem that I’ve mentioned in previous posts. The same system that would have rejected Einstein’s 1905 papers for lack of preliminary data and institutional support had enthusiastically funded research that looked rigorous but was fabricated. Because the fraudulent work had all the elements that peer review rewards: clear hypotheses, preliminary data, incremental progress building on established findings, well-defined success metrics. It looked like good science. It checked all the boxes.

Meanwhile, genuinely transformational work—the kind that challenges fundamental assumptions, that crosses disciplinary boundaries, that can’t provide preliminary data because the questions are too new—struggles to get funded. Not because reviewers are incompetent or malicious, but because we’ve built a system that is literally optimized to make these mistakes. We’ve created an apparatus that rewards the appearance of rigor over actual discovery, that favors consensus over challenge, that funds incrementalism and filters out transformation.

So, what’s the real function of peer review? It’s supposed to be about identifying transformative research, but I don’t think that’s its real purpose. To my mind, the real purpose of peer review panels at NSF, and of study sections at NIH, is to make inherently flawed funding decisions defensible—both to Congress and to the American taxpayer. The criteria (intellectual merit and broader impacts at NSF) make awarding grant dollars auditable and fair-seeming; they are not there because they identify breakthrough work.

But honestly, there’s a real dilemma here: if you gave out NSF’s annual budget based on a program officer’s feeling that “this seems promising”, you’d face legitimate questions about cronyism, waste and arbitrary decision-making. The current system’s flaws aren’t bad policy accidents; they are the price we pay for other values we also care about.

So, did the BRAIN Initiative deliver on that pitch I made in Mountain View in 2008? Did we figure out how ‘mind’ emerges from ‘brains’? In retrospect, I remain impressed by NSF’s NeuroNex program: we got remarkable technology – better ways to record from more neurons, new imaging techniques, sophisticated tools. We trained a generation of neuroscientists. But that foundational question – the one that made the political case, the one that justified the investment – we’re not meaningfully closer to answering it. We made incremental progress on questions we already knew how to ask. Which is exactly what peer review is designed to deliver. Oh, and one other thing was produced: NIH’s parent agency, the Department of Health and Human Services, got a trademark issued on the name of the initiative itself, BRAIN.

I spent four years as NSF’s co-lead on BRAIN trying to make transformational neuroscience happen within this system. I believed in it. I still believe in federal science funding. But I’ve stopped pretending the tension doesn’t exist. The very structure that makes BRAIN funding defensible to Congress made the transformational science we promised nearly impossible to deliver.

That failed pitch at Google’s headquarters in 2008? Turns out the question was spot on; we just never answered it.

What Grant Reviewers Actually Look For (and What They Ignore)

A close colleague of mine at a major US research university begins the process of preparing a grant proposal by creating something he calls a “storyboard.” When I was growing up in LA, the concept of a storyboard was very familiar to me. Many of my high school friends at the time aspired to careers in the locally dominant entertainment industry. The storyboard, developed at the Walt Disney studio, uses pictures to visualize a movie’s plot flow before production—often even before a screenplay is complete. In the LA movie business, you could look at a storyboard and pretty much grasp right away what a movie is about.

Back to my colleague who uses storyboards to create grant proposals—his key idea is that you’re done making the storyboard when someone outside the group can come in, look at it, and come away with a good understanding of what the grant is all about. If the storyboard is coherent, then it’s easy to make the proposal coherent as well. Further, the storyboard often gets reused, in modified form, as the grant’s central graphic. Yes, a picture is worth several thousand words.

My colleague is onto something profound about how grant review works across all funders, including those in the private sector. But for this issue of Science Policy Insider, we’re going to consider the agency where I headed up Biological Sciences: the NSF. What about NIH, you may ask? A lot of the principles here apply to both agencies. But we’re going to focus, laser-like, on the National Science Foundation, even as it undergoes drastic changes.

The Brutal Reality of NSF Panel Review

After sitting through too many grant panels at NSF, I can tell you this: most proposals get 15-20 minutes of discussion time in a panel that’s reviewing 30-50 proposals over three days. Your carefully crafted 15-page research plan? The primary reviewer read it thoroughly. The other two panelists skimmed it. Everyone else glanced at the summary.

This isn’t because reviewers are lazy. They’re exhausted, brilliant researchers who read proposals outside their immediate expertise, often late at night, while also worrying about their own grants, their trainees, and the manuscript reviews they owe.

The storyboard approach works because it acknowledges this reality: reviewers are looking for a straightforward narrative they can grasp quickly and defend to the panel.

What Actually Happens in Review Panels

Here’s how it typically unfolds:

9:00 AM, Day Two of panel: The primary reviewer presents your proposal. They have 5 minutes to summarize your aims, approach, and why it matters. If they struggle to articulate your story coherently, you’re in trouble—not because your proposed science is bad, but because they can’t effectively advocate for you.

The secondary and tertiary reviewers add their perspectives. Then the panel discusses. The program officers watch for enthusiasm, coherence of the argument, and whether anyone is deeply opposed.

The proposals that succeed have champions—reviewers who “get it” immediately and can explain to others why it matters. The storyboard method makes that kind of championing possible.

What Reviewers Actually Look For

After watching this process play out thousands of times, here’s what I learned reviewers truly care about:

1. Can I explain this to the panel in 3 minutes?

If your research plan requires a flowchart to understand, the primary reviewer will simplify it—possibly incorrectly. Better to give them the simplified version yourself.

2. Is the question worth answering?

Not “is this interesting?” but “will anyone care about the answer?” Reviewers need to justify spending taxpayer money. Give them that justification explicitly.

3. Can this person actually do this?

No matter what is written down in the solicitation, preliminary data matters enormously, but not for the reason applicants think. It’s not about proving the hypothesis—it’s about proving you have the technical capability and haven’t missed an obvious problem.

4. Is this the right approach?

Reviewers are surprisingly forgiving about whether your specific hypothesis is correct. They’re much less forgiving about whether you’re using appropriate methods or have thought through alternatives.

5. Will this move the field forward?

Notice: not “revolutionize” or “transform”—just move forward. Incremental progress from a well-designed study beats a transformative idea with unclear methods. But doesn’t the call state that the proposed work should change the world? Sure, but from a practical standpoint, what counts for the reviewers is steady progress. And here’s the tricky part: while steady progress is what matters to the reviewers, transformative potential really does matter to the program officers, who make the penultimate decision. So, a balance is necessary.

What Reviewers Ignore (Even Though You Spent Weeks on It)

The extensive literature review: They skim it to see if you know the field. The 47 citations demonstrating your comprehensive knowledge? They checked that you cited the key papers and moved on.

Your detailed budget justification: Unless something looks wildly off, reviewers assume you know what your research costs. The line-by-line explanation of why you need that particular microscope? Skimmed.

Your publication list: They check three things: Do you publish in good journals? Are you productive? Have you published on this topic before? That’s it. The distinction between your 47th and 52nd paper doesn’t matter.

The broader impacts section that you agonized over: I feel guilty about this one because I’ve often harped on broader impacts as a central criterion. Truth: most reviewers read this quickly to verify you addressed it competently. Unless it’s either exceptional or terrible, it rarely drives funding decisions. And these days, broader impacts means how the work will benefit all American citizens (think public health) or US national security.

The Elements That Actually Drive Decisions

Clarity of the research goals: Can the reviewer recite your three main questions without looking at the proposal? If not, rewrite.

Logical flow: Does each aim build on the previous one? Or are they three unrelated projects stapled together? Reviewers can tell.

Feasibility signals: Preliminary data, established collaborations, access to necessary resources, realistic timeline. These say, “this person will actually complete this work.”

Positioning: Is this filling a real gap, or are you slightly tweaking someone else’s approach? Reviewers want to fund work that moves us somewhere new, even if incrementally.

The writing quality: Clear, direct prose suggests clear thinking. Dense, jargon-heavy writing suggests unclear thinking (even if that’s unfair).

The Most Common Mistake

Applicants try to impress reviewers with complexity and comprehensiveness. They want to show they’ve thought of everything, considered every alternative, read every paper.

But reviewers are looking for clarity and confidence. They want to understand quickly what you’re proposing and why it matters. They want to feel confident you’ll succeed.

The storyboard method works because it forces simplicity. If you can’t draw a simple picture of your proposal that an outsider immediately understands, you don’t have a fundable story yet.

But Wait, There’s More

As hinted at above, at NSF that panel review is strictly advisory. I’ve personally seen proposals with excellent reviews get declined, and the reverse. The key decision-maker? The cognizant program officer for the solicitation. These days, there’s an additional vetting step to check for alignment with the Administration’s political goals, but that’s a topic for a future newsletter.

What This Means for Your Proposal

Before you write a single word:

  • Can you explain your project in three sentences?
  • Can someone outside your subfield understand why it matters?
  • Do you have a clear narrative arc from question to approach to impact?

If not, you’re not ready to write. You’re ready to storyboard.

Build the simple, clear story first. Then elaborate carefully, making sure every detail serves that core narrative.

Reviewers are smart, busy people trying to identify good science under time pressure. Don’t make them work to understand your brilliance. Give them a story they can grasp, defend, and champion.

That’s what my colleague understood. And based on his funding success rate, the reviewers appreciate it.