Happy New Year! Below is Science Policy Insider’s first posting of 2026:
We were reviewing a proposal that included gorgeous preliminary data: confocal microscopy images from what was, at the time, cutting-edge two-channel laser-scanning technology. The images were crisply in focus and colored green and red to mark the locations of two different sub-cellular fluorescent molecular probes. On the strength of those images alone, the proposal felt extraordinary, never mind that it offered no working hypothesis and no technical consideration of autofluorescence, the phenomenon in which a cell's own biomolecules emit signal that can be confused with the signal coming from the two probes.
The panel discussion revealed the problem. Some reviewers were ready to fund based solely on the images. Others raised the autofluorescence issue and the missing hypothesis. But even the skeptics prefaced their concerns with "The data are beautiful, but…" Those pictures had done their job: they made weak science look compelling.
That’s when I learned: awesome preliminary data can cloud objectivity. After reviewing thousands of grants at NIH and NSF over three decades, I’ve seen it happen repeatedly.
So, as you plan your 2026 submissions, here’s what I wish I’d known from the start—lessons that might save you the same learning curve.
Lesson 1: Clarity Beats Cleverness
In my early days, I thought impressive vocabulary and complex sentences demonstrated sophistication. Surely reviewers would appreciate nuanced, academic writing that showcased the full complexity of my thinking. I was wrong.
Clarity wins every time. Reviewers are overwhelmed, often reading up to 15 proposals a week while managing their own labs, teaching loads, and grant deadlines. Simple, direct writing isn’t dumbing down your science—it’s respecting your reviewers’ cognitive bandwidth and making your research accessible to the non-specialists who might be reading it.
Several decades ago, I learned the "grandmother" test: if you can't explain your research clearly and simply to someone outside your immediate field (your grandmother, say), it's probably not clear enough for a review panel where only one or two people are genuine specialists in your exact area.
Here’s my practical advice: Read your overall proposal goals (or aims) out loud. If you stumble over your own sentences, reviewers will too. Remember that if you can’t explain it, you probably don’t understand it well enough yourself. Make the first paragraph of each section a roadmap for what follows. And use jargon only when necessary—when there’s genuinely no more straightforward way to say it.
I once reviewed a proposal with brilliant research that was nearly incomprehensible to anyone outside the PI’s subspecialty. The same panel reviewed another proposal that explained equally complex ideas with straightforward language. Guess which one got funded?
Lesson 2: Preliminary Data Is About Trust, Not Volume
Early in my career, I believed more data equaled a stronger proposal. Fill those pages with figures! Show them everything you’ve got! Every additional graph strengthens your case, right?
Wrong. It’s the quality of the data that counts.
Here’s what preliminary data does: it answers the question “Can this PI execute what they’re proposing?” It’s not about impressing reviewers with how much you’ve already accomplished. It’s about building trust that you can deliver on your promises. And here’s the thing that surprised me most: including the wrong preliminary data raises more questions than having no data at all.
Show that you can execute the specific methods you're proposing. Demonstrate the feasibility of your key innovation, the part that's novel and risky. If you don't have the right preliminary data yet, address that gap head-on rather than papering over it with tangentially related work.
The deeper insight here is that reviewers are assessing risk. They’re not asking, “Do you have data?” They’re asking, “Do I trust you can deliver what you’re promising with taxpayer money?” Those are fundamentally different questions.
Lesson 3: Broader Impacts Require Situational Awareness
I initially treated broader impacts as a required checkbox. Standard language about societal benefits and outreach seemed perfectly adequate—everyone writes similar things, right? Just describe some plausible activities and move on.
Reviewers can spot boilerplate instantly. We’ve read hundreds of proposals with identical broader impacts sections, and they all blur together into meaningless noise.
The best broader impacts sections connect to who you are and what you're genuinely already doing, in ways that serve real national interests. Integration with your research and your actual life matters far more than ambitious plans that sound good on paper.
Scaling is essential: build on what you’re already doing rather than inventing entirely new programs you’ll never have time to implement. Be specific rather than grandiose. If you already mentor undergrads in your lab, explain how this project will train them in new techniques. If you have existing connections to a local K-12 program, describe how you’ll use them—don’t manufacture new partnerships from whole cloth.
Here’s the tell: “We will develop outreach materials” raises immediate skepticism. But “I teach a summer workshop at Lincoln High School’s science program—this research will provide three new hands-on modules on climate modeling” builds trust. One is a vague promise. The other is a concrete plan rooted in existing relationships.
Lesson 4: Budget Justification Actually Matters
I used to think budgets were purely administrative. Surely reviewers barely glanced at them—they cared about the science, not the accounting, right? Standard rates and percentages seemed perfectly sufficient.
Reviewers absolutely read budget justifications. We look for alignment between what you’re proposing to do and what you’re proposing to spend. Misalignment raises immediate red flags. And here’s something that surprised many junior faculty I’ve mentored: over-budgeting is just as problematic as under-budgeting.
Every major budget line should connect clearly to a specific aim in your proposal. Justify why you need that piece of equipment—what will it do that your existing infrastructure can’t? Personnel effort should match the work described. If you’re requesting 50% effort for a postdoc, reviewers should see that postdoc playing a central role in half your aims.
Red flags I’ve seen repeatedly: proposing ambitious international fieldwork with minimal travel budget or requesting full postdoc salary when the proposal’s narrative gives that postdoc almost nothing to do. These inconsistencies make reviewers wonder whether you’ve really thought through how the work will get done.
Lesson 5: How You Handle Weaknesses Reveals Everything
I once believed you should never acknowledge limitations. Defend every choice—project confidence at all costs. Any admission of weakness would be seized upon by reviewers looking for reasons to reject your proposal.
This might be the lesson I wish I’d learned earliest. Reviewers already see the weaknesses in your proposal. Pretending problems don’t exist destroys your credibility far more than the limitations themselves.
How you address limitations reveals your scientific maturity. Acknowledge real problems early and directly. Then explain your mitigation strategy: “If plan A fails, we will try plan B because…” Show you’ve thought through alternatives and have realistic contingency plans.
This lesson became even clearer when I started seeing resubmissions. The response letter matters as much as the revised proposal itself. A defensive tone—arguing with reviewers, insisting they misunderstood you—equals instant rejection. But a response that says “We appreciate the panel’s insights. We have substantially revised Section 2.3 to address concerns about statistical power. New preliminary data (Figure 3) demonstrates feasibility of the alternative approach” shows growth and responsiveness.
Panels respect PIs who demonstrate scientific judgment far more than those who claim perfection. We know perfect proposals don’t exist. We want to see that you can identify problems and solve them.
Lesson 6: The Human Element of Review
I believed grant review was a purely objective, data-driven process where careful reviewers gave equal attention to every proposal, systematically evaluating each against clear criteria.
Reviewers are human. They're tired. They're distracted. They have bad days. Panel dynamics matter: who speaks up first, who's respected, who's combative. Your proposal isn't evaluated in isolation; it competes with the others in that review cycle, and comparison effects are real even if program officers say they shouldn't be.
Here’s the practical reality: reviewers read proposals at night, on weekends, while traveling. They’re squeezing this work into already overwhelming schedules. If they’re confused by page two, they may never fully engage with your brilliant idea on page eight. Your first page matters disproportionately.
Make your innovation immediately clear. Give reviewers ammunition to advocate for you in panel discussions—clear summary statements they can quote, compelling preliminary data they can point to. The discussant’s job is to convince other panelists to fund your work. Make their job easy.
This isn’t unfair. It’s simply reality. Design your proposal for the actual conditions of review, not the idealized version where everyone reads every word with perfect attention on a quiet Sunday morning with fresh coffee.
Lesson 7: Resubmissions Are About Demonstrating You Listened
I initially thought resubmissions were second chances to explain myself better. The reviewers had clearly misunderstood my brilliant idea. Now I’d show them what I really meant, with more precise explanations and stronger arguments.
Resubmissions are about showing scientific growth. They demonstrate whether you can receive criticism, integrate feedback, and improve your work. The reviewers weren’t wrong—or at least, whether they were wrong doesn’t matter. What matters is whether you can respond constructively to their concerns.
Start your response letter with genuine gratitude, not perfunctory politeness. Group your responses to criticisms thematically rather than addressing them line by line, which makes you look defensive. Show clearly what you changed and where reviewers can find those changes in the revised proposal. If you genuinely disagree with a criticism, do so respectfully and support your position with data, not rhetoric.
The successful resubmissions I’ve seen follow a pattern: acknowledge the feedback, explain the changes, demonstrate improvement with new evidence. The unsuccessful ones argue, defend, and explain why the reviewers didn’t understand the first time.
What These Lessons Reveal About Science Funding
These aren't just tips for better grant writing. They reveal something more profound about how American science funding works. As I've written before, the current system prioritizes risk mitigation over bold ideas. It values clear communication and demonstrated competence over theoretical brilliance. It rewards incremental progress from established investigators more readily than moonshots from newcomers.
I'm not criticizing (this time); it's how the system is designed. When you're allocating billions in taxpayer dollars, trust and deliverability matter. Understanding this cultural logic helps you work within the system more effectively.
And it raises an interesting question I’m exploring in my new work on international science policy: Do other countries fund science differently because they assess risk differently? Do European or Asian funding systems embed different assumptions about what science should accomplish? That’s a topic for future posts.
These lessons came from mistakes, from failed proposals, from thousands of hours in review rooms watching good science get rejected for preventable reasons. I wish I’d understood them earlier in my career. I’m offering them now to help you avoid the same learning curve.
What hard-won lessons have you learned about grant writing? What advice would you give your younger self? Share in the comments.
As you prepare your 2026 submissions, remember there’s a human being on the other side of your proposal. Make their job easier. Help them advocate for your science. Give them reasons to say yes.
