An AI wrote the ‘Broader Impacts’

As artificial intelligence systems grow more advanced, they are being applied in novel ways that raise ethical questions. One such application is using AI tools to assist in preparing grant proposals for submission to funding agencies like my former agency, the National Science Foundation (NSF). On the surface, this may seem innocuous – after all, AI is just a tool being used to help draft proposals, right? However, using these systems raises several ethical concerns that must be considered.

First, there is the issue of authorship and originality. If an AI system generates significant portions of a grant proposal, can the researchers truly claim it as their own original work? And does it really matter? After all, the currency of science is discovery and the revealing of new knowledge, not filling out another postdoctoral mentoring plan. My own sense is that AI in an assistive role is fine. And at some point in the not-too-distant future, AIs may act more as partners, as they now do in competitive chess.

Relatedly, the use of AI grant-writing tools risks distorting the assessment of researchers' true capabilities. Grant reviews are meant to evaluate the creativity, critical thinking, and scientific merit of the applicants. If parts of the proposal are artificially generated, a fair judgment becomes impossible, undermining the integrity of the review process. Researchers who rely on extensive AI assistance gain an unfair advantage over their peers. But what can we do to stop this? It seems to me that the horse has left the barn on that one. Perhaps it's fairer to assess a scientist in a post-hoc fashion, based on what they've accomplished scientifically.

There are also concerns that AI-aided grant writing exacerbates social inequalities. These systems can be costly to access, disproportionately benefiting researchers at elite universities or with ample funding. This widens gaps between the research “haves” and “have nots” – those who can afford the latest AI tools versus those relying solely on their own writing. The playing field should be as even as possible.

Additionally, the use of grant-writing AI poses risks to the advancement of science. If these systems encourage generic proposals that repeat current trends rather than advance groundbreaking ideas, they could stifle innovation. Researchers may become over-reliant on the technology rather than developing their own critical thinking capacity. Shortcuts to generating proposals can inhibit big-picture perspective.

AI is a powerful emerging technology, but it must be utilized carefully and ethically. Researchers adopting these tools have a duty to consider these ethical dimensions. Honesty and transparency about process, maintaining fairness, and upholding originality are critical to preserving research ethics. The ends do not necessarily justify the means – progress must not come at the cost of our values.