
Meet the academics refusing to use generative AI

Source: NatureView Original
Science · May 5, 2026


Some researchers who refuse to use AI have been accused of being anti-progress, similar to the nineteenth-century Luddites who resisted the new machinery they feared would replace their jobs, but they say their views are more nuanced than that. Credit: Chronicle/Alamy

Danielle Crowley is getting tired of people telling her to use generative artificial intelligence (genAI). As a marine zoologist at Bangor University, UK, she says that she is pretty much the only PhD student in her cohort who does not use it. She has seen colleagues use genAI tools for coding and for getting the tone of e-mails right. On one occasion, she was even encouraged by a lecturer to use it to generate a conference poster.

She says her colleagues are often surprised to hear she hasn’t tried it, and some have suggested she use it for tasks such as coding. “I’ve had a lot of people go like ‘oh but you have to use it’,” she recalls. But Crowley has her reasons: she has concerns about copyright ethics, what she calls a lack of transparency from companies about how they use training data, the environmental effects of AI tools and the accuracy of what genAI models produce.

She also thinks that using the tools would be counterproductive to her studies. “Coding is a skill I want to learn and develop, because it’s not the thing I’m the most confident in,” she says. She would rather try to do it herself, learning from her mistakes.

Marine zoologist Danielle Crowley has concerns about the ethics and environmental impacts of generative AI tools. Credit: Laura Oatley

GenAI has become a hot topic over the past few years, as technology companies compete to release the most impressive model for public use. Researchers are using these tools for tasks such as writing papers, peer review and coding. The tools can save them time, mental energy and sometimes money. But Crowley and others who are purposefully abstaining often find themselves judged by their peers.

“A lot of people say ‘it’s the future, everyone is using it’,” she says. Not using it, she continues, “kind of feels like showing up to a function and saying you don’t drink”.

Efficient, but at what cost?

According to a Nature survey of about 5,000 researchers published in May last year, scientists are split on the ethics of AI use in academia. More than 90% of respondents felt it was acceptable to use AI for editing or translating their own text, but fewer were open to the idea of using it to generate text directly. And only a minority said they had actually used AI tools in their work. About one-quarter of respondents used them to edit their papers, whereas only 8% had used them to translate, summarize or write a first draft.

More recently, a survey of 3,234 researchers published last November by the academic publisher Elsevier found that 58% of researchers used AI in their work, up from 37% the previous year. In terms of how researchers use or would like to use AI tools, 61% said to locate new research, 51% said for collecting and summarizing literature and 41% said for preparing grant applications. Those surveyed were generally positive about the potential of the technology to boost efficiency.

Hugh Possingham, a mathematician and conservation scientist at the University of Queensland in Brisbane, Australia, is among the researchers who are not using AI. He has made a conscious effort to avoid any sort of genAI — instead pledging on LinkedIn to rely on “natural stupidity”.

As a mathematician, Hugh Possingham has seen examples of ‘hallucinations’ in AI-produced writing. Credit: Queensland Government (CC BY SA 4.0)

“I’ve never used any of them at all,” he says. Even though AI has become integrated into many everyday functions, he’s never clicked the button that generates or summarizes text when writing an e-mail, for example.

He is especially critical of the errors he has spotted in AI-produced writing. AI models sometimes ‘hallucinate’, presenting false or misleading information with conviction. “I read a master’s thesis where the person cited had died ten years before the paper was published, which is a masterful act,” he says.
