Last week, an environmental journal published a paper on the use of renewable energy in cleaning up contaminated land. To read it, you would have to pay 40 euros. But you still wouldn’t know for sure who wrote it.
Ostensibly authored by researchers in China, “Revitalizing our earth: unleashing the power of green energy in soil remediation for a sustainable future” includes the extraneous phrase “Regenerate response” at the end of a methods section. For those unfamiliar, “Regenerate response” is a button in OpenAI’s ChatGPT that prompts the chatbot to rework an unsatisfactory answer.
“Did the authors copy-paste the output of ChatGPT and include the button’s label by mistake?” wondered Guillaume Cabanac, a professor of computer science at the University of Toulouse, in France, in a comment on PubPeer.
And, he added, “How come this meaningless wording survived proofreading by the coauthors, editors, referees, copy editors, and typesetters?”
The case is the latest example of a growing trend of sloppy, undeclared use of ChatGPT in research. So far, Cabanac, whose work was covered in Nature last month, has posted more than 30 papers on PubPeer that contain those two telltale, free-floating words. And that’s not including articles that appear in predatory journals, the scientific sleuth told Retraction Watch.
“Computer software has been used for decades to support the authors,” Cabanac told us. “Just think about Grammarly or DeepL for people like me. I’m not a native English speaker, so I go to WordReference, I go sometimes to DeepL. But what I do, I look at the result and I correct the mistakes.”
ChatGPT and other tools relying on AI systems known as large language models tend to make things up. As we reported earlier this year, that freelancing can be a problem for researchers looking for help to find references.
“Sometimes it elaborates things that were not in the head of the researchers,” Cabanac said. “And that’s the tipping point to me. When people use the system to generate something that they hadn’t in mind, like fabricating data, generating some text with references to works they didn’t even read, this is unacceptable.”
According to some publishers, chatbots do have legitimate uses when writing papers. The key is to let readers know what was done.
The corresponding author on the environmental paper, Kangyan Li of ESD China Ltd., did not respond to requests for comment. Nor did a contact person listed on his company’s website.
A spokesperson for Springer Nature, which publishes the journal Environmental Science and Pollution Research in which the article appeared, said the publisher was “carefully investigating the issue in line with COPE best practice” but could not share further details at the moment.
How the authors, let alone the journal, could have missed the strange phrase is unclear. “Maybe it’s not about the authors, maybe it involves a paper mill,” Cabanac said, referring to dodgy organizations selling author slots on scientific papers that may contain fabricated data.
He added that he and his frequent collaborator Alexander Magazinov, another sleuth, have found dozens of suspicious papers in Environmental Science and Pollution Research. They notified the journal’s editor-in-chief, Philippe Garrigues of Université de Bordeaux, in France, of the problems last year.
In an email seen by Retraction Watch, Garrigues told Cabanac that he had already taken action and that “this is not over.” Garrigues added (translated from the French):
Believe me, I am well aware of all the problems that can arise in the world of scientific publishing and new ones arise every day. I could write entire books about my experience as an editor of several journals and the cases I encountered. Vigilance and attention must be the rule at all times.
Garrigues did not respond to a request for comment.
“Regenerate response” is not the only sign of undeclared chatbot involvement Cabanac has seen. An even more egregious example is the phrase “As an AI language model, I …,” which he has found in nine papers so far.
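The kind of screening Cabanac describes can be sketched as a plain-text search for telltale phrases. The snippet below is a minimal illustration of that idea only; the phrase list and function are hypothetical and do not represent his actual tooling.

```python
# Illustrative heuristic: flag text that contains free-floating chatbot
# phrases such as "Regenerate response". Not Cabanac's actual method.
TELLTALE_PHRASES = [
    "Regenerate response",
    "As an AI language model",
]

def flag_telltale_phrases(text: str) -> list[str]:
    """Return the telltale phrases found verbatim in the text."""
    return [phrase for phrase in TELLTALE_PHRASES if phrase in text]

sample = "... as described in the methods above. Regenerate response"
print(flag_telltale_phrases(sample))  # ['Regenerate response']
```

A real screen would of course need to handle line breaks, hyphenation, and false positives (e.g., papers that discuss these phrases deliberately), which a verbatim match does not.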
Cabanac worries about how such flagrant sloppiness, arguably the tip of the iceberg, can slip past editorial staff and peer reviewers alike.
“These are supposed to be the gatekeepers of science – the editors, the reviewers,” he said. “I’m a computer scientist. I’m not in the business of these journals. Still, I get the red flag, ‘Regenerate response.’ That’s crazy.”