Concerns Arise Over AI-Generated Content Flooding Academic Journals
ICARO Media Group
In a rapidly evolving digital landscape, the influx of artificial intelligence (AI)-generated content is raising concerns about the integrity of academic research and eroding trust in the scientific community. Scientific sleuth Elisabeth Bik has sounded the alarm, pointing out that AI-generated images and text are increasingly finding their way into academic papers, with potentially damaging consequences.
Recent cases have highlighted the wave of AI-generated text and images washing through the academic publishing industry. Figures depicting a rat with an impossibly large appendage and human legs with extra bones have drawn attention to the problem, exposing flaws in a multi-billion-dollar sector that the adoption of AI has only amplified.
Experts who specialize in spotting problems in academic studies say the rise of AI has made these issues worse. They stress that the use of AI programs such as ChatGPT, which can be genuinely useful for writing and translating papers, must be carefully checked and disclosed. Yet several recent cases show peer review failing to detect AI-generated content.
A significant incident this year involved a clearly AI-generated graphic published in a journal of the academic publishing giant Frontiers. The graphic, depicting a rat with exaggerated reproductive organs, spread widely on social media before the study was retracted. Another study was retracted last month over an AI-generated graphic showing legs with peculiar, multi-jointed bones resembling hands.
While these instances center on images, it is ChatGPT, the chatbot introduced in November 2022, that is believed to have most altered how researchers present their findings. In one notable case, the introduction of a study published by Elsevier went viral because it clearly contained a leftover ChatGPT response, opening with the line, "Certainly, here is a possible introduction for your topic." Such embarrassing examples are relatively rare and would be unlikely to survive the rigorous peer review of esteemed journals, according to experts who spoke to AFP.
Nevertheless, spotting the use of AI in academic papers is not always straightforward. One telltale sign is the overuse of words that ChatGPT favors, such as "meticulous", "intricate", or "commendable". Andrew Gray, a librarian at University College London, combed through millions of papers and estimated that more than 60,000 published in 2023 likely involved AI, over one percent of the year's total output. Gray expects that figure to rise significantly for 2024.
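As a rough illustration of this kind of lexical screening, the short Python sketch below flags texts whose rate of such marker words exceeds a cutoff. It is not Gray's actual methodology: the word list, the threshold, and the helper names are purely illustrative assumptions, and a real analysis would compare word frequencies across millions of papers and several years of baseline data.

```python
# Illustrative sketch only: flag texts that overuse words favored by ChatGPT.
# The marker list and threshold below are hypothetical, not Andrew Gray's method.

import re
from collections import Counter

# Words reported as disproportionately common in AI-generated prose.
MARKER_WORDS = {"meticulous", "meticulously", "intricate", "commendable", "notably"}

def marker_rate(text: str) -> float:
    """Return the share of tokens in the text that are marker words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[word] for word in MARKER_WORDS)
    return hits / len(tokens)

def flag_suspicious(abstracts: list[str], threshold: float = 0.005) -> list[int]:
    """Return indices of abstracts whose marker-word rate exceeds the threshold."""
    return [i for i, text in enumerate(abstracts) if marker_rate(text) > threshold]

if __name__ == "__main__":
    sample = [
        "We meticulously present an intricate and commendable analysis.",
        "We measured enzyme kinetics at three temperatures.",
    ]
    print(flag_suspicious(sample))  # [0]
```

A heuristic like this can only suggest, never prove, AI involvement: human authors also use these words, which is why Gray's figures are framed as estimates.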
The concerns around AI in academic publishing extend beyond the detection of overused words. AI has allowed bad actors to "industrialize the overflow" of low-quality or fake papers, including those produced by so-called paper mills. These outfits sell authorship to researchers and flood the literature with subpar, plagiarized, or fraudulent publications. Bik, a Dutch researcher who specializes in detecting scientific image manipulation, estimates that paper mills are behind around two percent of all published studies, a share she believes is growing rapidly as AI opens the floodgates.
This problem was underscored after publishing powerhouse Wiley acquired the troubled publisher Hindawi in 2021. Since the acquisition, Wiley has retracted more than 11,300 papers linked to Hindawi special issues. To combat such abuse, Wiley has introduced an AI-powered "paper mill detection service", an irony that underscores the complex interplay between technology and academic publishing. Retraction Watch co-founder Ivan Oransky, however, argues that the problem goes beyond paper mills, reflecting a broader academic culture that rewards quantity over quality and pushes researchers to relentlessly churn out publications.
While there are valid reasons to use AI, from saving time to helping non-English-speaking researchers translate their work, there are growing concerns about errors, fabrications, and inadvertent plagiarism introduced by AI systems. Samuel Payne, a bioinformatics professor at Brigham Young University, recently uncovered a ChatGPT-rewritten version of his own study in an academic journal. Asked to review the paper, he rejected it after realizing it was "100 percent plagiarism" of his work, with the text apparently rephrased by an AI program. Payne was then shocked to discover that the plagiarized work had simply been published anyway, in the Wiley journal Proteomics.
The broad implications of AI-generated content in academic publishing extend beyond individual cases of plagiarism or retraction. There is genuine fear that the growing presence of AI in research could erode society's trust in scientific advancements. As the academic community navigates this new terrain, striking a balance between leveraging the benefits of AI and preserving the integrity of research, and with it public trust, remains a pressing challenge.