
Will Scientists Be Replaced by AI? How Generative AI Is Shaking Up Scientific Publication


A January 12, 2023 headline in Nature reads, “Abstracts written by ChatGPT fool scientists: Researchers cannot always differentiate between AI-generated and original abstracts.”1 Barely a week later, a Futurism headline follows: “A New Scientific Paper Credits ChatGPT AI as a Coauthor.”2 Flashy titles like these lead readers to believe that AI has taken over the compositional side of research. But are traditional methods of writing scientific manuscripts really becoming obsolete, and does this threaten the originality and reproducibility of publications?

So, what is “generative AI”?

ChatGPT is one of many generative AI tools built on large language model (LLM) technology, conceptually framed as a neural network that recognizes textual – and sometimes visual – patterns. From these patterns, responses are strung together by computing the word or phrase with the highest probability of occurring in real human conversation. ChatGPT hits the mark: user-supplied prompts are met with approachable, highly malleable and concise responses, which become increasingly accurate as prompts become more specific. ChatGPT’s creator, OpenAI, claims its mission is to advance “the social and economic benefits of AI technology” for “all of humanity,” and in many ways its intentionally conversational AI model could be one step in the right direction.3
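To make that next-word mechanic concrete, here is a minimal sketch using the openly released GPT-2 model through the Hugging Face transformers library. GPT-2 stands in for ChatGPT’s underlying model, which is not publicly downloadable, and the prompt is purely illustrative.

```python
# Minimal sketch: ask a small, open LLM (GPT-2) which tokens are most
# likely to come next -- the core mechanic described above.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The mitochondria is the"  # illustrative prompt
ids = tokenizer(prompt, return_tensors="pt").input_ids

# The model scores every token in its vocabulary; text generation simply
# appends a high-probability token and re-scores, over and over.
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(tok.item())!r}: {p:.3f}")
```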

The potential advantages of AI aren’t overstated

For research, it’s easy to recognize the advantages. Generative AI has the potential to dramatically speed up the writing process for scientific papers, especially introduction and background sections, so scientists can spend more time frying the bigger fish – devising conceptual mechanisms and testing ideas. Researchers, especially those in academia, are already overworked, so AI taking on a few of the hats a researcher wears is a significant benefit. Beyond efficiency, ChatGPT and other LLMs can create equity between English-speaking and non-English-speaking researchers – “98% of publications in science are written in English.”4 The translation and predictive functions of these models can help non-English speakers clear the conscious and subconscious language barriers to entry at many journals. Instead of paying for human translation services and agonizing over the complex grammatical conventions of English, non-English-speaking researchers can redirect that energy toward more fruitful intellectual pursuits; a sketch of such a workflow follows this paragraph. Moreover, the generative capabilities of AI aren’t limited to text; they extend to images. Generating images from a prompt could be quite helpful for scientific figures, eliminating the need for complex illustration software or commissioned animation services entirely.
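As a rough illustration of that translation workflow, the sketch below sends a draft sentence to a model through OpenAI’s Python SDK. The model name, prompt wording and sample sentence are assumptions made for demonstration, not a procedure endorsed by any journal.

```python
# Hedged sketch: using an LLM as a translation aid for a manuscript draft.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Los resultados sugieren que la proteína regula la división celular."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; substitute whatever is current
    messages=[
        {"role": "system",
         "content": "Translate the user's scientific text into clear, "
                    "formal academic English, preserving technical terms."},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```

Any output would still need checking by the author, for exactly the vetting reasons discussed below.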

But are there any downsides?

Using generative AI technology isn’t so clear-cut. Generative AI is prone to what the industry calls “hallucinations,” in which a language model “writes plausible-sounding but incorrect or nonsensical answers.”3,5 Generative AI models, including ChatGPT, are trained on broad swaths of the internet and are prone to making uncredited and unsupported claims. A response could be a biased Frankenstein of previous literature or, worse, directly plagiarized without the user’s knowledge. It takes a subject-matter expert to catch and edit errors in a model’s responses; the raw output alone is simply not vetted well enough to be copied from point A (the model) to point B (a manuscript). Taken together, this could be detrimental in a scientific setting, where every statement is carefully paper-trailed with cited literature and meant to be reproducible. Peer review will become more difficult, methods will be harder to replicate and the scientific logic behind a particular study will be even harder to trace.

Science has banned AI-generated text outright, a decision made by editor-in-chief Holden Thorp, and the technology faces serious uphill battles at other publishers. Elsevier explicitly states on its publishing-ethics page that all generative AI use must be disclosed and may only be used to “improve [the] readability and language of the work.”6 The fears of these well-respected journals are not unfounded: AI may well delegitimize real research and flood the publication market with scientifically unsound, misinformative and unreproducible reports.

Unfortunately, policies regarding AI use are difficult to enforce. A Northwestern University research team led by Dr. Catherine Gao ran ChatGPT-generated abstracts through a plagiarism checker, and the abstracts passed with flying colors – a median originality score of 100%. An AI-output detector fared little better, correctly identifying only “68% of generated abstracts as being generated, and…86% of original articles as being original,” while producing false positives and false negatives throughout.1 OpenAI’s own AI-text checker, the GPT-2 Output Detector, is only reliable when text has been generated directly by AI and left unedited by human hands. There is not yet a dependable formula for deciding whether something was written by AI, so honesty, by default, is the only policy.
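For readers who want to experiment, a RoBERTa-based version of the GPT-2 output detector is publicly hosted on Hugging Face; scoring a passage might look like the sketch below. The sample abstract is invented, and as noted above, the scores are probabilities rather than proof.

```python
# Sketch: score a passage with the publicly released GPT-2 output detector.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

abstract = "We investigated the role of protein X in cell division..."  # invented
print(detector(abstract)[0])
# e.g. {'label': 'Real', 'score': 0.93} -- label names come from the model card
```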

Consumers remain hungry for more

As the old adage goes, “out with the old, in with the new.” Despite the downsides, consumer hunger for innovative AI technology has left tech giants scrambling to invest. According to a report by Market.us, the generative AI market is projected to grow into a USD 151.9 billion industry by 2032.7 Additionally, Australia’s national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), claims that “nearly 98% of scientific fields have already implemented AI in some capacity.”8 The rapid growth of AI across every commercial and private sector is impossible to ignore, but for now, considering the disadvantages, our jobs as researchers are safe.

References
  1. https://www.nature.com/articles/d41586-023-00056-7
  2. https://futurism.com/scientific-paper-credits-chatgpt-ai-coauthor
  3. https://openai.com/about
  4. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0238372
  5. https://www.biorxiv.org/content/10.1101/2022.12.23.521610v1.full.pdf
  6. https://www.elsevier.com/about/policies/publishing-ethics
  7. https://www.globenewswire.com/news-release/2023/04/03/2639263/0/en/Generative-AI-Market-Observes-Strong-Growth-Potential-With-Projected-Market-Size-of-USD-151-9-Bn-by-2032.html
  8. https://venturebeat.com/ai/harnessing-the-power-of-gpt-3-in-scientific-research/