Perspective: How should the advancement of large language models affect the practice of science?

In this thought-provoking Perspective, four sets of authors express their opinions about the use of large language models (LLMs, such as ChatGPT) in the practice of science. Each essay is well reasoned, and each identifies both strengths and limitations of LLMs. There’s agreement that LLMs are good for some tasks, such as transcribing audio recordings and searching for related manuscripts. However, these applications are vastly different from some of those proposed for LLMs, such as peer review of research publications or anticipating how humans might respond to a survey. I encourage you to read this article and discuss it with your students or peers, as it is clearly something we all must be informed about. Personally, I fear that widespread use of LLMs will negatively impact the development of trainees’ critical reading and writing skills, and I also worry about the perpetuation of inaccuracies as the products of LLMs feed forward into other LLM-crafted texts. Finally, the broad-scale elimination of scientific jobs occurring presently is being justified in part by the ability of LLMs and other AI approaches to “replace” human intellect, an idea that is morally and intellectually repellent. (Summary by Mary Williams @PlantTeaching.bsky.social) PNAS 10.1073/pnas.2401227121