The technology threatens to impoverish research and destroy humans’ ability to understand the social world, says Dirk Lindebaum
Since the launch of ChatGPT at the end of November last year, papers and articles on the possibilities and perils of the sophisticated chatbot for teaching and pedagogy have become a dime a dozen. But its effect on research is likely to be no less profound.
Much has been written about the surprising abilities of large language models (LLMs) such as ChatGPT, as well as their troubling bloopers and hallucinations. This piece observes that researchers who use such technology are, in fact, training the technology that will ultimately replace them. Even if that characterisation is accurate, the tool's evolution and refinement will not be slowed even if a majority of researchers decline to use it. The reality is that we all need to get much better at utilising these tools, so that they boost our creative and scientific work; the prospect of artificial intelligence being able to replicate that work seems a long way off. Even so, its use in research raises serious problems.
The first is that using the technology to compile literature reviews will impoverish our own analytical skills and theoretical imagination. When we write our own literature reviews, we read for understanding: we seek to know more than we did before through the power of our own minds. This involves a willingness to overcome the initial inequality of understanding that can exist between reader and author (such as when a PhD student reads a monograph in preparation for a first-year report). And the effort enables us to see and make new theoretical connections in our work.
But ChatGPT can’t understand the literature: it can only predict the statistical likelihood of the next word being “a” rather than “b”. Hence, the literature reviews it produces merely offer up food for thought that is past its best-before date, given that the training data are not necessarily current. This is why some commentators have described ChatGPT’s knowledge production as occurring “within the box”, rather than outside it.
Being able to understand the current literature and to harness the imagination is crucial for linking observed phenomena with theoretical explanations or understanding for improved future practice. The risk is that an over-reliance on ChatGPT will deskill the mental sphere, leaving us poorly equipped when we need solutions to novel, difficult problems.