Five reasons why including ChatGPT in your list of authors is a bad idea
Things have gotten out of control. Take this tweet:
It is time for a more nuanced and thoughtful discussion about ChatGPT. Forget science fiction and the hype for the moment. It is a good tool, but it doesn’t understand what it produces, or even really understand the question it has been asked. It regurgitates existing material that it has found from a handful of sources, which remains a form of plagiarism. The service can’t take responsibility for what it produces, cannot genuinely meet the criteria for authorship, and cannot be held accountable for its outputs. At most, researchers should use it to produce a component of a paper and then be prepared to edit that section heavily. Institutional policy, guidance material, and professional development must cover this.
ChatGPT hasn’t done the conversion correctly (answer below), but the will to believe in this great new tool, full of precision (down to the second!) yet lacking in reliability, is so strong that the author didn’t bother to check. And that’s the problem. Statistical word prediction is no substitute for real math, but more than a few people have assumed that it is. (As it happens, a 4 minute 21 second kilometer works out to a 7 minute mile; if you ask Google nicely, it will actually give you the right answer.)
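The arithmetic is easy enough to check without a chatbot. Here is a minimal sketch in Python, assuming the 4:21-per-kilometer pace in question and the standard 1.609344 km-per-mile conversion factor:

```python
# Check the pace conversion ChatGPT got wrong:
# 4 minutes 21 seconds per kilometer -> minutes per mile.
KM_PER_MILE = 1.609344  # international mile, in kilometers

pace_per_km_s = 4 * 60 + 21                    # 261 seconds per kilometer
pace_per_mile_s = pace_per_km_s * KM_PER_MILE  # ~420 seconds per mile

minutes, seconds = divmod(round(pace_per_mile_s), 60)
print(f"{minutes}:{seconds:02d} per mile")  # prints "7:00 per mile"
```

No statistics, no word prediction: one multiplication gets the answer a large language model fumbled.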
The worst thing about ChatGPT’s close-but-no-cigar answer is not that it’s wrong. It’s that it seems so convincing. So convincing that it didn’t even occur to the Tweeter to doubt it.
We are going to see a lot of this: misuses of ChatGPT in which people trust it, and even trumpet it, when it’s flat-out wrong.
Wanna know what’s worse than hype? People are starting to treat ChatGPT as if it were a bona fide, well-credentialed scientific collaborator.
Yesterday’s trend was hyping ChatGPT’s alleged Google-killing abilities. Today’s trend is listing ChatGPT as a co-author.