Retractions occur for several reasons, some related to research misconduct (such as plagiarism, fabrication and falsification). The continued citation of retracted work is a serious concern: it compromises and pollutes the body of scientific knowledge. Before citing a work, we should take steps to confirm it has not been retracted. This item, published in November 2023, looks at this problem in philosophy.
It is not hard to see the signs that peer review is in trouble. It is struggling under a crippling workload as the number of new papers submitted for review grows at an exponential rate. There are also indications that a lack of diversity among reviewers seriously undermines the process. Even though artificial intelligence tools offer a tempting solution, their bloopers and hallucinations are troubling. To date, most of the conversation about peer review and artificial intelligence has related to detecting the undisclosed use of AI tools in preparing outputs, with much less said about their use in the conduct of peer review itself. This thought-provoking piece, published by Inside Higher Ed in October 2023, looks at whether AI can be helpful when used to support the work of peer reviewers.
Chefs Panel Discusses AI, Integrity and Open Content in Frankfurt – The Scholarly Kitchen (Todd A Carpenter | October 2023)
A well-informed, thoughtful discussion about AI and scientific publishing: a valuable read for all stakeholders in the scholarly publishing space. Research institutions should have guidance material on this topic, and this item can usefully be linked to from that material. AHRECS would be delighted to deliver a short online presentation to your institution on this matter. Contact us at email@example.com to discuss.
(UK) Create PhD databases to flush out fraudsters, universities told – Times Higher Education (Jack Grove | October 2023)
This idea from the UK is excellent and well worth adopting by institutions in other jurisdictions. Such databases would be useful beyond confirming the bona fides of job applicants – for example, in checking that a dissertation is original work and not plagiarised from an earlier student's work.
Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect – WIRED (Amanda Hoover | August 2023)
Elsevier does not prohibit the use of ChatGPT and other LLMs, but it does require their use to be disclosed. This story, published by WIRED in August 2023, suggests that undisclosed use is seeping into scientific publishing and that we may not have an adequate defence. The case also raises the question: why did the peer reviewers and editors miss such an obvious flag?
Plagiarism by academics is serious. Any excuses had better be good – Times Higher Education (August 2023)
The national approach to research integrity in many jurisdictions classifies plagiarism as a serious breach of responsible research standards. As such, a person found to have committed plagiarism has committed research misconduct and should be held accountable. Nevertheless, this Times Higher Education story, published in August 2023, discusses how research institutions can tie themselves in knots by downplaying plagiarism by staff and treating such behaviour very differently from the way they treat plagiarism by students.
Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing (Papers: Gregory E. Kaebnick et al. | October 2023)
Abstract
Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors…
How ChatGPT and other AI tools could disrupt scientific publishing – Nature (Gemma Conroy | October 2023)
This interesting piece, published by Nature in October 2023, reflects on why we should be cautious about the use of ChatGPT, LLMs and other artificial intelligence systems in scientific writing, and considers the prospects for academic publishing. As we have noted before, research funding bodies, publishers, institutions and learned societies should provide guidance on the responsible use of such systems in research writing. That guidance should be thoughtful, nuanced and centred on practice, rather than a set of absolute rules to be followed in every case. The landscape is complicated further by the apparent lack of any reliable way to detect whether an artificial intelligence system has been used in the writing of an output.