It feels as though, during the COVID pandemic, the red-faced screaming over what is true and over conspiracy theories got much worse, just at a time when clarity and sober reasoning were most needed. Listening, and treating our opponents with compassion, were in short supply. In this context, a scientist admitting a mistake and acting to correct the scientific record can seem to require Herculean courage. This piece published by Science looks at the forces in play and why we should all aspire to such courage.
(Germany) An analysis of functional relationships between systemic conditions and unethical behavior in German academia (Preprint: Nicole Bossel-Debbert et al. | November 2023)
Abstract: This paper is an updated English version of a report filed by a commission that the German Psychological Society
Retractions occur for several reasons, some related to research misconduct (such as plagiarism, fabrication and falsification). The continued citation of retracted work is a significant and serious concern: it means the body of scientific knowledge is being compromised and polluted. Before citing a work, we must take steps to ensure it has not been retracted. This item, published in November 2023, looks at this problem in philosophy.
Should scientists include their race, gender, or other personal details in papers? – Science (Rachel Zamzow | November 2023)
We are aware of the argument for diversity in participant cohorts for clinical trials and clinical research (which is essential to the applicability and veracity of trials/projects). We know the arguments about diversity among peer reviewers (being alert to parachute research and to the voice of marginalised communities). We strongly support the calls for constructive change in these important areas. It is definitely time for change. We are less familiar with the arguments for diversity and identity statements regarding authors. This excellent piece presents both sides of the argument. We can see the case for describing the lived experience of authors when they are writing about a particular demographic, but we are not convinced it should be a general requirement.
This piece, published by Mother Jones in November 2023, looks at why so many research outputs are being retracted. As the item observes, despite startling international cases, retractions still amount to much less than 1% of published work. Even adding the papers Retraction Watch co-founder Ivan Oransky believes are likely to require retraction, it remains an incredibly small proportion of the total volume of published academic work. The piece discusses rewarding/funding the sleuths who detect and call out dodgy papers.
Plagiarism by academics is serious. Any excuses had better be good – Times Higher Education (August 2023)
The national approach to research integrity in many jurisdictions classifies plagiarism as a serious breach of responsible research standards. As such, a person who appears to have committed plagiarism has committed research misconduct and should be held accountable. Nevertheless, this Times Higher Education story, published in August 2023, discusses how research institutions can tie themselves in knots by downplaying plagiarism by staff and treating such behaviour very differently from the way they treat plagiarism by students.
(US) Co-developer of Cassava’s potential Alzheimer’s drug cited for ‘egregious misconduct’ – Science (Charles Piller | October 2023)
City University of New York’s Hoau-Yan Wang couldn’t provide original data to refute allegations of image manipulation, university says
Signs of undeclared ChatGPT use in papers mounting – Retraction Watch (Frederik Joelving | October 2023)
The use of artificial intelligence systems such as ChatGPT in the writing of research outputs without disclosure is a significant concern. Not least because such systems do not genuinely understand their instructions, the topic or the text they produce. There is also a good chance that the material will be plagiarised, even if only as compression plagiarism. Institutions, research funding bodies, publishers and learned societies need to provide researchers with clear guidance on the use of AI systems such as LLMs in research outputs.