One recent case, in which a scientist claims his submitted manuscript was rejected despite a lack of actual plagiarism, highlights the limitations of automated tools.
If the researcher’s claims are true, this case points to an uncomfortable reality: institutional approaches to research misconduct need to be more robust and should not rely solely on automated detection tools.
In a massive Twitter thread that followed, several other academics noted having similar experiences.
“I found [Bonnefon’s] experience quite disconcerting,” Bernd Pulverer, chief editor of The EMBO Journal, writes in an email to The Scientist. “Despite all the AI hype, we are miles from automating such a process.” Plagiarism is a complex issue, he adds, and although tools to identify text duplication are an invaluable resource for routine screening, they should not be used in lieu of a human reviewer.