ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library


Meet the woman who’s tracking down systematic research fraud – Elsevier (Jennifer A. Byrne and Christopher Tancock | July 2019)

Posted by Admin on August 6, 2019

The topic of research fraud is a serious – and growing – issue. In this article, we interview Professor Jennifer A. Byrne about her work in identifying systematic fraud, the software she’s helped develop and the pioneering work she’s been doing to promote a better appreciation and regard for the importance of a “clean” body of research literature.

An excellent Elsevier interview with Jennifer A. Byrne (who recently wrote the guest post in Research Ethics Monthly, The F-word, or how to fight fires in the research literature) about her personal drive to take on systematic research fraud. Well done Jenny!

Tell us a little about your background and research interests.
I’m a molecular biologist and a cancer researcher. My research interests include studying the functions of specific genes in cancer, investigating the genetic basis of childhood cancer predisposition, and studying the operations of cancer biobanks.

How did you begin your work on (systematic) fraud?
This started by accident, when I read five papers about a gene that my team had identified years before. These papers were very similar, even sharing particular nucleotide (or gene) sequence reagents. I could also see that the same reagent was being used in different ways, which couldn’t be right. Further analyses revealed that some reagents were wrongly identified, meaning that some reported results were impossible. When I realised that many other papers had these same types of errors, I fell into a strange new scientific reality, where I’ve been ever since.

Read the rest of this discussion piece

Singapore joins the rise of research integrity networks – Nature Index (Dalmeet Singh Chawla | July 2019)

Posted by Admin on August 3, 2019

Global effort to combat research misconduct gathers pace.

Research integrity professionals in Singapore have responded to a high-profile case of research misconduct by launching a professional network to discuss research integrity.

In a scandal that has rocked the island nation’s close-knit research community during the past three years, two researchers at Nanyang Technological University (NTU) had their doctorate degrees revoked after being found guilty of falsifying data.

The scandal led to the retraction and correction of several studies and resulted in Ravi Kambadur, the group’s leader — who held joint appointments at NTU and the Agency for Science, Technology and Research (A*STAR) — being dismissed for negligence.

Read the rest of this discussion piece

Journals’ Plagiarism Detectors May Flag Papers in Error – The Scientist (Diana Kwon | June 2019)

Posted by Admin on August 1, 2019

One recent case, in which a scientist claims his submitted manuscript was rejected despite a lack of actual plagiarism, highlights the limitations of automated tools.

If the researcher’s claims are true, this case points to an uncomfortable situation: institutional research misconduct processes need to be more robust and must not rely solely on automated detection tools.

Last week, Jean-François Bonnefon, a behavioral scientist at the French Centre National de la Recherche Scientifique, tweeted that a scientific manuscript he submitted to a journal had been rejected by a bot. The program had flagged his paper for plagiarism, highlighting the methods, references, and authors’ affiliations. “It would have taken 2 [minutes] for a human to realize the bot was acting up,” Bonnefon wrote in one of his tweets. “But there is obviously no human in the loop here.”

In a massive Twitter thread that followed, several other academics noted having similar experiences.

“I found [Bonnefon’s] experience quite disconcerting,” Bernd Pulverer, chief editor of The EMBO Journal, writes in an email to The Scientist. “Despite all the AI hype, we are miles from automating such a process.” Plagiarism is a complex issue, he adds, and although tools to identify text duplication are an invaluable resource for routine screening, they should not be used in lieu of a human reviewer.

Read the rest of this discussion piece

Retracted papers die hard: Diederik Stapel and the enduring influence of flawed science (Papers – preprint: Luis Morís Fernández & Miguel Vadillo | June 2019)

Posted by Admin on July 31, 2019

Self-correction is a defining feature of science. However, science’s ability to correct itself is far from optimal, as shown, for instance, by the persistent influence of papers that have been retracted due to faulty methods or research misconduct. In this study, we track citations to the retracted work of Diederik Stapel. These citations provide a powerful indication of the enduring influence of flawed science, as the (admittedly fabricated) data reported in these retracted papers provide no evidence for or against any hypothesis, and this case of fraud was widely known due to the extensive media coverage of the scandal. Our data show that Stapel’s papers are still cited in a favorable way within and beyond the psychological literature. To ameliorate this problem, we propose that manuscripts should be screened during the review process to detect citations to retracted papers.

Keywords: Citation, Retraction, Self-correction, Stapel

Morís Fernández, L., & Vadillo, M. A. (2019, June 19). Retracted papers die hard: Diederik Stapel and the enduring influence of flawed science.
(Pre-print CC)