Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

They agreed to listen to a complaint about a paper. Then the harassment began – Retraction Watch (Alison McCook | March 2017)

Posted by Admin on June 5, 2017

We receive our fair share of tips, and most are well-intentioned attempts to clean up the scientific literature. However, sometimes would-be critics can veer into personal attacks. As chair of the Committee on Publication Ethics, Virginia Barbour has seen a lot. But nothing quite prepared her for being cyberbullied by someone the organisation had agreed to listen to when they raised a complaint about a published paper. In this guest post, Barbour tells the story of how COPE’s attempts to assist led to hundreds of harassing emails and unfounded accusations of a cover-up, which the complainant spread indiscriminately.

The AHRECS team know Ginny, respect her work and respect the contribution COPE makes to the research integrity sphere, so we found this account doubly concerning.

By its very nature, publication and research ethics often includes issues that are hard to resolve, and it's not uncommon for journals to receive concerns from individuals about specific papers. COPE has guidance for its members on what to do when they are contacted by such individuals. We urge and support editors and publishers in taking issues raised seriously. Nonetheless, such individuals (whether anonymous or not) can experience difficulties in getting their cases heard and, in rare and unusual cases, face extreme measures to silence them.
At COPE, we therefore also have a mechanism whereby readers can raise concerns about an issue in a COPE member journal, if the journal and publisher have not been able to resolve the issue. We have devoted increasing resources to this mechanism, even though it is not the primary reason for which COPE was set up. As a membership organisation, COPE does not have regulatory authority over journals or publishers, but we can review the process the journal or publisher followed to determine if best practice was followed.

Read the rest of this discussion piece

Nightmare scenario: Text stolen from manuscript during review – Retraction Watch (Victoria Stern | March 2017)

Posted by Admin on May 27, 2017

A food science journal has retracted a paper over “a breach of reviewer confidentiality,” after editors learned it contained text from an unpublished manuscript — which one of the authors appears to have reviewed for another journal.

This awful case highlights the importance of professional development relating to the conduct of peer review, and underlines the need for good communication between collaborating researchers.

The publisher and editors-in-chief of the Journal of Food Process Engineering became aware of the breach when the author of the unpublished manuscript lodged a complaint that his paper, under review at another journal, had been plagiarized by the now retracted paper.
We’re hazy on a few details in this case. Although the journal editor told us the “main author” of the retracted paper reviewed the original manuscript for another journal, the corresponding author of the retracted paper said he was not to blame. (More on that below.)

Read the rest of this news story

Some Social Scientists Are Tired of Asking for Permission – The New York Times (Kate Murphy | May 2017)

Posted by Admin on May 25, 2017

Sometimes a change to national policy isn’t enough to alter institutional practice – especially when that practice has been entrenched for a few decades and is wrapped in institutional risk. This New York Times story highlights why there’s so much chatter around the change to the US ‘Common Rule’.

If you took Psychology 101 in college, you probably had to enroll in an experiment to fulfill a course requirement or to get extra credit. Students are the usual subjects in social science research — made to play games, fill out questionnaires, look at pictures and otherwise provide data points for their professors’ investigations into human behavior, cognition and perception.
But who gets to decide whether the experimental protocol — what subjects are asked to do and disclose — is appropriate and ethical? That question has been roiling the academic community since the Department of Health and Human Services’s Office for Human Research Protections revised its rules in January.
The revision exempts from oversight studies involving “benign behavioral interventions.” This was welcome news to economists, psychologists and sociologists who have long complained that they need not receive as much scrutiny as, say, a medical researcher.

Read the rest of this discussion piece

Four in 10 biomedical papers out of China are tainted by misconduct, says new survey – Retraction Watch (Mark Zastrow | May 2017)

Posted by Admin on May 23, 2017

Chinese biomedical researchers estimate that 40% of research in their country has been affected in some way by misconduct, according to a new survey.

The authors are quick to caution against putting too much stock in this figure due to the subjective nature of the survey, published in Science and Engineering Ethics. The estimates also spanned a wide range, with a standard deviation of ±24%. But they say that the responses to this question and others on the survey suggest that scientists in the region feel academic misconduct remains a major problem that authorities have failed to adequately address. (Indeed, a recent analysis from Quartz using Retraction Watch data showed that researchers based in China publish more papers retracted for fake peer reviews than all other countries put together.)

The survey was designed by employees at Medjaden, a Hong Kong-based editing company that assists mainland Chinese biomedical researchers publishing in English-language journals. They invited all of their registered users by email to complete two surveys—roughly 10,000 users in 2010 and 15,000 in 2015. Like most online surveys, this one had a low response rate—around 5%, so caveats about sampling bias apply.

Read the rest of this discussion piece
