Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

European universities dismal at reporting results of clinical trials – Nature (Nic Fleming | April 2019)

Posted by Admin on September 11, 2019
 

Analysis of 30 leading institutions found that just 17% of study results had been posted online as required by EU rules.

Failing to post the results of a clinical trial is not only a technical breach: it wastes resources, places an unwarranted burden on volunteers and is a public health issue.  Does your institution follow up to check whether results have been reported?  Is action taken if they haven’t?

Many of Europe’s major research universities are ignoring rules that require them to make public the results of clinical trials.

A report published on 30 April found that the results of only 162 of 940 clinical trials (17%) that were due to be published by 1 April had been posted on the European Union’s trials register. The 30 universities surveyed are those that sponsor the most clinical trials in the EU. Fourteen of these institutions had failed to publish a single results summary.

If three high-performing UK universities are excluded from the figures, the results of just 7% of the trials were made public on time. Campaigners say the resulting lack of transparency harms patients by undermining the efforts of doctors and health authorities to provide the best treatments, slows medical progress and wastes public funds.
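As a quick sanity check (an illustration, not part of the report), the headline 17% figure follows directly from the counts quoted above; the 7% figure excluding the three high-performing UK universities cannot be reproduced here because the article does not give the per-university breakdown.

```python
# Minimal sketch: verifying the headline compliance rate from the counts quoted above.
trials_due = 940      # clinical trials due to report results by 1 April
results_posted = 162  # results summaries actually posted on the EU trials register

compliance = results_posted / trials_due
print(f"On-time reporting rate: {compliance:.0%}")  # -> 17%
```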

Read the rest of this discussion piece

Why we shouldn’t take peer review as the ‘gold standard’ – The Washington Post (Paul D. Thacker and Jon Tennant | August 2019)

Posted by Admin on September 10, 2019
 

It’s too easy for bad actors to exploit the process and mislead the public

In July, India’s government dismissed a research paper finding that the country’s economic growth had been overestimated, saying the paper had not been “peer reviewed.” At a conference for plastics engineers, an economist from an industry group dismissed environmental concerns about plastics by claiming that some of the underlying research was “not peer reviewed.” And the Trump administration — not exactly known for its fealty to science — attempted to reject a climate change report by stating, incorrectly, that it lacked peer review.

Researchers commonly refer to peer review as the “gold standard,” which makes it seem as if a peer-reviewed paper — one sent by journal editors to experts in the field who assess and critique it before publication — must be legitimate, and one that’s not reviewed must be untrustworthy. But peer review, a practice dating to the 17th century, is neither golden nor standardized. Studies have shown that journal editors prefer reviewers of the same gender, that women are underrepresented in the peer review process, and that reviewers tend to be influenced by demographic factors like the author’s gender or institutional affiliation. Shoddy work often makes it past peer reviewers, while excellent research has been shot down. Peer reviewers often fail to detect bad research, conflicts of interest and corporate ghostwriting.

Meanwhile, bad actors exploit the process for professional or financial gain, leveraging peer review to mislead decision-makers. For instance, the National Football League used the words “peer review” to fend off criticism of studies by the Mild Traumatic Brain Injury Committee, a task force the league founded in 1994, which found little long-term harm from sport-induced brain injuries in players. But the New York Times later discovered that the scientists involved had omitted more than 100 diagnosed concussions from their studies. What’s more, the NFL’s claim that the research had been rigorously vetted ignored that the process was incredibly contentious: Some reviewers were adamant that the papers should not have been published at all.

Read the rest of this discussion piece

How often do authors with retractions for misconduct continue to publish? – Retraction Watch (Ivan Oransky | May 2019)

Posted by Admin on September 8, 2019
 

How does retraction change publishing behavior? Mark Bolland and Andrew Grey, who were two members of a team whose work led to dozens of retractions for Yoshihiro Sato, now third on the Retraction Watch leaderboard, joined forces with Vyoma Mistry to find out. We asked Bolland to answer several questions about the new University of Auckland team’s paper, which appeared in Accountability in Research.

Retraction Watch (RW): You “undertook a survey of publication rates, for authors with multiple retractions in the biomedical literature, to determine whether they changed after authors’ first retractions.” What did you find?

Mark Bolland (MB): We wondered whether people continue to publish after they have had more than one of their papers retracted. We identified 100 authors with more than one first-author retraction from the Retraction Watch database (the top 10 from the Retraction Watch leaderboard, 40 with at least 10 retractions, and 50 with 2-5 retractions). Eighty-two of these authors were associated with a retraction in which scientific misconduct was listed as a reason for retraction in the Retraction Watch database.

Read the rest of this discussion piece

Doing the right thing: Psychology researchers retract paper three days after learning of coding error – Retraction Watch (Adam Marcus | August 2019)

Posted by Admin on August 21, 2019
 

The news that you’ve made a critical error in the analysis of a project’s data can be devastating, particularly given the career-harming consequences that can be associated with retractions. So, like Retraction Watch, we congratulate this psychology team for their prompt and responsible actions.

We always hesitate to call retraction statements “models” of anything, but this one comes pretty close to being a paragon.

Psychology researchers in Germany and Scotland have retracted their 2018 paper in Acta Psychologica after learning of a coding error in their work that proved fatal to the results. That much is routine. Remarkable in this case is how the authors lay out what happened next.

The study, “Auditory (dis-)fluency triggers sequential processing adjustments”:

investigated as to whether the challenge to understand speech signals in normal-hearing subjects would also lead to sequential processing adjustments if the processing fluency of the respective auditory signals changes from trial to trial. To that end, we used spoken number words (one to nine) that were either presented with high (clean speech) or low perceptual fluency (i.e., vocoded speech as used in cochlear implants-Experiment 1; speech embedded in multi-speaker babble noise as typically found in bars-Experiment 2). Participants had to judge the spoken number words as smaller or larger than five. Results show that the fluency effect (performance difference between high and low perceptual fluency) in both experiments was smaller following disfluent words. Thus, if it’s hard to understand, you try harder.

Read the rest of this discussion piece
