ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library




Personality and fatal diseases: Revisiting a scientific scandal (Papers: Anthony J Pelosi | February 2019)

Posted by Admin on October 14, 2019

During the 1980s and 1990s, Hans J Eysenck conducted a programme of research into the causes, prevention and treatment of fatal diseases in collaboration with one of his protégés, Ronald Grossarth-Maticek. This led to what must be the most astonishing series of findings ever published in the peer-reviewed scientific literature, with effect sizes that have never otherwise been encountered in biomedical research. This article outlines just some of these reported findings and signposts readers to extremely serious scientific and ethical criticisms that were published almost three decades ago. Confidential internal documents that have become available as a result of litigation against tobacco companies provide additional insights into this work. It is suggested that this research programme has led to one of the worst scientific scandals of all time. A call is made for a long overdue formal inquiry.

cancer epidemiology, personality and cancer, personality and heart disease, research ethics, research misconduct

Pelosi, A. J. (2019). Personality and fatal diseases: Revisiting a scientific scandal. Journal of Health Psychology, 24(4), 421–439.
Publisher (Open Access):

What’s next for Registered Reports? – Nature (Chris Chambers | September 2019)

Posted by Admin on September 19, 2019

Reviewing and accepting study plans before results are known can counter perverse incentives. Chris Chambers sets out three ways to improve the approach.

What part of a research study — hypotheses, methods, results, or discussion — should remain beyond a scientist’s control? The answer, of course, is the results: the part that matters most for publishing in prestigious journals and advancing careers. This paradox means that the careful scepticism required to avoid massaging data or skewing analysis is pitted against the drive to identify eye-catching outcomes. Unbiased, negative and complicated findings lose out to cherry-picked highlights that can bring prominent articles, grant funding, promotion and esteem.

The ‘results paradox’ is a chief cause of unreliable science. Negative, or null, results go unpublished, leading other researchers into unwittingly redundant studies. Ambiguous or otherwise ‘unattractive’ results are airbrushed (consciously or not) into publishable false positives, spurring follow-up research and theories that are bound to collapse.

Clearly, we need to change how we evaluate and publish research. For the past six years, I have championed Registered Reports (RRs), a type of research article that is radically different from conventional papers. The 30 or so journals that were early adopters have together published some 200 RRs, and more than 200 journals are now accepting submissions in this format (see ‘Rapid rise’). When it launched in 2017, Nature Human Behaviour became the first of the Nature journals to join this group. In July, it published its first two such reports. With RRs on the rise, now is a good time to take stock of their potential and limitations.

Read the rest of this discussion piece

European universities dismal at reporting results of clinical trials – Nature (Nic Fleming | April 2019)

Posted by Admin on September 11, 2019

Analysis of 30 leading institutions found that just 17% of study results had been posted online as required by EU rules.

Failing to post the results of a clinical trial is not only a technical breach: it is a waste of resources, places an unwarranted burden on volunteers, and is a public health issue. Does your institution follow up to check whether results have been reported? Is action taken if they haven’t?

Many of Europe’s major research universities are ignoring rules that require them to make public the results of clinical trials.

A report published on 30 April found that the results of only 162 of 940 clinical trials (17%) that were due to be published by 1 April had been posted on the European Union’s trials register. The 30 universities surveyed are those that sponsor the most clinical trials in the EU. Fourteen of these institutions had failed to publish a single results summary.

If three high-performing UK universities are excluded from the figures, the results of just 7% of the trials were made public on time. Campaigners say the resulting lack of transparency harms patients by undermining the efforts of doctors and health authorities to provide the best treatments, slows medical progress and wastes public funds.

Read the rest of this discussion piece

Why we shouldn’t take peer review as the ‘gold standard’ – The Washington Post (Paul D. Thacker and Jon Tennant | August 2019)

Posted by Admin on September 10, 2019

It’s too easy for bad actors to exploit the process and mislead the public

In July, India’s government dismissed a research paper finding that the country’s economic growth had been overestimated, saying the paper had not been “peer reviewed.” At a conference for plastics engineers, an economist from an industry group dismissed environmental concerns about plastics by claiming that some of the underlying research was “not peer reviewed.” And the Trump administration — not exactly known for its fealty to science — attempted to reject a climate change report by stating, incorrectly, that it lacked peer review.

Researchers commonly refer to peer review as the “gold standard,” which makes it seem as if a peer-reviewed paper — one sent by journal editors to experts in the field who assess and critique it before publication — must be legitimate, and one that’s not reviewed must be untrustworthy. But peer review, a practice dating to the 17th century, is neither golden nor standardized. Studies have shown that journal editors prefer reviewers of the same gender, that women are underrepresented in the peer review process, and that reviewers tend to be influenced by demographic factors like the author’s gender or institutional affiliation. Shoddy work often makes it past peer reviewers, while excellent research has been shot down. Peer reviewers often fail to detect bad research, conflicts of interest and corporate ghostwriting.

Meanwhile, bad actors exploit the process for professional or financial gain, leveraging peer review to mislead decision-makers. For instance, the National Football League used the words “peer review” to fend off criticism of studies by the Mild Traumatic Brain Injury Committee, a task force the league founded in 1994, which found little long-term harm from sport-induced brain injuries in players. But the New York Times later discovered that the scientists involved had omitted more than 100 diagnosed concussions from their studies. What’s more, the NFL’s claim that the research had been rigorously vetted ignored that the process was incredibly contentious: Some reviewers were adamant that the papers should not have been published at all.

Read the rest of this discussion piece