ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)




A fascinating history of clinical trials from their beginnings in Babylon – Medium (Prof. Adrian Esterman | April 2020)

Posted by Admin on June 19, 2020

Clinical trials are required to test treatments for COVID-19. Take a quick trip over 2,000 years and discover how our current understanding of clinical trials was formed.

Clinical trials
Clinical trials are currently being undertaken to test treatments and vaccines for COVID-19. There are many different types of clinical trial design, from a simple before-and-after study (measure something in patients, give them an intervention such as a drug, then measure them again) to the randomized controlled trial, the gold standard of clinical trial designs.

Planning to give a talk about clinical trials and want to give it some historical context? This is a great resource to use.

Here is a light-hearted history of how clinical trials developed over the last two thousand years, including the first recorded instances of control groups, the use of placebos and randomization. It will give you a better understanding of how clinical trials are designed.
600 BC Daniel and his kosher diet.

Surprisingly, the first recorded clinical trial is found in the Bible, in Book One of Daniel, and took place in Babylon. In 600 BC, some captive children of the Israelite royal family and nobility were taken into King Nebuchadnezzar's service in Babylon, among them Daniel and three of his friends. Supposedly, these were golden young men: physically perfect, handsome, intelligent, knowledgeable and well qualified to serve in the king's palace.

Read the rest of this discussion piece

Is N-Hacking Ever OK? A simulation-based study (Papers: Pamela Reinagel | December 2019)

Posted by Admin on June 10, 2020


Another point worth making here is linked to rejecting the notion that p < 0.05 is inherently important (significant): if you need to N-hack to reach that threshold, the effect probably is not important in any case.

After an experiment has been completed and analyzed, a trend may be observed that is “not quite significant”. Sometimes in this situation, researchers incrementally grow their sample size N in an effort to achieve statistical significance. This is especially tempting in situations when samples are very costly or time-consuming to collect, such that collecting an entirely new sample larger than N (the statistically sanctioned alternative) would be prohibitive. Such post-hoc sampling or “N-hacking” is condemned, however, because it leads to an excess of false positive results. Here Monte-Carlo simulations are used to show why and how incremental sampling causes false positives, but also to challenge the claim that it necessarily produces alarmingly high false positive rates. In a parameter regime that would be representative of practice in many research fields, simulations show that the inflation of the false positive rate is modest and easily bounded. But the effect on false positive rate is only half the story. What many researchers really want to know is the effect N-hacking would have on the likelihood that a positive result is a real effect that will be replicable. This question has not been considered in the reproducibility literature. The answer depends on the effect size and the prior probability of an effect. Although in practice these values are not known, simulations show that for a wide range of values, the positive predictive value (PPV) of results obtained by N-hacking is in fact higher than that of non-incremented experiments of the same sample size and statistical power. This is because the increase in false positives is more than offset by the increase in true positives. Therefore in many situations, adding a few samples to shore up a nearly-significant result is in fact statistically beneficial. It is true that uncorrected N-hacking elevates false positives, but in some common situations this does not reduce PPV, which has not been shown previously. 
In conclusion, if samples are added after an initial hypothesis test this should be disclosed, and if a false positive rate is stated it should be corrected. But, contrary to widespread belief, collecting additional samples to resolve a borderline P value is not invalid, and can confer previously unappreciated advantages for efficiency and positive predictive value.
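The abstract above rests on Monte-Carlo simulations of "N-hacking" under a true null hypothesis. A minimal sketch of that style of simulation, assuming illustrative parameters that do not come from the paper itself (initial N = 30, 10 extra samples, a "nearly significant" window of 0.05 < p < 0.10, and a normal approximation to the one-sample t-test):

```python
import math
import random
import statistics

def p_value(sample):
    """Two-sided p-value for the hypothesis mean == 0.

    Uses a normal approximation to the one-sample t-test,
    adequate for the sample sizes simulated here.
    """
    n = len(sample)
    t = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(n))
    return math.erfc(abs(t) / math.sqrt(2))  # = 2 * (1 - Phi(|t|))

def simulate(reps=20000, n0=30, n_extra=10, alpha=0.05, near=0.10, seed=1):
    """Compare false positive rates of a fixed-N test vs. incremental sampling.

    Every replicate draws from N(0, 1), so the null is true and any
    "significant" result is a false positive.
    """
    rng = random.Random(seed)
    fixed_pos = hacked_pos = 0
    for _ in range(reps):
        sample = [rng.gauss(0, 1) for _ in range(n0)]
        p = p_value(sample)
        if p < alpha:
            fixed_pos += 1   # significant on the planned sample: both arms count it
            hacked_pos += 1
        elif p < near:
            # "N-hacking": nearly significant, so add a few samples and retest
            sample += [rng.gauss(0, 1) for _ in range(n_extra)]
            if p_value(sample) < alpha:
                hacked_pos += 1
    return fixed_pos / reps, hacked_pos / reps

fpr_fixed, fpr_hacked = simulate()
print(f"fixed-N false positive rate:  {fpr_fixed:.3f}")
print(f"N-hacked false positive rate: {fpr_hacked:.3f}")
```

Under these assumed settings the incremented procedure's false positive rate exceeds the nominal 5% only modestly, consistent with the abstract's claim that the inflation is "modest and easily bounded". Reproducing the paper's PPV argument would additionally require simulating replicates with a true effect and a prior probability of an effect, which this sketch does not attempt.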

Reinagel, P. (2019) Is N-Hacking Ever OK? A simulation-based study. bioRxiv 2019.12.12.868489; doi:

(US) Authors questioning papers at nearly two dozen journals in wake of spider paper retraction – Retraction Watch (Adam Marcus | January 2020)

Posted by Admin on June 5, 2020

This case provides us with an opportunity to share two reflections: 1) be careful when it comes to the reuse of data without explanation; and 2) junior academics need to check data provided by more experienced colleagues. In this reported case, the colleague suspected of data manipulation has moved on to collecting data on spiders in Northern Australia.

The retraction earlier this month of a 2016 paper in the American Naturalist by Kate Laskowski and Jonathan Pruitt turns out to be the tip of what is potentially a very large iceberg.

This week, the researchers have retracted a second paper, this one in the Proceedings of the Royal Society B, for the same reasons — duplicated data without a reasonable explanation.

Dan Bolnick, the editor of the American Naturalist, tells us:

After learning about the problems in the [2016] data set, I asked an associate editor to look at data sets in other publications in the American Naturalist [on which Pruitt was a co-author] and we have indeed found what appears to be repeated data that don’t seem to have a biological explanation.

He isn’t alone. Bolnick added:

I am aware that there are concerns affecting a large number of papers at multiple other journals, and at this point I’m aware of co-authors of his who have contacted editors at 23 journals as of January 26. 


Read the rest of this discussion piece

Transatlantic editorial: Institutional investigations of ethically flawed reports in cardiothoracic surgery journals (Papers: Robert M Sade, et al | January 2020)

Posted by Admin on May 21, 2020

A growing body of evidence suggests that research misconduct has been rising steadily over the last few decades. The mass media have sensationalized high profile cases of scientific fraud. Several surveys have attempted to define the incidence of scientific misconduct, but the available evidence is unreliable owing mostly to underreporting of misconduct [1]. An indirect indication of the extent of research misconduct is the incidence of article retractions from the scientific literature, which is tracked by the Retraction Watch database. Among science journals the number of retractions rose from 114 in the 5-year period 1990–1994 to 10 738 in the corresponding period 2010–2014, a 94-fold increase [2]. A well-known survey of early- and mid-career scientists found that 33% said they had engaged in serious misconduct in the previous 3 years [3]. The apparent growth in misconduct may be merely an artefact of increased focus on the issue or it may be real, but the question of a recent surge is not as important as the fact that misconduct is widespread and undermines the foundation of science, which is built on honest and transparent investigation.

Ethics, Health policy, Professional affairs

Sade, R. M., Rylski, B., Swain, J. A., Entwistle, J. W. C., Ceppa, D. P., & Members of the Cardiothoracic Ethics Forum (2020) Transatlantic editorial: Institutional investigations of ethically flawed reports in cardiothoracic surgery journals. European Journal of Cardio-Thoracic Surgery, 57(4), 617–619.
Publisher (Open Access):