Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)

Resource Library

Meta-analysis study indicates we publish more positive results – Ars Technica (John Timmer | December 2019)

Posted by Admin on January 13, 2020
 

Meta-analyses will only produce more reliable results if the studies are good.

While science as a whole has produced remarkably reliable answers to a lot of questions, it does so despite the fact that any individual study may not be reliable. Issues like small errors on the part of researchers, unidentified problems with materials or equipment, or the tendency to publish positive answers can alter the results of a single paper. But collectively, through multiple studies, science as a whole inches towards an understanding of the underlying reality.

Similar findings have been reported before, but it is important to rearticulate the value of negative results to science and practice. This speaks to poor research culture and training. University education, and even high school and primary school, do not acknowledge that failure is part of discovery. The rewards for ‘success’ are high, and the resulting temptation for students can lead to research misconduct.

A meta-analysis is a way to formalize that process. It takes the results of multiple studies and combines them, increasing the statistical power of the analysis. This may cause exciting results seen in a few small studies to vanish into statistical noise, or it can tease out a weak effect that’s completely lost in more limited studies.
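To make the pooling step concrete, here is a minimal sketch (not from the article) of a fixed-effect meta-analysis: each study’s estimate is weighted by the inverse of its variance, so larger, more precise studies dominate the pooled result. The effect sizes and standard errors below are invented purely for illustration.

```python
import math

# Hypothetical per-study effect estimates and standard errors (illustrative only).
effects = [0.42, 0.15, 0.30, 0.05, 0.22]
std_errs = [0.20, 0.18, 0.25, 0.12, 0.15]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance,
# so precise studies count more toward the pooled estimate.
weights = [1 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# A rough 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The combined standard error is smaller than that of any single study, which is the extra statistical power the article refers to.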

But a meta-analysis only works its magic if the underlying data is solid. And a new study that looks at multiple meta-analyses (a meta-meta-analysis?) suggests that one of those factors—our tendency to publish results that support hypotheses—is making the underlying data less solid than we like.

Publication bias

It’s possible for publication bias to be a form of research misconduct. If a researcher is convinced of their hypothesis, they might actively avoid publishing any results that would undercut their own ideas. But there are plenty of other ways for publication bias to set in. Researchers who find a weak effect might hold off on publishing in the hope that further research would be more convincing. Journals also have a tendency to favor publishing positive results (those where a hypothesis is confirmed) and to avoid publishing studies that don’t see any effect at all. Researchers, being aware of this, might adjust the publications they submit accordingly.
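As a rough illustration of the problem (our own sketch, not the article’s analysis), the simulation below generates many small studies of a weak true effect and then keeps only the significant, hypothesis-confirming ones, the way a biased literature would. The study design, sample size, and effect size are assumptions chosen for illustration.

```python
import math
import random
import statistics

random.seed(1)

def one_study(true_effect, n=20):
    """Simulate one small two-group study; return (estimated effect, significant?)."""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = math.sqrt(statistics.variance(treated) / n + statistics.variance(control) / n)
    return diff, abs(diff / se) > 1.96  # crude z-test at roughly p < .05

# A weak true effect of 0.2 standard deviations, studied many times.
studies = [one_study(true_effect=0.2) for _ in range(2000)]

everything = [d for d, _ in studies]
published = [d for d, sig in studies if sig and d > 0]  # only "positive" results survive

print(f"mean estimate, all studies:               {statistics.mean(everything):.2f}")
print(f"mean estimate, significant-positive only: {statistics.mean(published):.2f}")
# Pooling only the published studies overstates the effect; a meta-analysis
# inherits whatever bias shaped the literature it summarises.
```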


Read the rest of this discussion piece

(Queensland, Australia) Ex-judge to investigate controversial marine research – Times Higher Education (John Ross | January 2020)

Posted by Admin on January 11, 2020
 

An Australian university has launched an investigation into the research record of a discredited scientist it educated, as findings by academics who supervised her doctoral training are challenged.

James Cook University said it has appointed an external panel to look for evidence of misconduct in the research conducted by marine biologist Oona Lönnstedt between 2010 and 2014, when she was undertaking PhD studies at the Queensland institution.

The university said the panel’s as yet unidentified members include “eminent academics with expertise in field work, marine science and ethics” and a former federal court judge.

Read the rest of this news story

(China) Academic misconduct standards to be tightened – China Daily Global (Li Yan | October 2019)

Posted by Admin on January 2, 2020
 

China has strengthened its fight against academic misconduct by publishing new standards defining plagiarism, fabrication, falsification and other violations of research integrity. Experts believe the clarity will make it easier to discipline researchers who violate the rules.

The document, issued by the Ministry of Science and Technology, has been adopted by 20 government agencies ranging from China’s Supreme People’s Court to the Chinese Academy of Sciences.

Depending on the severity of the offense, punishments can range from canceling a project’s funding to revoking the offender’s titles and permanently banning them from promotion or other research positions. Institutes that connive with or shield violators will also be punished with budget cuts or judicial action.

Read the rest of this news item

Gazing into the Abyss of P-Hacking: HARKing vs. Optional Stopping – R-Bloggers (Angelika Stefan | November 2019)

Posted by Admin on December 26, 2019
 

Almost all researchers have experienced the tingling feeling of suspense that arises right before they take a look at long-awaited data: Will they support their favored hypothesis? Will they yield interesting or even groundbreaking results? In a perfect world (especially one without publication bias), the cause of this suspense should be nothing else but scientific curiosity. However, the world, and specifically the incentive system in science, is not perfect. A lot of pressure rests on researchers to produce statistically significant results. For many researchers, statistical significance is the cornerstone of their academic career, so non-significant results in an important study can not only call their scientific convictions into question but also crush their hopes of professional promotion (although, fortunately, things are changing for the better).

Now, what does a researcher do when confronted with messy, non-significant results? According to several much-cited studies (for example John et al., 2012; Simmons et al., 2011), a common reaction is to start sampling again (and again, and again, …) in the hope that a somewhat larger sample size can boost significance. Another reaction is to wildly conduct hypothesis tests on the existing sample until at least one of them becomes significant (see, for example, Simmons et al., 2011; Kerr, 1998). These practices, along with some others, are commonly known as p-hacking, because they are designed to drag the famous p-value right below the mark of .05, which usually indicates statistical significance. Undisputedly, p-hacking works (for a demonstration try out the p-hacker app). The two questions we want to answer in this blog post are: How does it work and why is that bad for science?
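A minimal simulation of the “sample again and again” strategy (optional stopping) is sketched below. The starting sample size, batch size, and stopping rule are arbitrary assumptions rather than anything from the post, but the pattern holds generally: even when there is no true effect at all, re-testing after every extra batch of data pushes the false-positive rate well above the nominal 5%.

```python
import math
import random
import statistics

random.seed(2)

def optional_stopping(start_n=20, step=10, max_n=200):
    """Keep adding observations and re-testing until p < .05 or we give up.
    The null hypothesis (true mean = 0) holds throughout."""
    data = [random.gauss(0.0, 1.0) for _ in range(start_n)]
    while True:
        se = statistics.stdev(data) / math.sqrt(len(data))
        z = statistics.mean(data) / se
        if abs(z) > 1.96:
            return True          # "significant" purely by chance
        if len(data) >= max_n:
            return False
        data += [random.gauss(0.0, 1.0) for _ in range(step)]

runs = 2000
false_positives = sum(optional_stopping() for _ in range(runs))
print(f"false-positive rate with optional stopping: {false_positives / runs:.2%}")
# A single fixed-n test would be wrong about 5% of the time; peeking after
# every extra batch of data inflates the error rate well beyond that.
```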

As many people may have heard, p-hacking works because it exploits a process called alpha error accumulation which is covered in most introductory statistics classes (but also easily forgotten again). Basically, alpha error accumulation means that as one conducts more and more hypothesis tests, the probability increases that one makes a wrong test decision at least once. Specifically, this wrong test decision is a false positive decision or alpha error, which means that you proclaim the existence of an effect although, in fact, there is none. Speaking in statistical terms, an alpha error occurs when the test yields a significant result although the null hypothesis (“There is no effect”) is true in the population. This means that p-hacking leads to the publication of an increased rate of false positive results, that is, studies that claim to have found an effect although, in fact, their result is just due to the randomness of the data. Such studies will never replicate.
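For a rough sense of scale, here is the standard textbook calculation rather than anything from the post itself: if each of k independent tests is run at α = .05, the chance of at least one false positive is 1 − (1 − .05)^k.

```python
# Probability of at least one false positive across k independent tests,
# each run at alpha = .05.
alpha = 0.05
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> {1 - (1 - alpha) ** k:.2f}")
# 1 test -> 0.05, 5 tests -> 0.23, 10 tests -> 0.40, 20 tests -> 0.64
```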

Read the rest of this discussion piece
