Gazing into the Abyss of P-Hacking: HARKing vs. Optional Stopping – R-Bloggers (Angelika Stefan | November 2019)

 




Almost all researchers have experienced the tingling suspense that arises right before they take a look at long-awaited data: Will the data support their favored hypothesis? Will they yield interesting or even groundbreaking results? In a perfect world (especially one without publication bias), the cause of this suspense would be nothing but scientific curiosity. However, the world, and specifically the incentive system in science, is not perfect. A lot of pressure rests on researchers to produce statistically significant results. For many researchers, statistical significance is the cornerstone of their academic career, so non-significant results in an important study can not only call their scientific convictions into question but also dash their hopes of professional advancement (although, fortunately, things are changing for the better).

Now, what does a researcher do when confronted with messy, non-significant results? According to several much-cited studies (for example, John et al., 2012; Simmons et al., 2011), a common reaction is to start sampling again (and again, and again, …) in the hope that a somewhat larger sample will push the result to significance. Another reaction is to run hypothesis test after hypothesis test on the existing sample until at least one of them comes out significant (see, for example, Simmons et al., 2011; Kerr, 1998). These practices, along with some others, are commonly known as p-hacking, because they are designed to drag the famous p-value below the .05 mark that conventionally indicates statistical significance. Undisputedly, p-hacking works (for a demonstration, try out the p-hacker app). The two questions we want to answer in this blog post are: How does it work, and why is it bad for science?
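To make the first of these practices concrete, here is a minimal simulation sketch of optional stopping: testing repeatedly while adding data until the p-value dips below .05. It is not taken from the original post or the p-hacker app; the function name, batch sizes, and replication count are illustrative assumptions.

```python
# Illustrative sketch (assumed parameters, not from the original post):
# keep adding observations to a sample drawn under the null hypothesis
# (no true effect) and re-test after every batch, stopping as soon as p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2019)

def optional_stopping_significant(n_start=20, n_max=200, batch=10, alpha=0.05):
    """Return True if a one-sample t-test ever falls below alpha while sampling."""
    data = rng.normal(0.0, 1.0, n_start)          # the null is true: the mean really is 0
    while True:
        p = stats.ttest_1samp(data, popmean=0.0).pvalue
        if p < alpha:
            return True                            # "significant" result: stop and report
        if len(data) >= n_max:
            return False                           # give up at the sample-size ceiling
        data = np.append(data, rng.normal(0.0, 1.0, batch))  # sample some more and retry

# Across many replications, the rate of "significant" findings far exceeds the
# nominal 5%, even though there is never a real effect.
false_positive_rate = np.mean([optional_stopping_significant() for _ in range(2000)])
print(f"False-positive rate with optional stopping: {false_positive_rate:.2f}")
```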

As many people may have heard, p-hacking works because it exploits a process called alpha error accumulation, which is covered in most introductory statistics classes (but also easily forgotten again). Basically, alpha error accumulation means that the more hypothesis tests one conducts, the higher the probability of making a wrong test decision at least once. Specifically, this wrong test decision is a false positive decision, or alpha error: proclaiming the existence of an effect although, in fact, there is none. In statistical terms, an alpha error occurs when a test yields a significant result although the null hypothesis ("There is no effect") is true in the population. This means that p-hacking leads to the publication of an inflated rate of false positive results, that is, studies that claim to have found an effect although their result is merely due to the randomness of the data. Such studies are unlikely to replicate.
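For independent tests this accumulation can be made explicit: with k tests each run at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^k. The short sketch below simply evaluates that formula for a few values of k; the independence assumption is a simplification not stated in the excerpt.

```python
# Alpha error accumulation for k independent tests, each at level alpha:
# P(at least one false positive) = 1 - (1 - alpha)^k.
# Illustrative numbers only; real test batteries are rarely fully independent.
alpha = 0.05
for k in (1, 5, 10, 20, 60):
    familywise = 1 - (1 - alpha) ** k
    print(f"{k:>2} tests -> P(at least one alpha error) = {familywise:.2f}")
# Output: 0.05, 0.23, 0.40, 0.64, 0.95 -- the false-positive risk grows rapidly.
```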

Read the rest of this discussion piece


