
Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


Is N-Hacking Ever OK? A simulation-based study (Papers: Pamela Reinagel | December 2019)

Posted by saviorteam in Research Integrity on June 10, 2020
Keywords: Analysis, Publication ethics, Research integrity, Research results, Researcher responsibilities

The Linked Original Item was Posted On December 19, 2019


Abstract

A further point worth making here relates to rejecting the notion that p < 0.05 is itself a marker of importance (significance): if a result needs n-hacking to cross that threshold, it probably was not important in any case.

After an experiment has been completed and analyzed, a trend may be observed that is “not quite significant”. Sometimes in this situation, researchers incrementally grow their sample size N in an effort to achieve statistical significance. This is especially tempting in situations when samples are very costly or time-consuming to collect, such that collecting an entirely new sample larger than N (the statistically sanctioned alternative) would be prohibitive. Such post-hoc sampling or “N-hacking” is condemned, however, because it leads to an excess of false positive results. Here Monte-Carlo simulations are used to show why and how incremental sampling causes false positives, but also to challenge the claim that it necessarily produces alarmingly high false positive rates. In a parameter regime that would be representative of practice in many research fields, simulations show that the inflation of the false positive rate is modest and easily bounded. But the effect on false positive rate is only half the story. What many researchers really want to know is the effect N-hacking would have on the likelihood that a positive result is a real effect that will be replicable. This question has not been considered in the reproducibility literature. The answer depends on the effect size and the prior probability of an effect. Although in practice these values are not known, simulations show that for a wide range of values, the positive predictive value (PPV) of results obtained by N-hacking is in fact higher than that of non-incremented experiments of the same sample size and statistical power. This is because the increase in false positives is more than offset by the increase in true positives. Therefore in many situations, adding a few samples to shore up a nearly-significant result is in fact statistically beneficial. It is true that uncorrected N-hacking elevates false positives, but in some common situations this does not reduce PPV, which has not been shown previously. 
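The kind of Monte-Carlo simulation the abstract describes can be sketched roughly as follows. This is an illustrative sketch, not the paper's code: the parameter choices (an initial n of 16 per group, increments of 4 up to a cap of 32, and a "promising" window of 0.05 ≤ p < 0.25 that triggers further sampling) are assumptions chosen for the example, not the paper's actual settings.

```python
import math
import numpy as np

# Illustrative sketch only -- not the paper's code. The parameters (n_init,
# n_add, n_max, the "promising" window) are assumed values for this example.
rng = np.random.default_rng(0)

def two_sided_p(a, b):
    """Two-sided z-test p-value for equal means, known sigma = 1."""
    z = (np.mean(a) - np.mean(b)) / math.sqrt(2.0 / len(a))
    return math.erfc(abs(z) / math.sqrt(2.0))

def null_experiment(n_init=16, n_add=4, n_max=32, alpha=0.05, window=0.20):
    """One experiment with no true effect. If alpha <= p < alpha + window,
    add n_add samples per group and retest (the N-hacking policy)."""
    a = rng.normal(size=n_init)
    b = rng.normal(size=n_init)
    while True:
        p = two_sided_p(a, b)
        if p < alpha:
            return True                    # false positive declared
        if p >= alpha + window or len(a) >= n_max:
            return False                   # abandoned as non-significant
        a = np.append(a, rng.normal(size=n_add))
        b = np.append(b, rng.normal(size=n_add))

trials = 10_000
fpr = sum(null_experiment() for _ in range(trials)) / trials
print(f"N-hacked false-positive rate: {fpr:.3f} (nominal alpha = 0.05)")
```

Because every experiment here is run under the null, any excess of the observed rate over 0.05 is pure inflation from the incremental retests; the abstract's point is that with a bounded n_max this excess stays modest rather than alarming.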
In conclusion, if samples are added after an initial hypothesis test this should be disclosed, and if a false positive rate is stated it should be corrected. But, contrary to widespread belief, collecting additional samples to resolve a borderline P value is not invalid, and can confer previously unappreciated advantages for efficiency and positive predictive value.
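The abstract's PPV argument can be unpacked with Bayes' rule: PPV = power·prior / (power·prior + α·(1 − prior)). The arithmetic below uses purely hypothetical numbers (a prior of 0.3, and an N-hacked design whose gain in power outpaces its inflation of α), not values from the paper, to show how a rise in both the true-positive and false-positive rates can still raise PPV:

```python
# Illustrative arithmetic only -- the power/alpha/prior values are
# hypothetical assumptions, not results from the paper.
def ppv(power, alpha, prior):
    """Positive predictive value via Bayes' rule."""
    return power * prior / (power * prior + alpha * (1 - prior))

prior = 0.3                                          # assumed P(real effect)
fixed = ppv(power=0.50, alpha=0.050, prior=prior)    # fixed-N design
hacked = ppv(power=0.70, alpha=0.065, prior=prior)   # N-hacked design
print(f"fixed-N PPV:  {fixed:.3f}")   # 0.811
print(f"N-hacked PPV: {hacked:.3f}")  # 0.822
```

Whether PPV actually rises depends on the same quantities the abstract names (effect size and prior): if the extra power bought by the added samples is small relative to the extra false positives, the comparison can go the other way.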

Reinagel, P. (2019) Is N-Hacking Ever OK? A simulation-based study. bioRxiv 2019.12.12.868489; doi: https://doi.org/10.1101/2019.12.12.868489
