In their discussion of the features of responsible research practice, national research integrity codes such as the Australian Code for the Responsible Conduct of Research mention honesty and integrity in research outputs but provide little further guidance. This discussion piece by Geoff Cumming (La Trobe University) explores the issue further, albeit within the frame of statistical significance.
A false positive is a claim that an effect exists when in actuality it doesn’t. No one knows what proportion of published papers contain such incorrect or overstated results, but there are signs that the proportion is not small.
The epidemiologist John Ioannidis gave an influential explanation for this phenomenon in a famous 2005 paper, provocatively titled "Why Most Published Research Findings Are False". One of the reasons Ioannidis gave for so many false results has come to be called "p-hacking", which arises from the pressure researchers feel to achieve statistical significance.
What is statistical significance?
To draw conclusions from data, researchers usually rely on significance testing. In simple terms, this means calculating the "p value": the probability of obtaining results at least as extreme as ours if there really is no effect. If the p value is sufficiently small (conventionally below 0.05), the result is declared statistically significant.
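The two ideas above can be illustrated with a small simulation. The sketch below (not from the article; a hypothetical illustration using a simple two-sided z-test on simulated noise) shows that honest testing produces false positives at roughly the 5% rate the threshold implies, while a p-hacking strategy of testing several outcomes and reporting only the smallest p value inflates that rate several-fold.

```python
import math
import random

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for a z-test: the probability of a sample mean
    at least this far from mu0 if the true mean really is mu0 (no effect)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

random.seed(1)
ALPHA, N_EXPERIMENTS, N_PER_GROUP = 0.05, 2000, 30

# Every "experiment" samples pure noise: there is truly no effect,
# so any significant result is a false positive.
honest = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(N_PER_GROUP)]) < ALPHA
    for _ in range(N_EXPERIMENTS)
)

# A simple form of p-hacking: measure five unrelated noise outcomes
# per experiment and report only the smallest p value.
hacked = sum(
    min(z_test_p_value([random.gauss(0, 1) for _ in range(N_PER_GROUP)])
        for _ in range(5)) < ALPHA
    for _ in range(N_EXPERIMENTS)
)

print(f"false-positive rate, one pre-planned test: {honest / N_EXPERIMENTS:.3f}")
print(f"false-positive rate, best of five tests:   {hacked / N_EXPERIMENTS:.3f}")
```

With one pre-planned test the false-positive rate hovers near 0.05; picking the best of five independent tests pushes it toward 1 - 0.95^5, roughly 0.23, even though no real effect exists anywhere.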
One reason so many scientific studies may be wrong – The Conversation (Geoff Cumming October 2016)
Posted by saviorteam in Research Integrity on November 28, 2016
Keywords: Analysis, Breaches, Guidance, Honesty, News, Research integrity, Research Misconduct, Research results, Researcher responsibilities