Helping every scientist to improve is more effective than ferreting out a few frauds.
Most scientists reading this probably assume that their research-integrity office has nothing to do with them. It deals with people who cheat, right? Well, it’s not that simple: cheaters are relatively rare, but plenty of people produce imperfect, imprecise or uninterpretable results. If the quality of every scientist’s work could be made just a little better, then the aggregate impact on research integrity would be enormous.
We are huge fans of the idea that institutions should devote more resources and effort to reflective practice and research culture, rather than to enforcement and compliance. We should continually ask ourselves, “Is this resource or investment about improving practice, or about catching cheats?” That’s why we’re not fans of Australian RIAs having a mandatory obligation to report misconduct. Anything that might dissuade researchers from seeking advice from their local RIA is counter to what we should be trying to achieve. We want researchers to ask their RIA the collegiate question, “Is this good practice?”, not to worry whether the RIA will report them.
Over the past two years, some 20 institutions in the United Kingdom have joined the UK Reproducibility Network (UKRN), a consortium that promotes best practice in research, and have created senior administrative roles to improve research and research integrity. I have taken on this job (on top of my research evaluating stroke treatments) at the University of Edinburgh. Since then, I’ve focused on research improvement rather than researcher accountability. Of course, deliberate fraud should be punished, but a focus on investigating individuals discourages people from acknowledging mistakes, and means that opportunities for systems to improve are neglected.
At the University of Edinburgh, we run audits as part of projects to shrink bias in animal research, speed up publication and improve clinical-trial reporting. These are not the metrics that most researchers are used to. Many people are initially wary of yet another ‘external imposition’, but when they see that this is about promoting our own community’s standards — and that there are no extra forms to fill in — they usually welcome the shift in institutional focus.