Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS), ACN 101321555

Research Ethics Monthly
We would all benefit from more research integrity research

 

Paul M Taylor1 and Daniel P Barr2

1Director, Research Integrity, Governance and Systems
Research and Innovation, RMIT University (paul.taylor@rmit.edu.au)

2Acting Director, Office for Research Ethics and Integrity
Research, Innovation and Commercialisation, The University of Melbourne (dpbarr@unimelb.edu.au)

We need more research into research integrity, research misconduct and peer review. This is not a controversial statement, and few would argue against it. So, this is a short blog post then…

It’s worth thinking about why more research into these areas is important and needed. The research already reported in the literature is valuable and has produced some fascinating insights: we see differences in attitudes across countries and career stages, and evidence about the impacts of research misconduct. Like all good research, the material already in the literature prompts us to ask more questions than it answers.

But would the same surveys about the incidence of research misconduct or attitudes to research integrity reveal the same results for humanities and social science researchers as for those in STEM disciplines? Are biomedical researchers in Australia more or less likely than those in the UK to commit research misconduct? Do RCR training packages help prevent misconduct? Is this even what we want RCR training to do? How do we best design and implement research integrity policies? Are principles really better than rules in this context? There’s a handful of grant applications right there!

Perhaps a research integrity ecosystem view would help. What are the challenges that some of the key stakeholders in research integrity are facing and how could research help?

We can start close to home by thinking about the role of institutions in research integrity. The most obvious role of institutions in this area is responding to allegations of research misconduct. This role is entirely reasonable because of the nature of the relationship between researchers and their workplaces – employment contracts can compel people to provide evidence, and institutions may have better access to the data and records that can make the difference in allegations being properly resolved. Certainly, compared with other players, institutions are in the best position to consider concerns about the integrity of research. We know, though, that institutions do not respond uniformly. Our friends at COPE have talked about the difficulty publishers face in sometimes even identifying where to direct concerns. What’s the opportunity for research here? Analysis of institutions to identify the traits of ‘good responders’ would help those institutions trying to improve their operations in this area. How critical is the role of senior leadership? What are the impacts, at an institutional level, of a high-profile or public misconduct case? How does this impact differ for highly ranked, ‘too big to fall’ institutions compared with younger organisations? What factors make people think an institution produces responsible and trustworthy research (if the institution plays that much of a role at all)?

This leads to a second and equally important role for institutions: promoting the importance of responsible and ethical research. This role extends well beyond compliance (although compliance is obviously important). The products of research, as many and varied as they are, must be trustworthy because of the positive impacts that we all hope research will have. So, if an institution decided it wanted to revamp its research governance framework or Code of Conduct for Research, what should it focus on? What evidence do we have, in the research context, to support the idea of Codes of Conduct? Are high-level, principles-based documents that cover most research disciplines useful, or are discipline-focussed, rules-based governance structures more effective? How do institutions best engender a strong culture of research integrity?

The role of training here is intuitive and probably right, but can we show that it makes a difference and results in more trustworthy, higher quality research, or does it just make us feel better? Publishers and funders, too, could benefit from the insights research would reveal – for both of these players, perhaps by better understanding the pitfalls of peer review, or by developing rigorous alternative models. Research into peer review is already happening, but there could and should be more. What is the best way to distribute mostly shrinking pools of funds among highly competitive applicants? How consistent is the decision-making of grant review panels or journal editors? How influential are locations, institutions and ‘big names’ in manuscript or grant review processes, and should all reviews be double-blind? Decisions based on peer review are intrinsic and integral to the research process. We should thoroughly understand how these processes are working and what we should do to make them work better.

The final group to talk about here are the researchers themselves, perhaps the most important part of the research integrity ecosystem. Given the opportunity, most researchers enjoy talking about the way research works and about their own research practice. Listening to conversations between microbiologists and historians about publication rates and funding challenges, data generation and curation, and team versus sole-trader models of research is fascinating. Research about attitudes towards research integrity, and how it fits (or doesn’t fit) the way researchers do their research, would be valuable. Fundamentally, researchers critically assess new or existing information to find new ideas or solutions. It should come as no surprise when the same critical assessment is applied to proposals that they reconsider the way they do their research. ‘Research integrity research’ would help to support changes in behaviour that increase the trustworthiness and quality of research. This, really, is the goal of research integrity.

There’s no shortage of questions to answer. There’s growing awareness of research integrity as a discipline in its own right (perhaps the ultimate interdisciplinary research area). There are new places for this research to be published (such as Research Integrity and Peer Review). The benefits are compelling and clear. What are we waiting for?

*Paul is a member of the Editorial Board of Research Integrity and Peer Review. Aside from that, neither Paul nor Dan has any conflicts of interest to disclose, but they hope to in the near future.

This blog may be cited as:
Taylor, P. & Barr, D. P. (2016, 10 May) We would all benefit from more research integrity research. Research Ethics Monthly. Retrieved from https://ahrecs.com/research-integrity/benefit-research-integrity-research

Is the sky falling? Trust in academic research in 2015

 

For anyone who has been paying even the slightest attention to scholarly publishing over the past few years, it has been impossible to ignore a now-familiar pattern: an astonishing advance is published in a prestigious journal and presented at a press conference by proud scientists; the findings are then questioned, first on Twitter, then on blogs, then in newspapers; and finally the very same scientists face the same media again, this time to report that their findings were not correct, perhaps even fabricated. Corrections of part or all of the published research follow, sometimes quickly, sometimes slowly. Those outside academia wonder what is going on.

Behind the headlines, the issue may actually be worse. For every dramatic case that makes the news, there are many more in which researchers make their findings only partially available or, when asked, cannot find or share the data that underlie their findings – not because of fraud or fabrication, but because of sloppiness, poor training, or simply a lack of proper structures around the research.

What’s going on? Underlying it all is the often poorly appreciated fact that academic advances (especially in science) rarely, if ever, come in clear quantum leaps. More often, research findings are messy and incremental. Despite this, current ways of measuring academics and academic institutions incentivise – even require – academics to compete for publication in highly selective journals, punish those who don’t, and thus reward behaviour that fits this system. This issue was acknowledged explicitly by the UK Nuffield Council on Bioethics in its report, The Culture of Scientific Research in the UK, which noted that the “‘pressure to publish’ can encourage the fabrication of data, altering, omitting or manipulating data, or ‘cherry picking’ results to report.”

However, the good news is that reform is in the air regarding how science is assessed and viewed. This reform derives partly from external pressure arising from the high-profile cases but, more constructively and probably more sustainably, from the many conversations circulating over the past several years among academics and more enlightened publishers, policy makers and funders.

Such initiatives start from a growing understanding that measuring worth and awarding tenure primarily on the basis of a single, commercial measure of journals’ (and, by implication, scientists’) worth – the Thomson Reuters journal impact factor – is now outdated (if it was ever valid). An important element of the change is the technical development of practical alternatives, such as new article-level and alternative metrics, which aim to capture multiple dimensions of impact (e.g. those from PLOS, Impactstory and Altmetric). Crucially, these technical developments are now increasingly backed by international agreement that change is needed, highlighted by DORA and the UK’s HEFCE.

Other initiatives, such as governments’ (including the Australian Government’s) interest in wider societal impact and especially business competitiveness – none of which seems to be well predicted by current journal-level metrics – could, and probably should, also help unpick the dominance of older metrics. Equally important, however, is the culture of openness that is increasingly permeating academia, which includes open access to research outputs but, more crucially in this context, openness about the research process itself, including its methods and underlying data. All of this feeds into another increasingly important concept: transparency in reporting and reproducibility, which can help counteract waste in research.

So we are at a time of great change, when the technology that supports open availability of data and publications, new methods of research and academic assessment, and a prioritising of reproducibility are all converging on a research system with the potential to better support society’s needs. How quickly these opportunities are taken up remains to be seen – and that points to the harder challenge: changing the mindset of individuals and institutions.

Dr Virginia Barbour, COPE Chair
Brisbane, Australia
email: cope_chair@publicationethics.org
web: http://publicationethics.org/

These comments reflect my personal opinions and not necessarily those of COPE or my employers

This blog may be cited as:
Barbour, V. (2015, 26 July) Is the sky falling? Trust in academic research in 2015. AHRECS Blog. Retrieved from https://ahrecs.com/research-integrity/is-the-sky-falling-trust-in-academic-research-in-2015
