With several colleagues, I have recently stumbled into investigating what we call ‘untrustworthy’ data in pain. The story started when we were updating a systematic review and meta-analysis of psychological interventions for chronic pain.1 Three of the 70+ eligible papers had results that were staggeringly better than anyone else’s, by an order of magnitude. The same team had produced all three papers. Either they had discovered spectacularly effective ways of delivering CBT and exercise to people with musculoskeletal (spinal) pain, in which case it was urgent that we all learned from the trials, or there was a problem with their data.
This editorial, published in July 2023, discusses how a systematic review led a group of researchers to doubt the results of a few clinical trials in pain management, the steps they took, the resulting retractions, and the unsatisfactory responses of some journals and investigators. It provides a useful case study of how researchers should act if they are suspicious of clinical trial results. It also highlights which journals 'walk the talk' when it comes to the COPE code.
We systematically searched for their RCTs on physical and/or psychological interventions for spinal pain and found 10 trials. We ran these through a risk-of-bias tool, which turned up little, mostly because information was missing. Then, we applied the Cochrane Pregnancy and Childbirth review group’s Trustworthiness Screening Tool, developed for routine use by this group on trials eligible for meta-analysis. This tool checks for features of good practice, such as trial preregistration and a publicly available ethics application, and also examines the feasibility and distributions of data, from baselines and from tests. This generated concerns about eight of the ten trials, such as identical data at baseline across trials, zero attrition, and all changes being extraordinarily large. We published our findings.2
We then approached the editors of the six journals that had published these trials (see3) with a copy of the published paper,2 expressing concern. Three of the journals instigated investigations consistent with the COPE (Committee on Publication Ethics) guidelines they endorsed (as does the British Journal of Pain). This resulted in two retractions by journals and one by the trial authors. Of the other three, one (which had published four of the papers concerned) wrote to the first author, was told he was unavailable, and decided to take it no further; the two others appeared to find it distasteful that we had raised the subject, implying that we were behaving unprofessionally, and took the first author’s assurances at face value. One of those has since reconsidered and retracted the paper; the other (though fully signed up to COPE) preferred resolution by ‘academic debate’, as if the authenticity of data were a matter of personal preference. We declined.
Williams AC de C. Dubious data and contamination of the research literature on pain. British Journal of Pain. 2023;0(0). doi:10.1177/20494637231190866
Free Access: https://journals.sagepub.com/doi/10.1177/20494637231190866
