It’s a well-known problem with clinical trials: Researchers start out saying they will look for a particular outcome—heart attacks, for example—but then report something else when they publish their results. That practice can make a drug or treatment look like it’s safer or more effective than it actually is. Now, a systematic effort to find out whether major journals are complying with their own pledge to ensure that outcomes are reported correctly has found many are falling down on the job—and both journals and authors are full of excuses.
To accompany this report in Science on the troubling findings by the COMPare team, we have included links to 23 news reports and papers on the conduct and reporting of clinical trials.
Starting 4 years ago, his team’s Centre for Evidence-Based Medicine Outcome Monitoring Project (COMPare) examined all trials published over 6 weeks in five journals: Annals of Internal Medicine, The BMJ, JAMA, The Lancet, and The New England Journal of Medicine (NEJM). The study topics ranged from the health effects of drinking alcohol for people with diabetes to a comparison of two kidney cancer drugs. All five journals have endorsed the long-established Consolidated Standards of Reporting Trials (CONSORT) guidelines. One CONSORT rule is that authors should describe the outcomes they plan to study before a trial starts and stick to that list when they publish the trial.