Editor’s Note: Today’s post is by Daniel S. Katz and Hollydawn Murray. Dan leads the FORCE11 Software Citation Implementation Working Group’s journals task force, and is the Chief Scientist at NCSA and Research Associate Professor in Computer Science, Electrical and Computer Engineering, and the School of Information Sciences at the University of Illinois. Until recently, Holly was the Head of Data and Software Publishing at F1000Research, where she led on data and software sharing policy and strategy. She currently works as Research Manager at Health Data Research UK. The work discussed here is supported by representatives and editors from: AAS Journals, AGU Journals, American Meteorological Society, Crossref, DataCite, eLife, Elsevier, F1000Research, GigaScience Press, Hindawi, IEEE Publications, Journal of Open Research Software (Ubiquity Press), Journal of Open Source Software (Open Journals), Oxford University Press, PLOS, Science Magazine, Springer Nature, Taylor & Francis, and Wiley.
Irrespective of whether you feel there is a reproducibility crisis, good scientific practice is to clearly explain how you conducted an experiment or research project, to increase the chances that your work can be reproduced. This Scholarly Kitchen piece suggests a positive approach, and offers a useful discussion for researchers of all experience levels.
In March 2020, Ferguson et al. published a report that used software to model the impact of non-pharmaceutical interventions in reducing COVID-19 mortality and healthcare demand. This report, and others like it, was widely discussed and appeared to have a rapid impact on government policies. But almost as quickly, both political and scientific criticism began. The scientific criticism included questions about the modeling, such as the parameters and methodology used, which could not be easily addressed because the code was not shared alongside the publication.