Editors are among the most powerful actors in the scientific community. By deciding which papers (not) to publish, they can influence public discourse and nurture – or obstruct – academic careers. However, there is little available information about aggregate patterns of scholarly journal editorships. This may change soon, as Andreas Nishikawa-Pacher writes, thanks to a novel dataset created in collaboration with Kerstin Shoch and Tamara Heck that provides new insights into the landscape of journal editing.
Editors play a key role in scholarly publication. Their decisions can shape the trajectory and speed of academic careers, can open up or close down lines of inquiry, can prune out papers purchased from paper mills, and can block the publication of unethical work (e.g. research that uses organs from prisoners, or research in the service of racist theories). But editors are human, and they are sometimes flawed. There is evidence to suggest that this position can lead to decisions that favour former students, and to decisions that are sexist or racist. This London School of Economics blog post, and the research and data it reports, takes a closer look at what is going on. The data collection was done by hand and will be hard to maintain, but it points to the value of maintaining such a dataset.
Such stories about scientific gatekeepers, however, often remain anecdotal, or the evidence remains limited to single-case studies, to specific sub-disciplines, or to a narrow range of journals. The aggregate extent of such patterns across the wider scientific system remains unknown. Ideally, one could uncover such potentially unethical activities with large-scale data about editorial boards in a highly structured format. Names, ORCID iDs, and affiliations could then be connected en masse to broad publication patterns to detect anomalies. However, such “editormetric” investigations can hardly be conducted. While data about editors are not “closed” – journals usually list them on their websites – neither are they “open” in the sense of the FAIR principles of open data: they are not trivially findable (F), accessible (A), interoperable (I), and reusable (R) on a grand scale. Instead, they are scattered across tens of thousands of journal websites in different formats, so that one would have to collect the data manually – a dauntingly laborious, time-consuming task.
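To make the idea of connecting editorial records to publication patterns concrete, here is a minimal sketch of one such check: given structured records linking editors' ORCID iDs to the journals they edit, one could compute each editor's share of papers published in their own journal, a crude flag for closer scrutiny rather than proof of anything. All records, names, and identifiers below are invented for illustration.

```python
# A minimal "editormetric" sketch using hypothetical in-memory records;
# a real analysis would draw on structured editorial-board data and
# large bibliometric sources. All ORCID iDs and journals are invented.
from collections import Counter

# Hypothetical editorial-board records: (editor ORCID, journal edited)
editors = [
    ("0000-0001-0000-0001", "Journal A"),
    ("0000-0001-0000-0002", "Journal A"),
    ("0000-0001-0000-0003", "Journal B"),
]

# Hypothetical publication records: (author ORCID, journal published in)
publications = [
    ("0000-0001-0000-0001", "Journal A"),
    ("0000-0001-0000-0001", "Journal A"),
    ("0000-0001-0000-0001", "Journal C"),
    ("0000-0001-0000-0002", "Journal B"),
    ("0000-0001-0000-0003", "Journal B"),
]

def self_publication_rates(editors, publications):
    """For each editor, the share of their papers appearing in the
    journal they edit -- a signal for closer scrutiny, not proof of
    misconduct."""
    edits = dict(editors)  # ORCID -> journal edited (one journal each here)
    totals = Counter(orcid for orcid, _ in publications)
    in_own_journal = Counter(
        orcid for orcid, journal in publications
        if edits.get(orcid) == journal
    )
    return {
        orcid: in_own_journal[orcid] / totals[orcid]
        for orcid in edits
        if totals[orcid] > 0
    }

rates = self_publication_rates(editors, publications)
```

With the toy records above, the first editor publishes two of three papers in the journal they edit; whether such a rate is anomalous would of course depend on field norms and journal size, which is exactly why large-scale structured data matter.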