This week is Peer Review Week, the slightly more popular academic celebration than pier review week. Peer review is an essential part of scientific publication and is – like Churchill’s democracy – the worst way of doing it, except for all of the others. It is imperfect mainly because it is done by people, so there is a natural desire to try to improve it.
One suggestion for improvement is to use double blind reviews. At the moment most journals (including Methods in Ecology and Evolution) use single blind reviewing, where the author isn’t told the identity of the reviewers. The obvious question is whether double blind reviewing actually improves reviews: does it reduce bias, or improve quality? There have been several studies, across several disciplines, that have looked at this and related questions. Having looked at them, my summary is that double blind reviewing is fairly popular, but makes little or no difference to the quality of reviews, and reviewers can often identify the authors of the papers anyway.
Does Double Blind Reviewing Improve Review Quality?
I found 13 studies about this (some with replies and re-analyses). Overall, there is little effect of double blinding on quality: in some studies the recommendations to the editor were more negative under double blinding (i.e. more likely to recommend rejection or major revisions), but of the five studies that looked at review quality (mostly as judged by authors or editors), three found no effect, and the other two found opposite effects – one found higher quality with double blinding, the other lower.