Mutual criticism in research is necessary, but it needn’t be nasty
One unanticipated consequence of the current pandemic is that many scientists are cutting one another some slack. Journal editors have become more relaxed about deadlines, funding agencies are granting extensions with little or no explanation needed, and universities have given graduate students more time—and in some cases even more funding—to finish their Ph.D.s. And many scholars are working productively from home, relieved of the pressure to appear in person when the work does not actually require it. In a piece in Nature last April, Gemma Derrick, a senior lecturer in higher education at Lancaster University in England, wondered whether such kindness might be sustained in the future. Derrick’s scholarly work “focuses on building a kinder, gentler, more inclusive research culture by modifying one of its harshest processes, peer review,” and she proposes that we use the “momentum of COVID-19” to “firmly embed kindness into research practice.”
Our approach to peer review, supervision and mentoring needs to be kinder, not only because kindness is a quality we should want in science but because it makes for more constructive collaborations.
In academic life, being kind is too often viewed as secondary to being successful. One scientist I knew when I was an assistant professor told me that after she got tenure, she had to “learn how to be nice again.” (To her credit, she was by then actively engaged in mentoring younger women, myself included.) If we want to nurture talent, particularly among those who have been historically underrepresented in science and may therefore feel uncertain of their place in the endeavor, it behooves us to consistently treat students and co-workers with dignity and respect.

But research practice also refers to how we evaluate scientific claims at workshops and conferences, how we judge grant proposals, and how we act as reviewers for papers submitted to professional journals. Here things get trickier, because how do we know whether a scientific claim is right? How do we know whether the methods a group has used are reasonable and have been applied with rigor? How do we know whether the conceptualization behind a model reasonably reflects the real world?