Journals and researchers are under fire for controversial studies using this technology. And a Nature survey reveals that many researchers in this field think there is a problem.
In September 2019, four researchers wrote to the publisher Wiley to “respectfully ask” that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity (ref. 1).
Many researchers found it disturbing that academics had tried to build such algorithms, and that a US journal had published a research paper on the topic. Nor was the 2018 study an isolated case: journals from publishers including Springer Nature, Elsevier and the Institute of Electrical and Electronics Engineers (IEEE) have also published peer-reviewed papers describing the use of facial recognition to identify Uyghurs and members of other Chinese minority groups. (Nature’s news team is editorially independent of its publisher, Springer Nature.)
1. Wang, C., Zhang, Q., Liu, W., Liu, Y. & Miao, L. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9, e1278 (2019).