Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS) | ACN 101321555

Resource Library

Research Ethics Monthly | ISSN 2206-2483

AI Research is in Desperate Need of an Ethical Watchdog – Wired (Sophia Chen | September 2017)


 




About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They’d made a machine learning algorithm that essentially works as gaydar. After training it with tens of thousands of photographs from dating sites, the algorithm could perform better than a human judge in specific instances. For example, when given photographs of a gay white man and a straight white man taken from dating sites, the algorithm could guess which one was gay more accurately than actual people participating in the study. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

The increasing number of projects such as this highlights the degree to which AI research can be an unanticipated source of harm. Currently, such work is unlikely to be submitted for research ethics review, and the researchers will probably be unfamiliar with ethical considerations. Existing research ethics committees and institutional arrangements are ill equipped to be helpful, but some kind of framework is required to give researchers feedback on the ethics of their work.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Read the rest of this discussion piece


