ACN - 101321555 Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)
The battle for ethical AI at the world’s biggest machine-learning conference – Nature (Elizabeth Gibney | January 2020)

Bias and the prospect of societal harm increasingly plague artificial-intelligence research — but it’s not clear who should be on the lookout for these problems.

Diversity and inclusion took centre stage at one of the world’s major artificial-intelligence (AI) conferences in 2018. At last month’s Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada — a meeting that once had a controversial reputation — attention shifted to another big issue in the field: ethics.

If your institution is involved in AI, algorithmic or big-data research, who advises on its ethical dimensions? Given the potential for societal harm, perhaps it’s time for serious consideration of whether such work needs research ethics review.

The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies — such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding affecting already vulnerable populations. “There is no such thing as a neutral tech platform,” warned Celeste Kidd, a developmental psychologist at University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs. At the meeting, which hosted a record 13,000 attendees, researchers grappled with how to meaningfully address the ethical and societal implications of their work.

Ethics gap
Ethicists have long debated the impacts of AI and sought ways to use the technology for good, such as in health care. But researchers are now realizing that they need to embed ethics into the formulation of their research and understand the potential harms of algorithmic injustice, says Meredith Whittaker, an AI researcher at New York University and founder of the AI Now Institute, which seeks to understand the social implications of the technology. At the latest NeurIPS, researchers couldn’t “write, talk or think” about these systems without considering possible social harms, she says. “The question is, will the change in the conversation result in the structural change we need to actually ensure these systems don’t cause harm?”

Read the rest of this discussion piece
