ACN - 101321555 | ABN - 39101321555

Australasian Human Research Ethics Consultancy Services Pty Ltd (AHRECS)


AI Research is in Desperate Need of an Ethical Watchdog – Wired (Sophia Chen | September 2017)

Posted by saviorteam in Human Research Ethics on December 5, 2017
Keywords: Controversy/Scandal, Ethical review, Good practice, Human research ethics, International, Merit and integrity, News, Online research, Research ethics committees, Research results
[Image: a world map with global hotspots linked by arcs]

About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: they’d made a machine learning algorithm that essentially works as gaydar. After training it with tens of thousands of photographs from dating sites, the algorithm could perform better than a human judge in specific instances. For example, when given photographs of a gay white man and a straight white man taken from dating sites, the algorithm could guess which one was gay more accurately than actual people participating in the study.* The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.

The increasing number of projects such as this highlights the degree to which AI research can be an unanticipated source of harm. Currently, such work is unlikely to be submitted for research ethics review, and the researchers will probably be unfamiliar with ethical considerations. Existing research ethics committees and institutional arrangements are ill-equipped to be helpful, but some kind of framework is required to give researchers feedback on the ethics of their work.

Alas, their good intentions fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received email death threats.

But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data & Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.

Read the rest of this discussion piece



