Devices that can record and change brain activity will create privacy issues that challenge existing human-rights legislation, say researchers.
Scientific advances are rapidly making science-fiction concepts such as mind-reading a reality — and raising thorny questions for ethicists, who are considering how to regulate brain-reading techniques to protect human rights such as privacy.
Not so long ago, the idea that machines could read and share people's thoughts would have seemed pure science fiction. That may no longer be the case: mind-reading technology is edging ever closer to reality. There is as yet no ethical framework governing its use, but one will be needed soon. Information technology more broadly has seen little work on ethical frameworks, and AI, neurotechnology and autonomous weapons are making the need for them urgent.
How to regulate neurotechnology “is not a technological discussion — it’s a societal one, it’s a legal one”, Gabriela Ramos, UNESCO’s assistant director-general for social and human sciences, told a UNESCO meeting on the topic.
Advances in neurotechnology include a neuroimaging technique that can decode the contents of people’s thoughts, and implanted brain–computer interfaces (BCIs) that can translate people’s imagined handwriting into text1.
The field is growing fast — UNESCO’s latest report on neurotechnology, released at the meeting, showed that, worldwide, the number of neurotechnology-related patents filed annually doubled between 2015 and 2020. Investment rose 22-fold between 2010 and 2020, the report says, and neurotechnology is now a US$33-billion industry.