Ramping up brain chip regulations

Protecting our freedom of thought from brain-computer interface technologies...
13 February 2024

Interview with Marcello Ienca, EPFL


[Image: A robotic-looking woman's face behind a wall of computer code.]


While many are sceptical of Neuralink's long-term ambitions, others argue it's important that we be prepared for them in any case.

If the company does somehow manage to combine the efficiency of our brains with powerful artificial intelligence software, for example, then we need to make sure we're ready for the impact on society. And what's to stop the companies that control the software inside these chips from collecting our thoughts for their own ends?

Marcello Ienca is a cognitive scientist who focuses on the ethical and policy implications of neurotechnology…

Marcello - Human enhancement is something that is inherent in human beings. We tend to change over time and we use technology to improve our functioning. If you're wearing eyeglasses, you are improving your capacities. The same goes for wearing shoes: if we wear shoes, we can run faster than if we were barefoot. Human enhancement is not necessarily something that is morally problematic; it becomes morally problematic if it goes beyond, or is in violation of, fundamental ethical principles. My concern is not that we might live in a world where people can boost their memory or their other executive functions - I think that would actually be quite desirable - but that we have to be cautious on the equality side, and we should probably also have a discussion about which human abilities are more legitimate to improve and which ones are probably not desirable.

James - Are we talking here about these dystopian possibilities where there could be an invasion of our mental privacy? The people who run the software that we're using in these brain-computer interfaces could have access to our thoughts and potentially even change them. Is that a realistic concern?

Marcello - I do think it is a realistic concern. Neurotechnologies can be used to reveal, or predictively infer, information about mental states - including thoughts and emotions - from the human brain. This can be done in two ways. In a broad sense, we are already doing it: we can take the data collected by neurotechnologies and mine them with artificial intelligence algorithms to reveal privacy-sensitive information - even from seemingly non-private data - by establishing statistical correlations. But what will be possible, in my view, relatively soon is mind reading in the strong sense: brain decoding. We can use machine learning models to reconstruct the visual and semantic content of mental states from brain activity. In the last couple of years, large language models like GPT, which ChatGPT is based on, have also proved extremely useful for achieving that goal.
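
To make this concrete: at its core, a brain decoder is a model trained to map neural features to mental-state labels. The short Python sketch below is a minimal, hypothetical illustration - the "EEG-like" data are entirely synthetic and the rest-versus-imagery task is assumed - whereas real decoders work on fMRI, EEG or implant recordings with far richer models. The point is that the privacy concern follows from exactly this kind of learned statistical correlation.

    # Minimal sketch: decoding a binary "mental state" from synthetic
    # EEG-like band-power features. All data and labels are simulated;
    # real decoders use fMRI/EEG/implant recordings and richer models.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    n_trials, n_features = 400, 32          # trials x simulated feature channels
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, 2, size=n_trials)   # hypothetical label: 0 = rest, 1 = imagery

    # Inject a weak class-dependent shift into a few channels, mimicking
    # the faint but learnable correlations that decoders exploit.
    X[y == 1, :4] += 0.8

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_test, decoder.predict(X_test))
    print(f"Held-out decoding accuracy: {acc:.2f}")  # well above chance (0.5)

Even this toy decoder recovers the injected signal well above chance, which is why seemingly innocuous neural data can leak sensitive information once enough recordings and labels are available.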

And again, this is not something dangerous per se. We need to read the mind in order to make people's lives better. For example, there are people with locked-in syndrome or severe paralysis and, if we can read their thoughts, we can help them restore interaction with the external world - and that's a moral imperative in my view. But at the same time, in the big data economy we live in, companies will also have access to potentially sensitive information about people's minds. I think this is a real concern, because the ability to seclude the information in our minds is a necessary requirement for freedom of thought.

James - The mind boggles as to what the nefarious actors of the world - authoritarian governments, for example - could potentially do with this sort of power. What can we do to mitigate these risks?

Marcello - The brain is the most complex entity in the universe. Therefore, regulating it will not involve easy solutions. We should definitely make sure that companies in this field make their businesses ethical by design. I'm glad to say that a lot of neurotechnology companies are establishing ethics advisory committees, or developing ethical guidelines that they operationalise within the company. Unfortunately, Neuralink is not one of them.

On another level, I think we need to clarify what brain data really are and where they should sit in data protection regulation. Currently, it's quite a puzzle. They're definitely health data under the European General Data Protection Regulation, but, unlike genetic data, for example, they're not currently classified as a special category of health data. Genetic data can probably teach us a lot about how brain data should be regulated, because DNA and the information in the brain share a lot of characteristics: both are predictive of present and future health status and behaviour, and both are informationally rich. Brain data also have a temporal resolution that genetic data lack, so I think regulation should catch up with that.

Then we have the level of fundamental human rights. I have introduced the notion of neurorights together with Roberto Andorno, and I think in the next few years we'll see a lot of regulatory developments in this field.

James - That was going to be my final question. It's obviously such a young field, based on a newly emerging technology, and the risks, you say, are not as far down the track as many of our listeners might think. Are we going to be prepared to face them when they come?

Marcello - I think so. The entire history of technology is a history of dealing with dangers. Any time a new technology comes along, there are a great many opportunities, but also a great many dangers, and the more powerful the technology, the greater the danger. Artificial intelligence and neurotechnology are extremely powerful technologies that will probably help millions of people worldwide. I should emphasise here that ethics is not just about preventing harm; it's also about promoting good. We currently live in a world where hundreds of millions of people suffer from disorders of the brain and mind. We can't cure what we can't understand, so we need to develop technologies that help us read and understand the brain, and also modulate brain function, in order to alleviate the major burden of disease caused by neuropsychiatric disorders. But at the same time, we have to make sure that this technological innovation occurs within certain ethical boundaries and is ethical by design.

It will not be easy, but I think the timeline is right. If we look at other technologies - with my students I use the example of social media - it's pretty clear that the genie is out of the bottle. It's too late to regulate social media platforms, because we only started thinking about the ethical and societal implications of those technologies when it was too late. With neurotechnology, I think we are still in time, but we have to act now.
