Fake AI faces are deemed trustworthy

Could you spot a human face generated by AI? Probably not…
21 February 2022

Interview with 

Sophie Nightingale, Lancaster University


We have entered a new era of technological advancement. Mechanical evolution may appear to have slowed down to the likes of you and me, but in the world of computer science there is an unprecedented pace of novel development. Part of this is down to the computing community's open access ethos: platforms like GitHub allow users to share, tweak and comment on new software. But there is a dark side to this genius, as Sophie Nightingale from Lancaster University explains to Harry Lewis. They start by looking at a website you can visit too: thispersondoesnotexist.com...

Harry - You are telling me - this is genuinely true - that this person doesn't exist. I'm confronted by a Caucasian, blonde female who looks maybe in her mid-twenties, and she looks completely real, Sophie. I can't fault this at all. It looks like it's straight off Facebook or something like that; a LinkedIn profile, maybe.

Sophie - Absolutely. That is somebody who does not exist in the world.

Harry - Okay. Let's break this down: we're talking about faces synthesised by a type of artificial intelligence algorithm. What does that consist of?

Sophie - This is a type of machine learning. It's a relatively new type known as generative adversarial networks, or GANs. What's quite special about these is that they use two neural networks which are pitted against each other. So, imagine a two-player game where you're in battle with your opponent: one of those networks is a generator, the other is a discriminator. The discriminator is given a large collection, or corpus, of real images, and in this case we're talking about images of faces of people who are real. Then, the generator's task is to try and synthesise an image that's good enough to trick the discriminator into believing it's a real face. Over time, the generator receives feedback from the discriminator, refines its parameters, and eventually generates faces that the discriminator can't tell apart from the real images anymore.
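To make that two-player game concrete, here is a minimal, hypothetical training loop in PyTorch. It is a toy sketch of the generator-versus-discriminator idea only; the network sizes, learning rates and 64x64 image shape are illustrative assumptions, not the StyleGAN system behind thispersondoesnotexist.com.

```python
# Toy GAN sketch: a generator and a discriminator pitted against each
# other, as Sophie describes. Illustrative sizes only.
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the random noise vector the generator starts from

# Generator: turns random noise into a (flattened) fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),  # pixel values in [-1, 1]
)

# Discriminator: scores an image as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images):
    """One round of the game. real_images: (batch, 64*64*3) floats in [-1, 1],
    flattened photos from the corpus of real faces."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator: reward it for telling real from fake.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # don't update the generator here
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: reward it for fooling the discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Each call to training_step plays one round of the game; over many rounds the generator's fakes become progressively harder for the discriminator, and eventually for people, to tell apart from real photos.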

Harry - So, if you were to start this off: you write the algorithm, you give it the information it needs to begin, and then you step back and you leave these two networks to themselves - there's no longer any human interaction.

Sophie - Absolutely, yes. This is what's known as unsupervised machine learning. There's no need for a human to do anything once you've given it the original corpus of images.

Harry - And in your research, you found that the average person trusts these synthesised faces, sometimes more than images of real-life people?

Sophie - Yes. On average, we found that people's ratings of the synthetic faces were slightly higher than the real faces. Now, it wasn't a huge difference, but it was significant.

Harry - Why is that so exciting? And why is it also quite terrifying?

Sophie - This is an incredible advance in terms of technological capability and there's definitely potential to use these for good. For example, we can use and apply these to security and defence systems, but there's also the flip side of making this technology accessible to everybody and sharing it openly, which means that there's a lot of potential for harm as well: for revenge porn, financial fraud, adding to disinformation and misinformation on social media, and many other novel ways that perhaps we are not even yet aware of. And the other thing is the liar's dividend: it allows for any unwelcome recording that is in the media to be denied by somebody. They can simply call into question its authenticity.

Harry - I get the impression that what you're alluding to might be the future of where this technology goes. It's perhaps not limited to pictures; in the future it could extend to video and audio content?

Sophie - Absolutely. That's exactly right. I would say we're not far off.

Harry - And looking to the future with your research, I get the impression it's sort of a call to arms.

Sophie - If nothing else, the main thing I wanted to get across is that we need to do something about this. You could build and embed watermarks into image and video synthesis networks so that, down the line, when these come into play, we have a reliable way of identifying whether an image or video is synthetic. It's important to do this now because it's likely that other forms of AI-synthesised content, for example audio and video, are on the path to being indistinguishable from real content as well. Once that technology is released, we can't take it back. Once it's in the world, we can't put it back into a box.
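As a toy illustration of that watermarking idea, the sketch below hides a known signature in an image's least-significant bits and checks for it later. Real proposals embed robust watermarks inside the synthesis network itself so they survive compression and editing; this fragile scheme, with its made-up SIGNATURE tag, only illustrates the principle of marking content at creation and verifying it afterwards.

```python
# Toy watermark: hide a known bit pattern in an image's least-significant
# bits at creation time, then check for it at verification time.
import numpy as np

SIGNATURE = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Write the signature into the first pixels' least-significant bits."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[:len(SIGNATURE)] = (flat[:len(SIGNATURE)] & 0xFE) | SIGNATURE
    return marked

def has_watermark(image: np.ndarray) -> bool:
    """Check whether the signature is present in the least-significant bits."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[:len(SIGNATURE)] & 1, SIGNATURE))

# Example: a random uint8 array standing in for a synthetic image.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(has_watermark(embed_watermark(img)))  # True
print(has_watermark(img))                   # almost certainly False
```

A scheme like this only works if it is built in before the technology spreads, which is Sophie's point: the check has to exist from the moment the synthetic content is created.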
