Part 4: AI-manipulated digital content
What is AI, artificial intelligence?
A simple way to put it: machines that learn to think like humans. Until now, computers have reasoned in ones and zeros, yeses and nos. That’s why, if you wanted to give them instructions, you had to talk to them in their language: you had to learn programming and write code that other humans would have a hard time understanding.
Now you can give the computer instructions in a human language, and it will translate them into code and figure out what you want it to do. It’s a giant leap for human-computer interaction, of course, and I don’t think anyone has missed the upsides and downsides that come with it.
But we’re going to talk about it from a verification perspective. Because all of this means it’s easier than ever to generate false content: instead of having to create a manipulated image yourself in Photoshop, you can tell your computer what you want, and it will do it in seconds. And the same goes for text, video, audio, any conceivable media form.
So we are already being flooded with low-quality fake content, and as the quality rises, as it inevitably will, the dangers of AI-driven disinformation are likely to become increasingly severe.
Let's look at a couple of AI images, just to see the current state of the art.
The pope is riding a skateboard, in very high quality.
Elon Musk, living a less luxurious life a hundred years ago.
Two girls who don’t exist – and this one is quite impressive. If you look at the colours in their faces, you can see that they actually make sense: where blue and yellow meet, there is suddenly green. But if you look closer, the colours seem to float almost on top of their skin; it doesn’t look like a completely natural layer. More about that later.
Here’s Ryan Gosling, a couple of years into the future.
And some kind of sports tournament, out in the open. There are plenty of visual clues here too – the more chaotic an image is, the likelier it is that the AI gets some things wrong. Look at the cables, for example, or the duplicated screens.
Then let’s get closer to the disinformation sphere. These kinds of pictures were among the first to be AI-faked – Donald Trump carrying weapons or being arrested. The latter is perhaps the most likely to lead to real-world harm in a precarious situation. Imagine something like January 6th, the storming of the Capitol, if lifelike pictures suddenly appeared claiming to show Donald Trump being arrested. That’s when AI-generated content, even content that would otherwise be deemed unbelievable, could quickly escalate a crisis. And that’s when we as journalists need to be prepared.
So, let's look at some videos. This is an early, harmless example of a “deepfake” – AI-generated content in which a real person is shown doing things they’ve never done. It went viral a couple of years ago, and I don’t think the purpose was for anyone to believe it; otherwise they might’ve left out the cat. It just shows what this content could look like in 2020 or so.
This is a newer example. As you can see, it’s still satirical and not pretending to be real. But it’s much higher quality: the transitions are smooth, and they are also covered up by cartoonish scenes to make sure the AI’s limitations aren’t too visible.
Then let's again get closer to the disinformation sphere. This video is also clearly fake, so there’s little risk anyone would think it’s real, but it carries a political message: Putin is high-energy, Biden is low-energy. It’s not disinformation, but a sort of classical propaganda.
Then let's look at this video, which went viral on TikTok, YouTube and X before the EU elections this year. This one actually claims to be real. And it spread enough, in fact, for the Reuters verification team to write a fact check about it.
Let's look closer at this case. Because it followed a common recipe for disinformation virality.
- The video was uploaded by several accounts simultaneously
- This was combined with a flood of comments from obscure accounts talking about the decay of the West, the loss of family values, and how bizarre it is that this man would turn Europeans against Russia.
- Then ordinary users begin asking “is this true?” in the comments, or even repost the video while asking that question. In doing so, they unintentionally help the algorithms spread the content even further.
- By the time the Reuters verification team has fact-checked this video, it’s likely that another one has already gone viral – because creating disinformation takes far less time than debunking it.
- Therefore, I think we’ll need a shift not just among us journalists, but among the general public. Instead of looking at suspicious content and asking, “is this true?”, as we’ve learnt to do since the birth of the internet, we now need to look at it and think “it’s not true until it’s been verified”. Otherwise, we’ll never keep up with the falsehoods.
So how can we detect AI content?
There are tools, but most are still rather unreliable and inconsistent. Therefore, traditional journalism is often the way: we disprove the content by disproving the claim. This is the opposite of what we talked about with reverse image search – there we could disprove the claim by disproving the content. That’s the easiest, quickest route, but with AI content it’s often not possible. So we need to take the slower route and prove that what’s claimed to be shown didn’t happen – and that therefore the content must be fake.
But there are a few red flags that could signal that a piece of visual content has been AI-generated:
- The Three-Finger Problem – more complicated details, such as fingers, may be incorrectly generated. The AI understands that there should be fingers, but not always that there should be five of them. This is the same problem we saw with the cables in an earlier picture.
- The Disney Hair Dilemma – hair is another thing that’s hard to generate, and AI often compensates by making it a bit too perfect. If you see hair that is too smooth, almost cartoon-like, that could point to some kind of manipulation.
- The Symmetry Senility – AI programs often fail to keep the content symmetrical and consistent. A piece of text might follow a pattern, but if you compare its beginning and end, it’s often as if the program forgot where it started along the way. This is especially striking with pictures and videos, where, for example, earrings might look different from ear to ear: by the time the AI creates the second one, it has forgotten what the first looked like.
- The Melting Man – AI content is generated as one piece, one single flow, which means the program often treats the whole image as one object. It might not understand that the pope’s glasses aren’t actually part of his face, which is why you can see them melting together in this picture. His hand is melting here too – so looking for signs of melting in places the AI might have found hard to generate can prove valuable.
So, to sum up: unfortunately, there are few shortcuts to verifying AI-generated content right now. There are signs we can look for to raise suspicion that a piece of visual content is fake, but it is then often up to traditional journalistic methods to prove that something didn’t happen – for example, calling and talking to the right people, or going to the scene.
With that, let's jump to the exercises.
Exercise 1: Bellingcat recently tested a tool developed to detect AI images, called AI or Not. It had quite a good success rate, often better than what the human eye could manage. Unfortunately, it’s only free for ten images per month, and it requires registration.
Still, it’s a good way to experience the current state of the art in AI detectors. If you don’t want to register, you can just read along to see the results. If you have registered, upload this picture to the program and note the result:
https://drive.google.com/file/d/1FOUeb_FE6sucihw5ysSq0tsix3PQ7ha1/view?usp=drive_link
Exercise 2: If you got the same result as me, the first exercise should’ve correctly shown that the image was AI-generated. Now try uploading the version below to the program instead.
https://drive.google.com/file/d/1V9a664Ofg_M2Yl3x2YnkmRJamjgUGPGo/view?usp=drive_link
Now the program likely says “human”. That’s because this version of the image has a lower resolution. This is one of the primary weaknesses of these programs: on social media, we often work with low-quality pictures, and the lower the quality, the higher the risk that AI detectors will give false results.
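Since resolution matters so much, it can be worth checking an image’s pixel dimensions before trusting a detector’s verdict. Below is a minimal Python sketch that reads the width and height straight from a PNG file’s header using only the standard library; the 512×512 warning threshold is purely an illustrative assumption, not a rule from AI or Not or any other detector.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data: bytes):
    """Return (width, height) from PNG bytes, or None if not a valid PNG.

    After the 8-byte signature, a PNG starts with an IHDR chunk:
    4-byte length, 4-byte type b"IHDR", then width and height
    as big-endian unsigned 32-bit integers.
    """
    if not data.startswith(PNG_SIGNATURE) or data[12:16] != b"IHDR":
        return None
    width, height = struct.unpack(">II", data[16:24])
    return width, height

def resolution_warning(data: bytes, min_pixels: int = 512 * 512) -> str:
    """Flag images that may be too small for reliable automated detection.

    The min_pixels threshold is an illustrative assumption, not a
    documented limit of any specific detector.
    """
    dims = png_dimensions(data)
    if dims is None:
        return "not a PNG (or corrupted header)"
    w, h = dims
    if w * h < min_pixels:
        return f"{w}x{h}: low resolution - treat detector verdicts with caution"
    return f"{w}x{h}: resolution may be sufficient for automated detection"
```

In practice you would call `resolution_warning(open("image.png", "rb").read())` before uploading the file, and weigh the detector’s answer accordingly.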
Exercise 3: This image is AI-generated – what does the program say? Can you spot anything with the naked eye?
https://drive.google.com/file/d/1ehxz3pj4td0CesddoJDxBTqQvVpPkxkg/view?usp=drive_link
Exercise 4: As we’ve seen, the programs are fallible – and while the human eye might be fooled by an AI image at first glance, there are still visual clues we can learn to look for.
What in this picture could tell you that it was AI-generated?
https://drive.google.com/file/d/1N1-tqB1z-lnYV4jqWcwcT83HD_SPLuFB/view?usp=drive_link
References:
Reuters Fact Check: Macron dancing clip is altered 80s nightclub footage
https://www.reuters.com/fact-check/macron-dancing-clip-is-altered-80s-nightclub-footage-2024-03-21/
The Guardian – How to tell if an image is AI-generated
https://www.theguardian.com/technology/2024/apr/08/how-to-tell-if-an-image-is-ai-generated
McGill University – How to Spot AI Fakes (For Now)
https://www.mcgill.ca/oss/article/critical-thinking-technology/how-spot-ai-fakes-now
Bellingcat – Testing AI or Not: How Well Does an AI Image Detector Do Its Job?
https://www.bellingcat.com/resources/2023/09/11/testing-ai-or-not-how-well-does-an-ai-image-detector-do-its-job/
AI or Not – https://www.aiornot.com/