How to spot AI-generated images

5 April 2023

Despite convincing photographic evidence to suggest otherwise, the Pope did not wear a high-fashion puffer coat and a picture of Julian Assange in Belmarsh prison wasn’t recently leaked. Both images were created using AI. 

While you might not have been fooled by pictures of Vladimir Putin arriving at court in an orange jumpsuit, Donald Trump being arrested on the streets of New York, or Elon Musk holding hands with the CEO of General Motors, AI images like these are becoming more common and arguably harder to identify.

Although the images may look genuine at first glance, there are clues that they aren’t real. Here’s what to look out for when trying to spot images created using apps like Midjourney or DALL-E 2.

Is it likely to be true?

The first question to ask about any picture you see online that you suspect might be fake is: does the scenario seem realistic?

The Pope in a puffer coat did fool some people, possibly because popes have accessorised extravagantly in the past. In some cases, it appeared alongside a genuine photograph of Pope Francis signing a Lamborghini he had been gifted (which was later auctioned for charity).

But it’s important to ask yourself if the situation depicted is likely to have happened. Is it realistic that President Macron of France would have been caught up in the crowds of a Parisian protest? It’s not impossible, but it does seem unlikely.

Bizarre and surprising photos of famous people obviously do exist. And there can be a tendency to believe things that back up our world view. But if it seems too good to be true, it may well be.

The image’s history

As with any image you suspect might have been edited, fact checking usually starts with an attempt to find the original. You can do this with a reverse image search tool like Google Images or Yandex to find where else the image has appeared online.
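If you do this often, the reverse-search step can be scripted. The sketch below builds clickable search links from an image's web address; the URL patterns for Google Lens and Yandex are assumptions based on how those services currently accept image links, and may change over time.

```python
from urllib.parse import quote


def reverse_search_urls(image_url: str) -> dict:
    """Build reverse image search links for a given image URL.

    The URL patterns below are assumptions about how Google Lens and
    Yandex currently accept an image address; they are not guaranteed
    to stay stable.
    """
    encoded = quote(image_url, safe="")
    return {
        "google": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }


# Example: generate links you can open in a browser to see where
# else the image has appeared online (the image URL is illustrative).
links = reverse_search_urls("https://example.com/suspect-image.jpg")
for site, url in links.items():
    print(site, url)
```

Opening the resulting links in a browser shows you the same results a manual reverse search would, which is usually the quickest route to an image's earliest appearance.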

You may come across an earlier version of the image where the creator admits how they made it. If the image only appears on social media sites and hasn't been published by a reliable news website, that may be a clue it isn't real.

Something else to check is whether images of the scenario taken from different angles also exist. A politician or celebrity doing something unexpected whilst out and about is likely to have been photographed by multiple people, producing slightly different pictures of the same thing. You can check this by searching for images using keywords that describe the scene in question.

Searching for images of the Pope in a puffer jacket shows only one image where the coat and crucifix are exactly the same colour and design, which can be another indication the picture isn’t real.

With the fake image of Julian Assange, a clue that the image wasn’t a genuine leak was that Mr Assange’s wife, who is active on Twitter, hadn’t shared it, and nor had any reputable media outlets. 

However, sharing by a prominent account isn't a guarantee of authenticity either. Former President Donald Trump recently shared an AI-generated image of himself kneeling in prayer.

Look for classic AI mistakes

Because the tools used to create AI images are being improved all the time, these tips might not work in future. We’ll keep updating this guide as things change and we collect more tips on how to spot fakes. But right now, these are some giveaways within the image that it might have been made with AI.

Fingers

It's not a guarantee, but AI-generated images of people often go wrong with hands, especially the number of fingers.

A classic example of this can be seen in this AI-created image apparently showing armed children, where the programme has struggled to create a realistic left hand for the child in the middle.

An otherwise extremely convincing set of images created using Midjourney of people at a party was most obviously let down by the people pictured having too many fingers, with some digits appearing to float in mid-air or emerge from hands in an unusual way.

But fingers appearing as you might expect doesn’t prove the validity of an image. Sometimes AI does get fingers right, and AI image-generating software is likely to make this mistake less over time as it improves.

Inconsistencies with features and accessories

If you zoom in on the fake party pictures above, you might also notice that the 'people' in them not only have too many fingers but also far more teeth than expected.

The fake picture of Julian Assange that we recently fact checked also had some visual discrepancies. If you look closely at his outfit, there’s a mysterious colour change on the arm despite it appearing to be a single garment. The hair follicles on the side of his head look drawn on and he almost looks like he has a nose ring.

In the set of images of the Pope we fact checked, one showed him holding what looks like a coffee cup by its lid, rather than around the cup as you might expect, and his crucifix appears to have half its chain missing.

Again, these things alone don’t prove the image has been made using AI. Sometimes when you take real photos, facial features and textures can look a little odd. But multiple inconsistencies may be a clue that the picture isn’t real.

An odd gloss or sheen

Many of the AI images we've seen have a kind of odd gloss or sheen to them, almost as if a cartoonish filter has been applied. Genuine photographs, especially those taken on modern smartphones or professional cameras, just wouldn't look like this.

Some of the examples above of President Macron have this tell-tale look. A few of the Midjourney-created images of Donald Trump getting arrested also have the slightly cartoonish filter (several of the police officers also have oddly blurred faces).

These pictures of prominent UK MPs in surprising job roles may be obvious fakes because of what they show and because they appear together as a set, but some have an artificially smoothed look that is often a giveaway an image has been made with AI.

Specialist in manipulated or artificially generated media Henry Ajder described this as an “aesthetic sheen” to the Washington Post, saying: “AI software smooths [faces] a bit too much and makes them look too shiny.”

Dodgy text

As Luke Bailey, head of digital at the i Paper, pointed out, although AI can usually recreate logos, it currently struggles to recreate text, resulting in the garbled lettering on the McDonald's bag apparently being held by Prince Harry.

Text can easily be added to AI-made images afterwards, so this isn't a fail-safe way of ruling out AI as the source of an image.

It might take some detective work

Often when we fact check photos that have been edited (rather than AI-generated), it’s pretty easy to conclude that’s what happened. For example, when we fact checked this image of Suella Braverman apparently looking at a model of Auschwitz, we found the two original images that had been edited together. There was no doubt the composite image in question was fake.

It’s often harder to conclude that an image has been generated by AI. But if following the tips above produces some red flags, you might want to hold off sharing.

Featured image courtesy of Marten Newhall
