Thank you for the lasagne, r/ukpolitics

30 September 2025
A screenshot of (the other) Chris Morris in Brass Eye

I’ve officially dipped my toes into the world of a Reddit Ask Me Anything (AMA).

I, Chris Morris (no, not that one), was invited by r/ukpolitics to host a session where users could ask questions on fact checking, misinformation and politics. Apart from getting writer's cramp, it turned out to be a really good discussion.

We covered a lot of ground

We tackled some meaty topics (quite literally in the case of the infamous Wembley lasagne): how much tougher fact checking could become over the next decade, how we decide which claims to check, why some urban myths refuse to go away, and the role AI can play in strengthening the fight against misinformation.

My favourite questions


Hi Chris, it's been an awful decade for discerning fact from fiction so thank you for dedicating yourself to making that a bit easier. How much harder do you think the job is going to become in the next ten years?

And, if I may ask, during the next decade what is the one thing you wouldn't want to happen?

I mean I think we have to accept that the job is going to become harder as technology makes it easier and easier to create convincing fakes. Don’t forget just a few years ago you needed an awful lot of computing power and know-how to make even a vaguely plausible fake video. Now we can all do it in a few seconds with the device in our pocket.

So if tech is going to be used to deceive people even more effectively, we also have to make sure we use its vast potential as a force for good. That’s why we’ve spent years building AI tools to help small teams of people deal with false and misleading information at internet scale. Part of the answer is also education - and I’m talking about lifelong learning, not just for kids - making sure people have the critical thinking skills to navigate new online worlds.

As for the one thing I really don’t want to happen: I don’t want to reach a point where the majority of people don’t believe anything they read or see or hear anywhere. If people don’t believe you, they won’t trust you. And, in a democracy, if there’s no trust there’s no consent. That worries me. Scepticism is excellent, but blind cynicism takes us down some dark paths.


I'm extremely worried about misinformation in the UK and how charged everything can become. Like Terry Pratchett said, a lie can get halfway round the world before the truth has its boots on.

When nobody seems to care anymore when the truth does come out, how do you combat misinformation?

What would you do if you were in charge of the country?

Love a Terry Pratchett question, even if he wasn't the first to say it. Yeah, the scale of misinformation can feel overwhelming but we know there are plenty of people out there who care about evidence-based information. Every day we see people reading our fact checks, engaging with us on social media and asking us to check things! That tells us there is an appetite for what we do.

As I wrote in another answer, numerous academic studies have shown that when fact checkers label content this restricts the spread of misinformation. We also know that where fact checkers have been able to intervene earlier in the life-cycle of a false or misleading narrative - 'prebunking' - this can help to throttle misinformation before it gains viral currency online.

I think technology is also part of the answer. We know AI is turbo-charging the spread of false and misleading information, but the only way we can fight against that at internet scale is to use tech for good. That's why we've built AI tools which allow small groups of people to monitor vast amounts of information. We've trained an LLM to recognise the language in which claims are made, and our tools can monitor millions of sentences a day. We're also using GenAI to suggest which claims might be the most harmful (that could be damaging health misinformation or financial scams) and therefore the most important to prioritise.
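To give a flavour of the claim-spotting step: our real tools use a trained language model, not hand-written rules, but as a deliberately simplified sketch (the patterns and example sentences below are invented for illustration), a heuristic filter for "claim-like" sentences might look like this:

```python
import re

# Toy heuristic for spotting "claim-like" sentences: flags sentences that
# contain a number, a comparative, or a change-of-quantity verb, since these
# often carry checkable factual assertions. Illustrative only - a production
# system, like the one described above, would use a trained language model.
CLAIM_PATTERNS = [
    re.compile(r"\d[\d,.]*"),                                    # numbers, percentages
    re.compile(r"\b(more|less|fewer|higher|lower)\s+than\b"),    # comparatives
    re.compile(r"\b(rose|fell|increased|decreased|doubled|halved)\b"),
]

def looks_like_claim(sentence: str) -> bool:
    """True if the sentence matches any claim-like pattern."""
    text = sentence.lower()
    return any(p.search(text) for p in CLAIM_PATTERNS)

sentences = [
    "Unemployment rose by 120,000 last quarter.",   # number + change verb
    "I really enjoyed the debate last night.",      # opinion, not checkable
    "Crime is higher than it was a decade ago.",    # comparative
]
flagged = [s for s in sentences if looks_like_claim(s)]
# flagged keeps the first and third sentences only
```

The point of running a cheap filter like this over millions of sentences a day is triage: humans and heavier models only ever see the small fraction of text that might actually contain a checkable claim.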

But - big but - we always keep humans in the loop. Anyone who tells you AI can fact check all on its own hasn't taken account of the need to add context and caveats.


Hi Chris, thanks for doing an AMA -

The last year has been pretty hard for ONS, who have come under sustained fire for the quality of labour market data and other economic statistics. Immigration figures are only now catching up to reality after 10 years of grumbling. Response rates are plummeting around the world, as part of a worrying trend that was already observable in the data ten years ago.

What do you feel is the correct way to account for ONS data quality when fact-checking? When good-faith and bad-faith actors can both cite significant and real quality issues, a fact-checker can lose credibility through the unquestioning use of national statistics or risk having their work handwaved away for relying on statistics that 'everyone knows are rubbish'.

Hi - it's a really good question. Generally speaking, I think we’re really lucky to have access to the official statistics that we do in the UK - certainly when you speak to fact checkers from elsewhere in the world, it’s often striking how different the situation is where they’re based. But as you say there has clearly been a series of issues with ONS data identified recently, and it has work to do to rebuild its reputation.

So, we frequently cite ONS data in our fact checks, as they’re often one of the best sources available to us, and we will continue to do so. But where there is concern over the quality of ONS data - for example with the Labour Force Survey - we’ve made a judgement call over how much to rely on those figures, and have caveated our use of them (e.g. in this employment explainer).

Where necessary we sense check how we use ONS data with other experts too - for example, talking to the Migration Observatory as well as relying on the ONS net migration figures.

We’ve also written directly about some of the issues seen with ONS data, such as an error with April’s inflation figures which prompted confusion and which we covered here.

But for all the issues with the ONS, which need to be addressed, I'd rather have that challenge than the alternative of politicised statistics which are under ruling party control, or the example of Donald Trump firing the head of the Bureau of Labor Statistics when he didn't like the job numbers.


How accurate is the government tracker and how well is the government doing on average with regards to promises/commitments achieved vs promises/commitments unkept?

We're happy that what we have in the tracker is accurate - it’s put together in the same way as all our fact checks, and to all the same standards of accuracy and impartiality. But I suspect your underlying question is really about how much it can tell us about how the government’s doing, and that’s a bit trickier.

We’re clear that the ratings we award in the tracker are often a judgement call, and also that the tracker can only tell us so much - we’ve looked at a sample of the promises the government has made but it’s only a sample. We will add more when time and resources allow!

So while it’s tempting to just look at the numbers at the top of our tracker - and of the 80 pledges we’ve looked at so far there’s been at least some progress on roughly two thirds - they don’t tell the full story. They come with all sorts of important caveats, and there are completely different ways you can monitor a government's performance, e.g. by tracking key metrics rather than pledges, which might give a very different picture.

Our politics editor’s just written a blog on this exact point actually here.

Finally it's worth acknowledging what we all know - politics is as much about the 'how do you feel?' question as it is about specific data. So this is one measurement, but only one.


What kind of process do you use to make sure that Full Fact doesn't introduce its own bias (or that of its staff) into the content it puts out?

I'm interested in how you decide which facts to check as well as how you go about checking them and publishing the results. Rigorously checking statements made by one politician while ignoring those made by another would be its own kind of bias even if the fact checking itself were done fairly. (Note I'm not suggesting you do this - I'm just interested to know how you go about making sure it can't happen)

Well obviously as soon as you open your mouth or start to type you are starting to make subjective choices. But there’s a difference between holding a professional opinion and a personal one. We focus on the facts, and impartiality is one of our charitable objectives. So I’m pretty confident we have good systems to minimise the risk of bias influencing our work.

When it comes to how we choose what to fact check, we deliberately look at a balance of claims from across the political spectrum in the UK, and from different sides of key debates. We don’t just pick one side. And we certainly don’t support one side or another. We also have constant discussions about accuracy, balance, and evidence. We’ve written a blog about this here.

We’re also fully transparent about our sources and methods - everything we do is open to scrutiny. So for example we always provide links to the main sources for our fact checks - and that means anyone who really wants to can reverse engineer what we’ve done. Each fact check also goes through multiple stages of review before publication. And when we get something wrong, we correct our mistakes.

More broadly, we have a cross-party board of Trustees, a conflict of interests policy, and restrictions on staff political activity for all our staff. So, we try really hard to protect our independence and impartiality.

You have the party line!


Why do you think certain urban myths, like migrants eating swans, prove to be so enduring and difficult to debunk?

Well it’s partly because urban myths are usually good stories, and everyone loves a good story. Talking of swans, we looked at Nigel Farage’s claim about migrants eating them last week - you can see our full fact check here.

We found there wasn’t any evidence to support his claim about migrants eating swans in Royal Parks, and the Royal Parks charity says no such incidents have been reported. So that's pretty clear.

On the other hand there have been various media reports over the years about swans being caught and eaten elsewhere, but in many cases the identity of those responsible was not verified. And that is where it all gets a bit fiddly.

The broader point is that there are certain types of claims which are persistent but intrinsically quite difficult to prove or disprove. So while we may have found very few verified cases of migrants eating swans, that doesn’t reliably tell us how often it does or doesn’t happen. And so in the absence of good data it’s sometimes difficult to say definitively that such a claim is false, and that gives room for claims which play into certain narratives to endure.

It’s worth adding that even when we can say definitively that a claim is false or misleading, we often see it crop up again - we call these ‘zombie claims’, because no matter what we do they simply refuse to die. There are some claims we find ourselves correcting over and over again - whether it’s a fake Donald Trump quote or a dodgy claim about MPs’ breakfast expenses. The internet has a short memory…


What do you think about using AI in fact-checking? For example, Grok on X, where people ask “is it true?” Quick answers can be helpful, but what happens to the role of fact-checkers? Do you think tools like Community Notes and AI bots could replace them soon, as some platforms seem to want? And are there any safeguards or precautions Full Fact are considering?

Gen AI chatbots, like Grok, can be a quick and useful source of information but the devil is in the detail. And we all know they get things wrong. If the bot has access to enough information and data, and what it’s being asked to assess is relatively straightforward, that’s fine. Grok may also link to reliable source material, including fact checkers.

But, where the bot is unable to make a clear and straightforward assessment or is engineered to respond to inquiries in a particular way, it’s then replacing fact checkers in a harmful way.

As for Community Notes, online platforms like X and Meta are keen on using them as a replacement for working with fact checkers, but we have some serious concerns about the reliability and speed of these notes at the most critical times, when misinformation is escalating. We wrote about this recently.

One of the problems with Community Notes is that they rely on a bridging algorithm which prioritises consensus rather than factuality. It's much harder to reach consensus on the most controversial issues, which are usually those where getting the facts right is most important.
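To see why consensus-based scoring struggles on contested topics, here is a deliberately simplified toy model (X's real system uses matrix factorisation over millions of ratings, not a two-cluster average, and all the numbers below are invented for illustration):

```python
# Toy "bridging" scorer: a note only scores well if raters from *different*
# viewpoint clusters both find it helpful. A note that one camp loves and
# the other rejects scores zero, however factual it is.

def bridging_score(ratings):
    """ratings: list of (cluster, helpful) pairs, helpful in {0, 1}.
    Returns the minimum helpful-rate across clusters, so agreement
    within only one camp is not enough."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    return min(sum(votes) / len(votes) for votes in by_cluster.values())

# An uncontroversial correction: both camps mostly rate it helpful.
uncontroversial = [("A", 1), ("A", 1), ("B", 1), ("B", 0)]

# A note on a polarising topic: camp A endorses it, camp B rejects it.
polarising = [("A", 1), ("A", 1), ("B", 0), ("B", 0)]

bridging_score(uncontroversial)  # 0.5 - clears a typical threshold
bridging_score(polarising)       # 0.0 - never shown, regardless of accuracy
```

The design choice is deliberate - it filters out partisan pile-ons - but its side effect is exactly the problem described above: on the most divisive claims, where accurate context matters most, cross-camp agreement is rarest and notes are least likely to appear.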

So, we think there are some clear improvements that X and Meta could introduce to improve Community Notes, drawing on our experience of fact checking.



Chris, was the Wembley Lasagne real after all?

Good to see this question being consistently upvoted. I once had a Wembley lasagne. True story. It was on the day that Southampton beat Leeds in the play off final to return to the Premier League. That didn’t work out so well, so maybe I blame the lasagne.


Hi Chris, with the current state, and trajectory, of media and online discourse I feel wider awareness/adoption of non-partisan fact-checking organisations and services are critical to global democracy. It's an area I'd be very keen to work in at some point in the future. What advice would you give to people looking to work with the likes of Full Fact and what's the profile of a great fact-checker?

Good question… and good luck with whatever you end up working on. In my experience the one thing you can guarantee about a great fact checker is that they won’t think they’re a great fact checker, because they’ll always be second guessing their last fact check and thinking about what angle they’ve missed. Fact checking is a humble activity (most of the time)...

That said, there are obvious things we look for. Critical thinking is absolutely key. Great research skills, clear writing. The ability to approach a topic impartially, and fairly.

It’s crucial that you have an interest in how the world works, and a decent understanding of news and current affairs. In all of our fact checker interviews we ask a series of general knowledge questions, not because we expect our fact checkers to be walking talking Wikipedias, but because if you don’t have a rough idea of how many people live in the UK, or the size of the NHS waiting list, or the cost of an iPhone, it can be much harder to spot claims which are likely to be worth fact checking.

Finally, increasingly our fact checkers are using specific skills to combat online misinformation - OSINT research and geolocation. One of our team, Charlotte, wrote a blog about how she uses those kinds of skills recently here.


What happens now that the AMA is over?

Fact checking is a team sport, and now we’ve introduced the sub to our process. This is your chance to flag any claims you think we should be looking at.

Tip-offs and recommendations are one of the best ways to contribute to the work Full Fact does.

If you’re looking for something a little more regular, you can also get updates by signing up to our weekly newsletter.

While the AMA is over for now, I thoroughly enjoyed taking part, and I would be open to doing another one in the future. If you’ll have me back.

Our content is free to read but not to write

If I can leave you with one closing thought, it’s this: we’re an independently funded charity, which means we keep the lights on through donations from organisations and individuals.

Full Fact relies on public support to remain impartial, independent, and secure.

Thanks again for having me!

Chris
