How AI helps us detect 100,000 potential claims a day

2 April 2021 | Andy Dudfield

In 2019 Full Fact, Africa Check and Chequeado received a $2m grant from Google.org supporting work to build a new AI tool to fight bad information. We’re pleased to be able to share some of the results from this ongoing partnership.

Our experience over the last decade tells us that fact checking is hard and complicated. We know that when nuance is involved, humans are better than machines. But we are also clear that technology can help us be more efficient.

Full Fact’s Automated Fact Checking team is dedicated to ensuring our technology helps us identify the right things to fact check, makes fact checking faster, and makes each fact check we write work as hard as possible. To support these aims we were lucky to be joined by seven Google employees as part of the Google.org Fellowship. Together we developed new tools to help fact checkers spot potential claims online.

We can use this new machine-learning tool to automatically identify checkable “claims” across online and broadcast media. At the moment, this averages around 100,000 claims a day, 1,000 times more than we were able to detect previously.

“The technology built by Full Fact, Africa Check, Chequeado and the team of Google.org Fellows is an example of incredible global collaboration to tackle one of the world's hardest problems. The fact that the AI model has boosted the number of detected claims by 1000x across 4 languages and 3 continents is very impressive.”

Cong Yu, Team Lead, Google Fact-Checking.

So how does it work?

We define a claim as the checkable part of any sentence, whether it is made by a politician, a journalist or someone online.

There are many different types of claim: claims about quantities (“GDP has risen by X%”), claims about cause and effect (“this policy leads to Y”), predictive claims about the future (“the economy will grow by Z”), and more.

We have developed a claim-type classifier to guide fact checkers towards claims that might be worth investigating. It identifies and labels every new sentence according to the type of claim it contains (whether it is about cause and effect, quantities, etc.). This still produces a lot of claims, and we needed to do further work to help fact checkers identify the most important ones among them.
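To make the idea concrete, here is a minimal sketch of what a claim-type classifier can look like. It is illustrative only, not our production model: it trains a simple TF-IDF and logistic regression pipeline on a handful of hand-labelled sentences (all invented for this sketch), using the claim types described above.

```python
# A toy claim-type classifier: TF-IDF features plus logistic regression.
# This is a sketch, not Full Fact's actual model; in practice the training
# data would be thousands of sentences annotated by fact checkers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples, mirroring the claim types above.
sentences = [
    "GDP has risen by 2% this year",
    "Unemployment fell to 4.1% in March",
    "This policy leads to longer hospital waiting times",
    "Cutting the tax caused wages to rise",
    "The economy will grow by 3% next year",
    "House prices will double within a decade",
    "What a lovely morning it is",
    "I really enjoyed the debate last night",
]
labels = [
    "quantity", "quantity",
    "cause-and-effect", "cause-and-effect",
    "prediction", "prediction",
    "not-a-claim", "not-a-claim",
]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(sentences, labels)

# Label new sentences by predicted claim type.
for sentence in ["Inflation has increased by 5%", "Crime will fall next year"]:
    print(sentence, "->", classifier.predict([sentence])[0])
```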

To help address this we worked with our Google Fellows to build a new system, one that helps us focus our work even further on the most important claims.

It uses natural language processing to group multiple claims to one person. We all understand that “Johnson”, “the PM” and even “BoJo” refer to Boris Johnson, but teaching this to a machine was more difficult. Now that we have this running, we can easily and comprehensively list all the claims made by key individuals in the last 24 hours.
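In its simplest form, the grouping step can be pictured like this. The sketch below is a toy, not our system: real entity resolution relies on trained models and context, whereas here a hand-written alias table (an assumption for illustration) does all the work.

```python
# A toy version of grouping claims to one person via an alias table.
# Real systems resolve mentions with NER and coreference, not a fixed dict.
from collections import defaultdict

ALIASES = {
    "johnson": "Boris Johnson",
    "the pm": "Boris Johnson",
    "bojo": "Boris Johnson",
    "boris johnson": "Boris Johnson",
}

def canonical_speaker(mention: str) -> str:
    """Map a mention to a canonical name, falling back to the mention itself."""
    return ALIASES.get(mention.strip().lower(), mention.strip())

# Hypothetical (speaker mention, claim) pairs pulled from media monitoring.
claims = [
    ("Johnson", "GDP has risen by 2%"),
    ("the PM", "Crime is at a record low"),
    ("BoJo", "We built 40 new hospitals"),
]

by_speaker = defaultdict(list)
for mention, claim in claims:
    by_speaker[canonical_speaker(mention)].append(claim)

print(by_speaker["Boris Johnson"])  # all three claims, grouped together
```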

We are starting to use a search engine and AI to scan thousands of articles to find similar claims someone has made, even if they use different terms (e.g. $1B, $1 billion, $1,000,000,000).
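One way to picture the matching step: before comparing claims, numeric expressions can be normalised so that different surface forms of the same figure compare equal. The sketch below is illustrative only; the regex and multiplier table are assumptions for this example, not our production matching system.

```python
# A toy normaliser for money expressions, so "$1B", "$1 billion" and
# "$1,000,000,000" all reduce to the same number before claims are compared.
import re
from typing import Optional

MULTIPLIERS = {
    "k": 1e3, "thousand": 1e3,
    "m": 1e6, "million": 1e6,
    "b": 1e9, "bn": 1e9, "billion": 1e9,
}

MONEY = re.compile(r"\$\s*([\d,.]+)\s*([a-z]+)?", re.IGNORECASE)

def normalise(text: str) -> Optional[float]:
    """Return the first dollar amount in `text` as a plain number, or None."""
    match = MONEY.search(text)
    if match is None:
        return None
    value = float(match.group(1).replace(",", ""))
    suffix = (match.group(2) or "").lower()
    return value * MULTIPLIERS.get(suffix, 1)

claims = ["It cost $1B", "It cost $1 billion", "It cost $1,000,000,000"]
print({normalise(c) for c in claims})  # a single value: {1000000000.0}
```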

These kinds of techniques help highlight which of the hundreds of thousands of identified claims need fact checking, as well as providing links and tools for a fact checker to investigate the validity of a claim.

By design, our AI models work in multiple languages. So our fellow fact checkers in South Africa, Nigeria, Kenya and Argentina have all been able to publish new fact checks using these techniques.

And at Full Fact we recently published fact checks on the housing cladding crisis and vaccine side effects. These claims were originally detected using AI tools.

But these tools will not work without good access to data. We have previously called for improved access to reliable and thorough data from government and statistical bodies to support this work. And we will continue to do so.

