Review and reflections: Truth and Trust Online 2019

21 October 2019 | David Corney

On 4-5 October 2019, more than 200 people attended the first Truth and Trust Online (TTO) Conference to discuss how technology and AI can improve the trustworthiness of online spaces.

Full Fact was one of the organisations behind the conference, along with major tech platforms, universities and other non-profits.

Our venue: BMA House

TTO is unusual in bringing together practitioners, technologists, academics and platforms to share and discuss useful technical innovations and research in the space.

Below we summarise the event and reflect on some of the key takeaways; a list of all the contributed papers is available in the proceedings.

Session 1: Industry Perspectives

The conference was opened by Nishant Lalwani (Luminate). He showed that hundreds of millions of dollars of advertising money flow every year to sites known to push misinformation and disinformation. One response from Luminate is to fund tech companies that can identify and push back against such sites at web scale. He explained that they also fund independent media around the world as a pillar supporting a just and fair society.

The rest of the morning consisted of industry talks from three of the biggest tech companies in the space. Jerome Pesenti (Facebook) talked about using AI to proactively identify problematic content before users notice it: “Deep Fakes are a social and legal problem, not just a technical one”.

Dan Brickley (Google) discussed the ClaimReview schema: fact checks are hard to produce and risk being swamped by other content, so they should be made as visible as possible.
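For readers unfamiliar with it, ClaimReview is a schema.org markup format that lets publishers describe a fact check in machine-readable form so that search engines can surface it alongside the claim. The snippet below is a minimal, illustrative record assembled in Python; the URL, names and rating scale are invented for the example rather than taken from the talk.

```python
import json

# A minimal, illustrative ClaimReview record (schema.org/ClaimReview).
# All URLs, names and ratings below are made up for the example.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example.org/fact-checks/example-claim",  # hypothetical fact check page
    "claimReviewed": "The claim, as it was originally made",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Speaker"},
        "datePublished": "2019-10-01",
    },
    "author": {"@type": "Organization", "name": "Example Fact Checking Organisation"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly false",
    },
}

# Embedding this JSON-LD in the fact check page makes it discoverable by search engines.
print(json.dumps(claim_review, indent=2))
```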

Michael Golebiewski (Microsoft Bing) discussed search patterns that lead to misinformation or other problematic content. In particular, searches can fall into “data voids”: topics that credible sources don't cover, so the results that do appear come from unreliable sites.

Session 2: Fact Checking

The afternoon session on “Fact Checking, Verification and Spread” started with a panel discussion on “Factchecking on the Front Lines”, with Will Moy, Full Fact, UK; Olivia Sohr, Chequeado, Argentina; Lee Mwiti, Africa Check, South Africa; Govindraj Ethiraj, BOOM Live, India; and Farhad Souzanchi, FactNameh/ASL 19, Iran. Each panellist gave a brief overview of their experience of fact checking, followed by a Q&A chaired by Mevan Babakar (Full Fact).

From left: Will Moy, Rob Procter, Sharon Ly, Kate Starbird

The breadth of challenges faced by the panel was staggering. A change in government may lead to a change in how official data is made public, and whether it remains trustworthy; operating across multiple countries and multiple cultures presents its own problems; and holding autocratic regimes to account presents serious difficulties.

The discussion made it clear that different cultures have very different ways of sharing information, and therefore are affected by disinformation in different ways. When developing tools to help, it’s not just the language that needs translation, but potentially cultural aspects too.

Next, David Corney (Full Fact) spoke on “The Promise of Automated Fact Checking”, describing a new tool that surfaces recent claims so that fact checkers can decide which ones to check. It includes multilingual modelling, allowing the tool to be used by fact checkers in low-resource organisations and countries.

Dr Abigail Lebrecht (Mumsnet) spoke on “Integrating automation in an established moderation process”. She described Mumsnet's long-running forums and how they have relied on manual moderation, which they now plan to support with automation.

Roy Azulay (Serelay) presented his company's work on fighting deep fakes and other forms of misleading image manipulation. He described tools that provide verifiable photos and videos, which can reduce misinformation by preventing genuine footage from being presented in a false context.

Paolo Papotti (Eurecom) presented recent work on Explainable Fact Checking with Probabilistic Answer Set Programming, which uses a knowledge graph to both perform probabilistic inference and to generate explanations.

Tracie Farrell (KMI, Open University) discussed how personal values influence how we are affected by misinformation.

Leon Derczynski (IT University of Copenhagen) asked to what extent the stance that social media users take towards a claim can be used to determine the veracity of the claim itself. Often, if many people query or disagree with a claim, it turns out to be false, and this signal can be used in automated veracity prediction. He also highlighted the difficulties of working with non-English languages, which often have more limited technical resources for building NLP solutions.
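As a rough illustration of that idea, the sketch below aggregates stance labels over the replies to a single claim and flags the claim when most informative replies question or deny it. The labels, threshold and decision rule are simplified assumptions for the example, not the models discussed in the talk.

```python
from collections import Counter

# Toy stance labels for replies to one claim, e.g. output from a stance classifier
# trained on data such as RumourEval. Labels and threshold are illustrative only.
reply_stances = ["deny", "query", "support", "deny", "query", "deny", "comment"]

def predict_veracity(stances, threshold=0.5):
    """Flag a claim as likely false if most informative replies question or deny it."""
    counts = Counter(stances)
    sceptical = counts["deny"] + counts["query"]
    informative = sum(n for label, n in counts.items() if label != "comment")
    if informative == 0:
        return "unverified"
    return "likely false" if sceptical / informative > threshold else "likely true"

print(predict_veracity(reply_stances))  # -> likely false
```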

Finally, Scott Hale (Meedan & Oxford Internet Institute) discussed the real-world needs of fact checking and content moderation. Academics and practising fact checkers often have different goals. He urged all groups to work together over the long term, and to talk more!

Session 3: Trust and Credibility

Day two opened with a talk from Rasmus Nielsen (Reuters Institute for the Study of Journalism), reviewing the current state of trust in journalism, especially from a news consumer's point of view.

When reading news found via social media channels, many users don't remember the brand that produced the content, which profoundly affects trust (and how it can be built). He argued that communication is pragmatic, not just literal: meaning is not intrinsic to the content. If fact checking is to have a positive impact, its focus must therefore shift from testing the veracity of content alone to these broader consumer perspectives.

Rasmus Nielsen (Reuters Institute for the Study of Journalism)

The rest of the morning session continued to focus on “News and News Credibility”.

Judy King (BBC Monitoring) gave an overview of BBC Monitoring, which employs over 200 (mostly multilingual) journalists around the world to monitor disinformation. Expert local knowledge is vital, especially when combined with modern technology.

Jérémie Rappaz (EPFL Media Observatory) presented an analysis of the media landscape and showed that the choice of stories covered by news organisations is influenced by syndication and ownership, as well as location. This makes diversity of news sources a greater challenge.

Anja Belz (University of Brighton) talked about automated journalism and natural language generation. She showed that AI-generated text is still relatively easy to spot, so it poses little misinformation threat in itself. However, carefully constructed templates can let robo-journalists generate ready-to-publish text from newly published figures.
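To illustrate the template approach, the sketch below slots newly published figures into pre-written, human-vetted text. The wording, field names and numbers are invented for the example.

```python
# A minimal template-based "robo-journalism" sketch: figures are slotted into
# pre-written, human-vetted text. All values below are invented for illustration.
template = (
    "Unemployment in {region} {direction} to {rate:.1f}% in {month}, "
    "a change of {change:+.1f} percentage points on the previous month."
)

figures = {"region": "Exampleshire", "rate": 3.9, "change": -0.2, "month": "September"}
figures["direction"] = "fell" if figures["change"] < 0 else "rose"

print(template.format(**figures))
```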

Arkaitz Zubiaga (QMUL) analysed hoaxes on social media and their early detection. At least one celebrity death hoax goes viral every day, but a quick query against Wikidata is often enough to confirm whether the celebrity really has died.
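As a toy example of that kind of check, the snippet below asks the Wikidata SPARQL endpoint whether a date of death (property P570) is recorded for a named person. The endpoint and query language are real, but the check itself is deliberately simplified and the person named is just an example.

```python
import requests

# Query the Wikidata SPARQL endpoint for a date of death (property P570).
# Simplified example: real hoax detection would need entity disambiguation and more.
SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?dateOfDeath WHERE {
  ?person rdfs:label "David Bowie"@en ;
          wdt:P570 ?dateOfDeath .
} LIMIT 1
"""

response = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "death-hoax-check-example/0.1"},  # WDQS asks clients to identify themselves
)
results = response.json()["results"]["bindings"]

if results:
    print("Date of death on record:", results[0]["dateOfDeath"]["value"])
else:
    print("No date of death recorded - the report may be a hoax.")
```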

Martino Mensio (KMI, Open University) presented a comparison of the work of several organisations in assessing the credibility of different news sources. The agreement is far from perfect, suggesting we don't yet have an agreed universal measure of credibility.

Session 4: Stance and Extremes

After lunch, Kate Starbird (University of Washington) discussed patterns of disinformation found on Twitter and argued that the best unit of analysis in this area is the campaign, and not isolated pieces of information.

She showed how the modern notion of online disinformation has strong similarities with historic "dezinformatsiya" and can easily seep through online discourse undetected. Some time after her analyses of #BlackLivesMatter and #BlueLivesMatter activity, Twitter published a list of Russian bot accounts. Kate cross-referenced these and discovered that central voices on both sides of that debate were Russian bots. She also explained that people actually see more diverse information online than is often assumed, countering some concerns about filter bubbles.

Rob Procter (Warwick University) discussed whether centralised or decentralised responses are more effective at tackling online hate speech, and strategies for social media governance.

These talks were followed by a panel Q&A where Kate Starbird and Rob Procter were joined by Sharon Ly, Director of Engineering at Twitter. Sharon described some of her work on encouraging healthy conversations on Twitter, saying that "disinformation just muddies the water for everyone." She also explained how the use of Twitter varies greatly from country to country: in some countries, for example, it is commonplace to have multiple accounts for different conversations.

The presentations continued with Preslav Nakov (QCRI), who argued that raising awareness of misinformation can be part of an effective fight against it. Time is critical, as false information travels far faster online than the truth.

Tom Stafford (University of Sheffield) presented a psychological perspective of how we decide who and what to trust. He showed that even though cognitive biases are inevitable, we can still encourage people to seek and trust reliable information. One main source of distrust is the sense that someone is "not on our side", and recognising this may help fact checkers reach their audience more effectively.

The last talk was by Amy Sippitt (Full Fact), who reviewed recent research on “belief change” and raised challenging questions for all present. Trust in scientists is still high and growing; trust in journalists has always been low but has shown some increase in recent years. Many questions remain, including which interventions by fact checkers are most effective at holding individuals to account, and what factors influence the spread of misinformation.

Some reflections on TTO’19

The summary above doesn’t begin to do justice to the range and depth of information shared from the stage during the conference. But in between these presentations covering fact checking, journalism, automatic verification, AI and, of course, trust and truth, there was also plenty of time for chatting over coffee and food.

Bringing together technologists and journalists allowed many useful conversations to flourish and will hopefully continue to spark new ways of working together in the coming months and years. The challenge of disinformation will remain with us.

Trust in the news media is low and trust in news found via social media is even lower. People don’t base trust simply on expertise or confidence, but rather on who is apparently on their side. This is a great challenge to fact checkers globally: how can we earn the trust of our readers?

A different sort of challenge is found in the sheer scale of social media and digital news. Here, it seems that technology and AI are a necessary part of the fight against mis/disinformation. However, there is often a gulf between tools that are innovative and impressive on one hand, and the problems faced by practising journalists and fact checkers on the other. Cutting-edge AI presented at TTO’19 included hate speech detection, stance detection, claim detection, fact verification and automatic text generation.

It is not yet clear how these will help users find more trustworthy content or have confidence in the truth of online sources. Perhaps the convergence of these two areas will be a theme at TTO’20!

We would like to thank all the sponsors who generously supported the event, especially Google, Facebook, Twitter and Microsoft. See the full list.

Videos of all the talks will be available from late October. 

To stay up to date on automation at Full Fact, including TTO 2020, you can subscribe to our “automated fact checking” mailing list.

 

