Framework for Information Incidents: Consultation summary

August 2021

Introduction

Since 2020, Full Fact has been working with internet companies, fact checkers and governments to create a new shared model to fight online misinformation: the Framework for Information Incidents.

The Framework introduces five levels of severity to build a shared understanding of severe incidents, helping to coordinate timely and proportionate responses to crises. It also proposes a set of the most common challenges that emerge for those trying to find and distribute reliable information and/or tackle bad information, and a set of possible shared aims and complementary responses for organisations to consider when developing a joint response to information incidents.

From March to June 2021, Full Fact ran a consultation seeking feedback on the draft Framework. In particular, the consultation looked at the utility and clarity of the Framework’s five-level severity scheme and whether the categories of information incident are complete and distinct. The consultation also asked for feedback on the set of common challenges, and corresponding aims and responses, and whether the response methodology is logical and realistic.

In addition to running the global consultation, Full Fact convened a group of UK stakeholders to discuss and improve the use of the Framework in a national context. Full Fact also developed a simulation training exercise based on the Framework, which was delivered to 200 participants at a WHO training conference. This helped to test the practical utility of the Framework with people tackling health misinformation from different industries, and to identify improvements.

The appendix contains a list of every organisation or independent person who gave feedback in the consultation period.

Overview of feedback

This document summarises the responses to our consultation and the feedback provided in that period through other channels. The document is split into two halves:

  • Substantive feedback on key concepts which will take time to work through
  • Straightforward suggestions to strengthen the Framework which can be integrated quickly

The Framework was welcomed by respondents, who supported closer cooperation between the different actors involved in tackling misinformation, and felt that it was a good step towards defining the term ‘information incident’ and recommending proportionate responses in a way that serves the needs of the wide variety of stakeholders involved.

We received 29 consultation responses. The feedback fell into two types: straightforward suggestions to strengthen the Framework, which can be integrated very quickly, and more complicated challenges to our thinking, which will take time to work through and fold into the next iteration.

Throughout the development and consultation phase, several challenges emerged regarding the application of a framework like this globally. For example, respondents disagreed about removing harmful but legal content. One respondent asked us to avoid recommending removal of content in general terms as this policy is widely opposed across Latin America; meanwhile US and UK respondents questioned why removal of content was not listed as a recommended response, citing harmful Covid-19 misinformation as proof of the need.

Some respondents felt that all five of the Framework’s levels are easier to apply in stable democracies with a robust civil society sector. In countries where there is a generalised level of disinformation distributed by state and state-backed actors, counter-disinformation organisations could be constantly in a vigilant mode at Level 2 or above, without ever dropping to Level 1. This is a real possibility, and it raises the question of whether counter-disinformation workers in these contexts are receiving enough support and resources to tackle the challenges they face.

Another respondent argued that the Framework should be able to account for how many information incidents there are at a given time, to recognise a new one when it appears, and to tell whether an existing one has gone away. While we agree with the latter two uses, the Framework is primarily intended as a practical tool that enables those working to counter disinformation to raise concerns about potential emerging and existing crises, and to collaborate with their peers within a shared terminology. Counting the number of incidents globally may be useful for understanding retrospectively how crises emerge, but it is not the aim of this Framework.

Part 1: Substantive feedback on key concepts

Five levels of severity endorsed with suggestions for improving escalation and de-escalation criteria

The Framework proposes five levels of severity, starting at Level 1, which describes a business-as-normal situation (in the majority of contexts), and moving up to Level 5, which describes a rare, serious and sustained information incident such as the beginning of the Covid-19 pandemic. Overwhelmingly, respondents endorsed the five-level system, while raising a valid question over the global applicability of Level 1, identifying a need for clearer criteria for moving between certain levels, and giving some clear messages on who should have responsibility for deciding the severity level of an incident.

Some respondents suggested that the criteria for moving between levels should be developed into a combined scoring system. Full Fact tested a variety of scoring systems with many different indicators early on in the research phase, and did not feel that a combined scoring system distinguished between different incidents in a way that could help users to better understand the challenges at hand or determine next steps.

Scope of Level 1 and its applicability globally

Several respondents said that Level 1 should capture and deal with the build-up of misinformation over time, as a misinformation-filled information environment is fertile ground for the rapid growth of crises when they occur. For example, as one respondent argued, platforms’ tacit permission for health misinformation to build for many years prior to Covid-19 meant countries were ill-prepared to deal with the onslaught of misinformation brought about by the pandemic. One respondent felt that the phrasing of ‘business as normal’ at Level 1 - and the severity level system more generally - could dissuade users from considering preventative measures such as better regulation or audience resilience.

Another respondent argued that Level 1 is not necessarily realistic in many countries, where a high level of disinformation from states or state-backed actors is constant, although they believed that certain incidents may exacerbate the production/spread of disinformation. In these countries, a constant Level 2 might be the norm, where there is a need for regular discussion about emerging incidents, and whether something is becoming serious enough for a group of actors to address.

A question also emerges from this type of constant Level 2 standby scenario: whether counter-misinformation actors in those countries are getting enough support and engagement from those with the power to slow down the spread of misinformation on a mass scale, such as internet companies.

Whether Level 1 or Level 2 is the norm in a given country, neither implies that there is no harmful or significant misinformation circulating.

Clearer criteria needed for moving between levels, especially Levels 2-3

In a convening of UK stakeholders, it was felt that indicators for moving between Level 2 and Level 3 should be made clearer. Full Fact developed clearer criteria for moving between those levels and conferred with colleagues in several organisations on this in the consultation period. These criteria are:

  • Content has significantly higher velocity, views or engagement than comparable content would typically have
  • Claim clusters and/or narratives are appearing across multiple platforms
  • Hashtags/search trends are emerging related to the misinformation; there may be evidence of coordinated inauthentic behaviour with minor/growing traction
  • Misinformation may be affecting people’s decisions or behaviour
  • Misinformation may be contributing to concerning long-term trust and/or participation trends (e.g. undermining health service provision)
  • Staff may need to move into response team/mode and put more routine scheduled work on hold, but are still able to respond to the majority of false claims (at Level 4 and Level 5 this is likely to no longer be the case)

One respondent suggested that the next iteration should contain simplified criteria for moving between levels, based on the single most important criterion reflecting the severity of an information incident. However, the majority of other respondents and people we engaged with asked us to develop more criteria, as they felt this would make severity ratings quicker and easier to decide.

No single body should hold responsibility for deciding a level

Generally, respondents felt that no single body should make a decision about an incident’s severity, but that a cross-sector group could do so, drawing in representatives from civil society (including fact-checkers), local and national government (or former government representatives), the press and media, relevant experts and academics, and the tech industry.

It was felt that ultimately decisions should be made by organisations with high levels of funding transparency. Tech companies were not seen as having proved themselves credible within the decision-making process, and were seen as having a financial interest in downplaying the severity of incidents. It was also seen as inappropriate for governments to unilaterally declare a level of severity.

Respondents suggested that the Framework would be best suited to a flexible approach in which organisations can make their own determination and then confer with others where possible. Several thought it was unlikely that a single decision-making model could be developed and applied internationally, as the nature of independence and legitimacy varies from country to country. There was also scepticism that different actors would agree on the same severity level, since each actor has different incentives built into making those judgements, and their judgements are also informed by differing capabilities to respond.

Defining an information incident and clarifying the function of incident categories

The consultation version of the Framework describes information incidents as “incidents that are likely to have a substantial and material impact on the people, organisations and systems that consume, process, share or act on information”. It gives nine categories of incidents that might be covered by the Framework, such as “Unexpected disasters with high, wide reaching impact” or “Human rights or freedom of expression abuse.” These categories are intended only to illustrate the situations the Framework might be applied to, rather than to inform the assessment and response process.

The categories of incident were broadly accepted, and consultation feedback highlighted three key interlinking issues to be addressed:

  • Respondents had different interpretations of an information incident and therefore what the Framework should cover.
  • The categories of incident do not have an impact on determining the severity level or guiding the response; this ambiguity could cause confusion for users unless clarified.
  • There are some interlinkages between incident categories, and some categories of incident could be subsumed by one another in practice.

A working definition for an information incident

Some existing definitions originate from cyber security but are used primarily to describe disinformation campaigns. For example, the Adversarial Misinformation and Influence Tactics and Techniques framework describes misinformation incidents as: “large-scale neuron hacks powered by hijacked and distorted narratives, using the deliberate promotion of false, misleading or mis-attributed information”. Full Fact’s Framework requires a definition which encompasses the accidental or well-intentioned dissemination and sharing of false claims, as well as the intentional or hostile sharing of false information.

Our working definition for an information incident within the Framework context is: “a cluster or proliferation (sudden or slow-onset) of inaccurate or misleading claims and/or narratives related to and/or affecting perceptions of/behaviour towards a certain event/topic happening online or offline.”

Clarifying the function of and distinctions between incident categories

There is a wide variety of situations the Framework can apply to, since many real world events can fuel misinformation and vice versa. The Framework presently sets out nine categories of information incident which are intended to illustrate the types of situations the Framework might be applied to. These do not have an impact on how users grade an incident’s severity, or build a response plan, but this fact needs to be made clearer, as consultation feedback highlights.

Respondents also pointed out that there are some interlinkages between categories of incident, and that in some situations categories might subsume one another. For example, planned political events might count as long-horizon events, and nationally significant events might also present an opportunity to exploit polarisation. Freedom of expression abuses might occur within or because of other categories of events, such as war or mass detainment. These categories - and crucially their function - need to be clearer, but it is inevitable that some phenomena will merge with each other as events develop.

Recommended responses need to correspond to the timeline of certain events and to different severity levels

The Framework recommends a set of responses to mitigate the impact of information incidents, but does not limit users to a predetermined list, as often local users will be best placed to determine response measures. Feedback highlighted the need for long-term and short-term responses for certain scenarios, and for recommended responses to be developed across different severity levels.

Some respondents remarked that some recommended responses would only combat challenges in the long term, not the immediate term, and vice versa. Others said that some recommended responses - such as emergency funding for media organisations - would come at too late a stage of a crisis to make the desired difference. We plan to revisit the chronology of responses to ensure that they work to the likely timelines of different crises.

The current Framework does not recommend different responses based on the severity of the incident. The next iteration should develop the recommended responses further to give examples of how responses might intensify or relax as severity changes, for example indicating at what point core actors might helpfully bring into the response organisations and groups which do not typically deal with misinformation day to day.

One respondent felt that, rather than cascading countermeasures in response to escalating severity, the Framework might be more effective by pairing countermeasures to the capabilities and resources of different organisations. We will consider how to integrate this using publicly available information about measures which have been trialled already or widely used.

Part 2: Straightforward suggestions to strengthen the Framework

As well as the above more substantive points that will take time to work through, there were many smaller suggestions which can be implemented almost immediately. These include:

  • Additions to and improvements on existing incident categories
  • Suggestions for developing the criteria for moving between levels
  • Suggestions for additions to the common challenges we proposed
  • Additions to the recommended response measures

Additions to and improvements on existing incident categories

As outlined above, the incident categories are not part of the process for assessing the severity of an incident and responding to it - but this needs to be clarified. Therefore we welcome these suggestions and will use them to improve how we indicate to users when and where the Framework can help them mitigate the effects of information incidents.

Respondents made good arguments for creating several new incident categories:

  • Incidents that undermine the ability of frontline workers and service providers to carry out their work (e.g. during the Ebola epidemic, misinformation circulated in the Democratic Republic of Congo saying that aid workers were bringing Ebola to communities, undermining people’s willingness to use health services and leading to attacks on aid workers).
  • Incidents relating to critical infrastructure (e.g. the Darkside ransomware attack on Colonial Pipeline may have contributed to panic buying of gasoline, resulting in shortages).
  • Incidents which compound an existing conspiracy theory (e.g. when the Ever Given was grounded in the Suez Canal, a QAnon theory emerged linking the ship to the Clintons, claiming that the vessel had been used for human trafficking).
  • Engineered and accidental virtual events which in themselves create information environments that require new, robust responses (e.g. the GameStop short squeeze resulted in billions of dollars being lost and shifted the financial information environment in both the short and long term).

There were also suggestions for adding detail to and altering existing categories: for example, adding climate change and vaccine hesitancy to the long-tail or long-horizon situations that might spark or fuel misinformation crises or be affected by proliferations of false claims, and distinguishing between deliberate attacks and unexpected disasters, on the basis that the mis- and disinformation surrounding each can differ in intent.

Strengthening the indicators for moving between different severity levels

There was consensus that the criteria for moving between different severity levels could be further developed to help users make decisions more quickly. The key improvements we took from consultation feedback include:

  • Add an indicator on engagement, as this can help users assess the impact of early incidents and show whether content is having a particular effect on some audiences but not others.
  • Give more prominence to the type of accounts spreading misinformation, for example severity should increase when people engaging with misinformation have a higher reach in a certain micro or macro community, or when accounts have been identified as superspreaders.1
  • Clarify the urgency indicator to reflect the difference between increased risk of harm and time-sensitivity, for example voting mis- and disinformation about imminent elections is urgent in a different way to hateful mis- and disinformation which could have harmful or dangerous effects on marginalised groups.
  • Remove the indicator on the level of collaboration needed, as this varies depending on different actors’ response capabilities and legal constraints.

Several respondents noted that some indicators are hard to apply in the moment - for example, misinformation spreading on closed messaging apps is hard to spot, as are sophisticated coordinated campaigns that go beyond posting identical text, images or videos from young accounts. We will acknowledge this and point to existing guidance in the next iteration.

Additions and improvements to the common challenges across crises

The Framework identifies groups of challenges which can occur across different types of misinformation crises, such as threats to freedom of expression, information vacuums, a quickly changing situation, or immediate threats to public order and safety. We give examples for each overarching challenge to help users identify and understand the problems they might be facing.

While the overarching categories of challenges were widely accepted, respondents had numerous suggestions of examples we could include under each headline challenge. For example one respondent pointed out that misinformation crises can affect whether law enforcement understands rules and situations correctly, undermining public order and safety.

Another respondent suggested adding ‘reaching through language barriers to those affected by misinformation’ to the overarching challenge of ‘Difficulty disseminating or communicating information’. We will endeavour to include more of these excellent examples in the next iteration without compromising the utility of the Framework.

Some respondents proposed new overarching challenges, including disinformation that is supported or funded by a state or state-backed actors, although this could potentially come under a revised version of the challenge ‘Unhelpful behaviour by influential public figures.’

Suggestions for enriching recommended responses

The Framework recommends a set of example responses to mitigate the impact of information incidents, but does not limit users to selecting only the measures we have highlighted. As described above, we will be developing this ‘menu of responses’ and are grateful for the suggestions from consultation respondents which will help us to do this. We have included a few examples below, rather than every single suggestion made:

  • Provide metrics on takedowns, censorship
  • Targeted education of influencers
  • Invest in burst capacity for influencers to reach diverse target audiences
  • Dissolve filter bubbles affecting at-risk populations by injecting content into an influencer’s network during major public health efforts to promote vaccination
  • Regulation and convening by governments in open societies to encourage platforms to share misinformation-related learnings and data with researchers and each other

Additional guidance documents that are needed

Respondents suggested that we should develop guidance documents to support users in certain aspects of the Framework. These include guidance on:

  • What level and type of evidence users should gather to account for why they have chosen a particular level.
  • A clear set of indicators for moving between all levels, as well as metrics to help determine when an incident can be downgraded or deemed to be de-escalating.
  • A protocol for adapting and developing the Framework in anticipation of future changes to the information environment.

Issues outside of the Framework’s scope

Some respondents said that the Framework should explicitly consider how to improve or increase information literacy, and should argue for the introduction of strong regulation. While these are important topics, this Framework is intended primarily to help governments, internet companies, media, civil society and others collaborate to respond efficiently and proportionately to information incidents and crises, rather than to address the wide range of challenges present in the global information ecosystem. We welcome initiatives which engage with these other fundamental issues, such as increasing internet company transparency and effective enforcement of policies, or improving evaluation of information literacy programmes.

Appendix

There were five responses from people who do not work for an organisation. Out of these five, three gave details of former employment in the UK National Health Service (NHS), local government and the medical charity sector. We are very grateful for these responses, as well as those we received from the following organisations.

  • US Agency for Global Media
  • Ranking Digital Rights
  • Duke Reporters' Lab
  • Internet Society India
  • Pagella Politica/Facta.news
  • Cognitive Security Collaborative Canada
  • FairVote UK
  • Tony Blair Institute
  • Faktoje.al
  • International Committee of the Red Cross
  • Center for Countering Digital Hate
  • Ofcom
  • Institute for Strategic Dialogue
  • Global Disinformation Index
  • Media Policy Project, LSE
  • Twitter
  • Facebook
  • BBC/Trusted News Initiative
  • YouTube/Google
  • Chequeado
  • UK Government (Department for Digital, Culture, Media and Sport)
  • MSI Reproductive Choices (formerly Marie Stopes International)
  • Logically.ai
  • Meedan

Glossary of terms

Influence operations
There are different interpretations of influence operations, but most encompass the following features: organised or coordinated efforts to manipulate or corrupt public debate or influence audiences for a strategic political or financial goal, often involving the perpetrator(s) concealing their identity via fake accounts or pages, and engaging in deceptive behavior.2
Information disorder and mis-/dis-/malinformation
  • Misinformation is when false information is shared, but no harm is meant.
  • Disinformation is when false information is knowingly shared to cause harm.
  • Malinformation is when genuine information is shared to cause harm, often by moving information designed to stay private into the public sphere.3
False narratives
This phrase is used differently in different contexts, but here we use it to refer to stories that connect and explain a set of events or experiences; they are formulated through news reports or online posts in multiple places, contain multiple false, misleading or only partially correct claims, and contribute to an inaccurate picture of a topic, event, institution or group of people. The emphasis here is on what people end up believing, as well as what is intended by, for example, activists, politicians or coordinated campaigns strategically disseminating information.
Claim clusters
Clusters of claims that are related to each other, e.g. around a certain topic (such as Covid-19 vaccine side effects).

Footnotes

1 https://www.newsguardtech.com/superspreaders/

2 https://www.rand.org/topics/information-operations.html; https://carnegieendowment.org/2020/06/10/challenges-of-countering-influence-operations-pub-82031; https://about.fb.com/wp-content/uploads/2021/05/IO-Threat-Report-May-20-2021.pdf; https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c#page=17

3 https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c#page=17

Full Fact fights bad information

Bad information ruins lives. It promotes hate, damages people’s health, and hurts democracy. You deserve better.