Using the Framework

Using the Framework components in practice is a five-step process. We have created template worksheets for those who want to use the Framework immediately.

Step 1: Convene response group

Core participants are those who tackle misinformation and disseminate accurate information on a day-to-day basis, such as fact checkers, online monitoring groups, technology companies’ information policy teams, official information producers, media organisations, information literacy providers, regulators or government officials.

Consider which other actors the incident response could benefit from, such as frontline services, healthcare authorities, industry bodies, human rights and humanitarian organisations, or community and minority representation groups.

Step 2: Determine severity level and identify challenges

Use the severity level criteria to establish how acute the information incident is. Not all the criteria will be visible in every situation. Disagreements about the severity level should not block the process, but should instead prompt discussion and information sharing to enable understanding of differing views.

Identify the key challenges presented by the incident, and be specific. One to three challenges is enough; otherwise there is a risk of diluted focus and an overly complicated response plan.
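To make this step concrete, the sketch below shows, in Python, one way a response group might record its agreed severity level and key challenges. It is a minimal illustration only: the class name, field names, criteria strings and validation rules are our assumptions, not the Framework's own definitions; the five-level scale mirrors the severity levels referenced in the case studies below.

    # Illustrative sketch only: names and criteria strings are assumptions,
    # not the Framework's own definitions.
    from dataclasses import dataclass, field

    @dataclass
    class SeverityAssessment:
        level: int                   # agreed severity, 1 (lowest) to 5 (highest)
        criteria_met: list[str]      # which severity criteria were observed
        challenges: list[str]        # one to three specific key challenges
        dissenting_views: list[str] = field(default_factory=list)  # logged, not blocking

        def __post_init__(self):
            # Encode the guidance above: a five-level scale, and one to
            # three challenges to avoid diluting focus.
            if not 1 <= self.level <= 5:
                raise ValueError("severity level must be between 1 and 5")
            if not 1 <= len(self.challenges) <= 3:
                raise ValueError("identify one to three key challenges to keep focus")

    assessment = SeverityAssessment(
        level=3,
        criteria_met=[
            "Multiple false claims spreading fast on one or more platforms",
            "Several influential accounts sharing false claims",
        ],
        challenges=[
            "Unclear or quickly changing situation",
            "Undermining public order and safety",
        ],
    )

Recording dissenting views separately reflects the guidance that disagreement about the level should prompt discussion rather than block the process.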

Step 3: Determine aims and responses

Next, identify joint aims: again, it is best to keep these to a total of three to maintain focus and avoid confusion. Responses can be lifted directly from the Framework or adapted; users will likely have their own ideas about how best to mitigate the information incident. It is important to be clear about who is intending to do what, and by when.

Step 4: Set evaluation goals and an initial time period

At this stage, participants should set evaluation criteria, such as key performance indicators, and agree an initial time period within which to execute responses before reconvening to review the effectiveness of the measures taken and the incident's progression.
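As a companion sketch for Steps 3 and 4, the structure below captures who intends to do what and by when, alongside the agreed evaluation criteria and review date. Again, the field names and example values are illustrative assumptions, loosely mirroring the template worksheets rather than reproducing them.

    # Illustrative sketch only: field names and example values are assumptions,
    # loosely mirroring the template worksheets rather than reproducing them.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class PlannedResponse:
        aim: str         # one of the (at most three) joint aims
        actor: str       # who is intending to act
        action: str      # what they have committed to do
        deadline: date   # by when
        kpis: list[str]  # evaluation criteria agreed in Step 4

    review_date = date(2024, 7, 1)  # hypothetical end of the initial time period

    plan = [
        PlannedResponse(
            aim="Restrict spread of harmful information",
            actor="Tech platforms",
            action="Short-term campaign to reduce circulation of harmful false content",
            deadline=review_date,
            kpis=["Change in shares of flagged content over the review period"],
        ),
    ]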

Step 5: Execute responses

As each actor executes their responses, it might become obvious that certain responses need adapting or changing completely: this should be communicated to the others involved.

After the initial time period is up, participants should reconvene and report on how well their responses have worked and whether any need adjusting. This is also a good point at which to reassess the severity level, in case a change affects which responses are needed or indicates that the incident is ending.

Case studies

Case study 1: stem rust crop failure (hypothetical)

At the end of an unusually hot summer, reports of stem rust, a serious yield-affecting disease caused by a fungus, begin appearing in Facebook groups for farmers in South East England. Soon, farmers’ unions declare an industry emergency: the disease has not appeared in such force since the 1950s. Theories begin to emerge online: environmentalist groups planted the disease; the government is trying to break the farmers’ unions; potatoes and rice will infect your garden. Several high profile politicians repeatedly claim that the government is manufacturing an emergency to make itself appear more electable in the spring elections.

The news is quickly filled with interviews of panic-stricken farmers begging the government to buy up fungicide to protect British crops. Front pages feature close-ups of wheat ears with lurid orange growths next to images of food made from UK grain. As #stemruststockpile starts trending on Twitter, hoaxes soon become reality and supermarket shelves are emptied of flour, bread, yeast and pasta. WhatsApp messages urge farmers and their allies to take to the streets and block roads into London.

Step 1

Recognising that an incident is occurring, representatives from internet companies, government, civil society, farmers’ unions, scientists and news media meet.

Step 2

The group agrees that the situation is a Level 3 incident, because there are:

  • Multiple false claims or narratives on one or more platforms spreading fast
  • Several influential accounts/pages sharing false claims and narratives (or accounts with high reach in a specific community being targeted)
  • Evidence that information is affecting the behaviour of a growing number of people

The key challenges identified include:

  • Unclear or quickly changing situation, specifically:
      ◦ Breaking news which reports unconfirmed information
      ◦ Official advice is changing quickly or official sources backtrack
  • Undermining public order and safety, or frontline/aid workers, specifically:
      ◦ False information creates potential for physical harm
  • Damaging behaviour by politicians or authorities, specifically:
      ◦ High profile figures repeat false claims or make conflicting statements

Step 3

Seven potential aims are suggested by the Framework:

  1. Contextualise information and provide alternative trustworthy sources of information
  2. Communicate and debunk effectively
  3. Restrict spread of harmful information
  4. Disseminate accurate information appropriately to public and affected groups
  5. Strengthen policies and/or regulatory environment
  6. Share information and insight to gain a shared assessment of the situation
  7. Plan and coordinate with others

Two aims are chosen as joint priorities:

  1. Contextualise information and provide alternative trustworthy sources of information
  2. Restrict spread of harmful information

Meanwhile, internet companies suggest they will run a rapid review of existing content policies to see if there are any improvements to be made to help the situation: they agree to do this in consultation with civil society and the farming industry.

Under the first aim, contextualise information and provide alternative trustworthy sources of information, the group allocates responses among themselves:

Farming experts and fact checkers agree to work together to:

  • Provide plain language descriptions and glossaries
  • Provide information about the credibility of information, such as independence of producer, number of people surveyed, questions asked
  • Identify content where an information gap exists

Tech platforms agree to label information sources with extra information such as whether an information gap exists.

Under the second aim, restrict spread of harmful information, tech platforms agree to involve farming experts in a short-term campaign to reduce the circulation of relevant harmful false content.

Step 4

Recognising that the incident is of relatively low severity, a consensus is reached that there should be limits on the time spent on these activities, and that there should be a review date in four weeks to check on the incident’s severity and decide whether to continue collaborating.

Step 5

Collaborators carry out their actions and meet after four weeks to discuss the effectiveness of the measures taken and whether the severity level has changed.

Case study 2: Covid-19 pandemic

While Covid-19 was first identified in December 2019, it was not until early February 2020 that the world realised a significant incident was underway. It took a further few weeks for organisations to grasp the extent of the accompanying information crisis.

If the Framework had been created before the pandemic, the incident could have played out as follows.

Step 1

As governments around the world begin to implement measures to try to contain the virus, it is clear that a serious information incident is occurring. Representatives from internet companies, government, civil society, health experts and news media meet to discuss.

Step 2

After deliberation and dialogue, the group decides this is a Level 5 incident. This is the first time that Level 5 has been triggered. The group recognises that:

  • There are multiple false claims or narratives spreading fast on all major platforms and new claims quickly gain traction
  • Numerous popular hashtags are being shared alongside false claims and narratives, and there are multiple high ranking search trends for keywords related to false claims
  • The incident is global, with the same false claims often appearing in different languages
  • Information has potential to cause significant harm to health and life, and to minority groups: conspiracy theories around the origin of the virus are growing in popularity and translating into anti-Chinese sentiment, with reported attacks on citizens of Asian heritage.
  • Response is likely to dominate activity for some time, and collaboration is at a maximum, including with organisations which do not focus on tackling misinformation.

The key challenges identified include:

  • Difficulty disseminating or communicating complex scientific information, specifically:
      ◦ Low baseline knowledge of key issues among public, politicians and media
  • Information vacuums and uncertainty, specifically:
      ◦ New information must be produced, leaving a temporary gap
  • An unclear and quickly changing situation, specifically:
      ◦ Official advice is changing quickly or official sources backtrack
  • Lack of insight into type and scope of misinformation and/or movement of content between platforms

In addition, the danger of lasting long-term impacts and the pressure of needing to work at speed and at scale are recognised. Academics and civil society point out that there is also a risk to freedom of speech from overzealous new content moderation policies adopted in response to these challenges.

Step 3

Five potential aims are suggested by the Framework:

  1. Communicate and debunk effectively
  2. Contextualise information and provide alternative trustworthy sources of information
  3. Share information and insight to gain a shared assessment of the situation
  4. Support availability of reliable information from authoritative sources
  5. Disseminate accurate information appropriately to public and affected groups

Three aims are chosen as joint priorities:

  1. Share information and insight to gain a shared assessment of the situation
  2. Contextualise information and provide alternative trustworthy sources of information
  3. Disseminate accurate information appropriately to public and affected groups

Meanwhile, fact checkers and mainstream media organisations agree to cooperate among themselves to effectively debunk false claims, while government officials and information producers take on the responsibility of ensuring the availability of reliable information from authoritative sources.

The group agrees on the following responses, recognising that this temporarily high level of cooperation reflects the unusual severity of the incident.

To achieve the first aim, the entire group agrees to share monitoring and verification information between trusted experts.

Tech platforms agree to:

  • Provide access to engagement, trends, and advertiser data to enable independent research on the impact of responses
  • Work with researchers and fact checkers to understand and share information about the immediate effects of interventions

To achieve the second aim of contextualising information and providing alternative trustworthy sources of information, tech platforms volunteer to:

  • Apply warnings, pop-ups and labels
  • Identify users who have repeatedly shared misinformation
  • Work with researchers to identify certain groups that might benefit from seeing additional context (e.g. crossover misinformation audiences)

And civil society, the mainstream media and government officials agree to identify content where an information gap exists and state whether this is being filled.

For the final aim, on disseminating accurate information appropriately to public and affected groups, tech platforms agree to:

  • Expose consumers of high volumes of harmful content to counter messaging
  • Provide information hubs for particular trending topics with links to non-partisan authoritative sources

And fact checkers, government officials and civil society agree to:

  • Work with trusted credible voices to reach certain audiences
  • Provide factual briefings and updates to the media

Step 4

Recognising that this is a major information incident with potential for harm, and that it may last some time, the group agrees to send written reports on effectiveness of measures every two weeks, and to meet monthly for the next quarter. It is agreed that responding to this incident is most organisations’ top priority, and therefore that these collaborative aims will be properly resourced and monitored.

Step 5

Collaborators carry out their actions, write their fortnightly reports and meet monthly, extending the initial quarter’s arrangement to six months in total. After six months it is agreed that the incident is becoming more manageable and has de-escalated to Level 4.

 

Template worksheets

  • Collaborative online spreadsheet
  • Printable or offline-use spreadsheet
  • Full list of recommended responses
