Full Fact Report 2023

Informed citizens: Addressing bad information in a healthy democracy

About this report

Full Fact fights bad information. We do this in four main ways. We fact check claims made by politicians and public institutions, in the press and online. We then follow up on these fact checks to stop and reduce the spread of specific claims. We campaign for systemic changes to help make bad information rarer and less harmful, and we advocate for higher standards in public debate.

This report explores how the UK's online information environment can be improved to tackle bad information so that citizens are better informed. It follows on from our 2022 report Tackling online misinformation in an open society—what law and regulation should do,[1] and Part 4 can be read as an update of that report in the context of what has happened with the Online Safety Bill. Our 2021 report, Fighting a pandemic needs good information,[2] considered how good information, communicated well, can benefit both individuals and society. Our 2020 report, Fighting the causes and consequences of bad information,[3] looked at the evidence built up over ten years of Full Fact’s work to address misinformation and the harms it poses to democratic society. Parts 1, 2 and 3 of this report build on those earlier reports. This 2023 report is the fourth we have been able to produce thanks to the support of the Nuffield Foundation.

Nuffield Foundation

The Nuffield Foundation is an independent charitable trust with a mission to advance social well-being. It funds research that informs social policy, primarily in Education, Welfare, and Justice. It also funds student programmes that provide opportunities for young people to develop skills in quantitative and scientific methods. The Nuffield Foundation is the founder and co-funder of the Nuffield Council on Bioethics, the Ada Lovelace Institute and the Nuffield Family Justice Observatory. The Foundation has funded this project, but the views expressed are those of the authors and not necessarily those of the Foundation.

This report was written by staff at Full Fact and the contents are the responsibility of the Chief Executive. They may or may not reflect the views of members of Full Fact’s cross-party Board of Trustees.

We would like to extend our warmest thanks to Anand Menon, Maeve Walsh, Poppy Wood, Ellen Judson, Alex Tait and Mark Franks for their comments on an earlier version of this report.

In addition, we thank our other supporters, our trustees and other volunteers of Full Fact. Full details of our funding are available on our website.

We would welcome any thoughts or comments to our Head of Policy and Advocacy, Glen Tarman, at glen.tarman@fullfact.org.

Summary

The next UK general election is now less than two years away. Candidates from all parties will ask millions of people for their votes, and their trust. But public faith in politics and politicians is low. Access to good, reliable information is under threat at a time when the public needs it most.

Full Fact fights bad information and campaigns for higher standards from politicians, the media, and in our shared information landscape. Last year we published 624 fact checks and requested more than 180 corrections.

2022 was a damaging year for standards in public debate:

  • As many as 50 MPs, including two Prime Ministers, Cabinet and Shadow Cabinet Ministers, failed to correct false, unevidenced or misleading claims, despite repeated calls from Full Fact to do so
  • The statistics regulator had to write to the UK Government at least 10 times to challenge it on its use of statistics or other data
  • A false claim about employment statistics was repeated at least 9 times in Parliament by a sitting Prime Minister and has yet to be officially corrected, despite challenges from the Office for Statistics Regulation, the UK Statistics Authority and the House of Commons Liaison Committee
  • The government’s Online Safety Bill rowed back on promises to address harmful misinformation and disinformation, and now fails to protect freedom of expression.

Last year Prime Minister Rishi Sunak said he wanted to ‘restore trust into politics’. Leader of the Opposition Keir Starmer said ‘trust has to be earned’. The latest public polling on faith in politics and politicians suggests neither has yet succeeded.[4]

But there are effective steps our elected representatives can take, now. Full Fact asks the same of every individual or organisation active in public debate: get your facts right, back up what you say with evidence, and correct your mistakes.

In our 2023 report, we show how these principles should be applied in every area of public life to build trust, enforce high standards and improve our information landscape ahead of the next general election.

Honest politics

Ministers and government departments must provide evidence for what they say, and ensure that any statistics and data they rely on to back up their claims are provided publicly and responsibly.

This requires a strengthening of the Ministerial Code, a culture change by both Ministers and government departments, and strong scrutiny by the statistics regulator and parliamentary committees.

Mistakes will always happen. But when they do they must be corrected quickly and transparently. This is important whether the claim is made on social media, in a live broadcast, or in the House of Commons—40,000 people have joined Full Fact’s campaign to extend Parliament’s official corrections system to all MPs.

Bad information spreads rapidly unless it is clearly and prominently corrected.

Safeguarding the next election

Every voter deserves good information. That is a challenge as the information environment becomes increasingly fragmented and fast-moving, and as those who seek to influence our vote communicate false or misleading information.

Our democratic process is vulnerable. Ahead of the next UK general election we need to make sure it is protected.

This means better and more formalised scrutiny of the political parties' election manifestos, and the proper regulation of electoral advertising. It will also require improvements to the rules around the transparency of campaign materials to prevent deceptive tricks such as disguising the provenance of electoral material, or passing it off as something separate and independent, like a local newspaper.

We must also recognise that modern elections now take place against the backdrop of a highly connected online environment in which election misinformation and disinformation can spread rapidly and at scale. We urgently need more robust arrangements for dealing with situations that could quickly threaten the integrity of an election in the UK, including by establishing a new Critical Election Incident Public Protocol and ensuring that internet platforms have adequate policies in place.

Tackling bad information online

The Online Safety Bill that is currently progressing through Parliament will not properly address harmful misinformation and disinformation, or protect our freedom of expression. The House of Lords must take urgent steps to address what is currently a missed opportunity.

Regardless of how the Bill ends up, it must not be seen as the end of the story for online regulation, but the beginning of a new and evolving system. The rapid emergence of new, accessible generative AI shows how quickly new challenges can arise that threaten our information landscape.

Our recommendations

  1. Government must evidence its claims: ministers and government departments must provide evidence for what they say.
  2. Government must use official information responsibly: ministers and government departments have a responsibility to be open and honest in their use of information, and must be held to account when they fail to do so.
  3. Fix the Parliamentary corrections system: MPs must agree new Parliamentary rules that make it easy to correct mistakes—and sanction those who don’t.
  4. Correcting claims beyond Parliament: politicians making false and misleading claims in public must make corrections and the media that air these claims should do more to address them.
  5. End bullshit manifestos: introduce better and more formalised scrutiny of election manifestos with political parties meeting higher standards in the presentation of their policy commitments.
  6. Reform electoral advertising: political parties should accept the need for accountability and move to independent oversight of their advertising practices.
  7. End deceptive campaign practices by political parties: parties must stop using misleading formats to gain votes, and new rules should be put in place.
  8. Protect electoral integrity, particularly in the online space: government, Parliament and other authorities must act in recognition that the UK does not have adequate protections for our elections.
  9. Ensure the Online Safety Bill tackles bad information: reverse the Bill’s failure to properly address harmful online misinformation and disinformation.
  10. Tackle harmful health misinformation: government must prioritise addressing harmful health misinformation in online safety regulation and with a multifaceted set of responses and actors.
  11. Prioritise better online media literacy: help protect people from harmful bad information online by ensuring they have the skills and understanding to spot and deal with it.
  12. Make the future online regulatory framework work to address harmful misinformation: a proactive approach is needed to make the most out of the forthcoming regulatory framework while ensuring that it is improved to better address bad information in timely and effective ways.

Full Fact’s work is only possible thanks to the support of thousands of individuals across the country. For updates and opportunities to take action against bad information, join us: fullfact.org/signup

Part 1: Getting facts right and backing up claims

False or misleading claims affect us all. As fact checkers we see first hand how bad information promotes hate, damages people’s health, and hurts democracy.

Our principles are simple. Anyone making serious claims in public debate should be prepared to:

  • Get their facts right
  • Back up what they say with evidence
  • Correct their mistakes.

This Part of the report provides some key observations on the first two of these principles based on our recent work. In particular it focuses on what we should expect from our Government and its Ministers. Above all, we should expect those in power to hold themselves to the highest standards.

Despite this, we too often see the Government, and particularly the Ministers within it, misuse official information to suit their argument or make claims which cannot be properly scrutinised or verified. In some cases this has continued even when highlighted to them by Full Fact or others.

Chapter 1: Government must evidence its claims

Ministers and government departments must provide evidence for what they say

Recommendation Ministers and government departments must provide evidence for what they say, and ensure that any statistics and data they rely on to back up their claims are provided publicly in accordance with the Code of Practice for Statistics or relevant guidance. This must be more clearly embedded in the Ministerial Code, and Parliament must demand better from Ministers who fail to do so.


Ministers and government departments must properly evidence their claims

Politicians want to make headlines, or win the argument in Parliament. Government departments and other public bodies want to show that they’re performing, or explain why they’re not. A common way to bolster a point is by quoting numbers that support it. Unfortunately, Government departments, and Government ministers in particular, are sometimes too quick to throw around numbers to support their claims and too slow to publish the important supporting or contextual data behind them.

When people in positions of power make a claim they must back it up with evidence so that what they say can be properly scrutinised and challenged. This means making that evidence available when the statement is made or as soon as possible afterwards. Without it, fact checkers, journalists, Parliamentarians and ultimately the public are kept in the dark, unable to scrutinise the claim or ask important questions about it.

Our fact checking over the past year has revealed a number of examples of this trend.

The then Home Secretary Priti Patel and the Immigration Minister Robert Jenrick made unverifiable claims about small boat arrivals without making the data they appeared to be using available, even when asked (see further below).

Another example was when the Foreign, Commonwealth & Development Office (FCDO) published a chart which purported to show the UK’s sanctions on Russian bank assets, compared with those imposed by the US and EU.[5] The FCDO did not publish details about how these figures had been calculated or what they represented until several months later, despite us requesting this information at the time.

More recently, in January 2023 we wrote to the Prime Minister to ask for the source of a claim he made in Parliament about the flow of patients through emergency care. Ahead of publishing our fact check on this claim we contacted Number 10 four times, and the Department of Health and Social Care three times, but we were not provided with any data supporting what Mr Sunak said.[6]

This is not the first time we have sought to call this sort of behaviour out. In our Full Fact Report 2020 we called for a culture of transparency and accuracy in government, in which all major policy announcements should include the evidence to back them up.[7] This has not yet been achieved.

We are also not the only organisation to identify this problem. In 2022 alone the Office for Statistics Regulation (OSR) had to write to Government departments at least ten times about the lack of transparency in their use of statistics. This included the OSR having to write to:

  • The Home Office about their public use of statistics on small boat crossings which are not already included as part of an existing publication or ad-hoc release (see below).
  • The Department for Work and Pensions about their use of unpublished management information to support claims about the number of people who were helped into work by the ‘Way to Work’ campaign.[8]
  • The Ministry of Justice about figures the Deputy Prime Minister used in a tweet about criminal barristers’ fees, which were only released weeks later (and even then buried in an impact assessment).[9]
  • The Department for Education about (amongst other issues) the use of data in a document supporting the Education White Paper, in which there were failures to identify the data used to produce various analyses.[10]
  • The Department for Levelling Up, Housing and Communities about statements made about the success of the Homes for Ukraine Sponsorship Scheme without publishing further information to support or explain the statements.[11]
  • The Cabinet Office and the UK Health Security Agency about using an unpublished estimate of the cost of the Test and Trace programme in a press conference.[12]

The Code of Practice for Statistics is clear that Government statements that refer to official statistics should contain a prominent link to the source statistics, with the statements themselves meeting the basic standards for statistics, which include accuracy, clarity and impartiality.

These principles should be applied not just to the use of official statistics, but to all data and information that the Government uses to support its public statements. For most people the distinction between official statistics and other data is unclear. Public trust in official statistics is generally high,[13] and there is a risk that people make similar assumptions about the reliability and trustworthiness of other information given to them by producers of official statistics. Data which is quoted publicly should therefore be made available and communicated transparently.

This is a point supported by the OSR themselves, who have issued Regulatory guidance on the transparent release and use of statistics and data[14] and on the voluntary application of the Code of Practice to data, statistics and analysis which are not official statistics.[15]

The OSR guidance is clear that organisations like government departments should seek to comply with the principles set out in the Code of Practice when making public statements that refer to data, regardless of the status of the data. This includes ensuring that data to support any public statement is published in advance or at the same time as the statement is made, with a clear explanation of strengths and limitations.

We appreciate that there may very occasionally be situations where this does not happen or is not possible. But the OSR guidance is clear here too: the information should be published as soon as possible after the statement has been made – ideally on the same day.

Addressing this means fostering a stronger culture of transparency within government departments and a willingness within Whitehall to push back against Ministers who want to selectively use information without making it available. That requires senior officials to set clear expectations and support staff in holding that line.

Small boat claims

During the summer of 2022 Full Fact scrutinised[16] claims made by the former Home Secretary Priti Patel and, more recently, the Immigration Minister Robert Jenrick about the country of origin, and ages, of people arriving in the UK on small boats.

In September the then Home Secretary Priti Patel told Parliament that over the summer the majority of arrivals in small boats from France—about 60%—were Albanian nationals, but the Home Office failed to publish the data to back this up, even when asked to by Full Fact. When Full Fact eventually got hold of the relevant data, which was provided following a Freedom of Information request, it showed that Ms Patel’s claim was incorrect.[17]

Similarly, the Immigration Minister Robert Jenrick made a claim, again in Parliament, about the true ages of asylum seekers arriving at the Western Jet Foil asylum processing centre,[18] but the Home Office did not publish the figures cited by Mr Jenrick. Again Full Fact asked the Home Office to publish the data Mr Jenrick's claim was based on, which would allow us to assess whether what he said was accurate, but this had still not happened at the time of publication.

This led to the OSR’s Director General for Regulation writing[19] to the Home Office’s Permanent Secretary asking the Department to review its practices on immigration data and reminding it of the expectations around the use of data.

As the OSR rightly noted in their letter, “transparency supports public confidence and trust in statistics and the organisations that produce them and minimises the risk of misinterpretation”.

Even more worrying is that this is not the first time that the OSR have had to write to the Home Office about this behaviour relating to small boat crossings. In fact it was the third letter making very similar points about the same matter in just over 12 months.[20],[21],[22]

Concerns about immigration regularly dominate headlines. It is a sensitive and emotive topic which requires accuracy and transparency. Instead, we’ve seen repeated, unevidenced claims that fail to show a commitment to transparency from our elected officials.

Action for government

Government departments and Ministers must provide evidence for what they say and be transparent about providing the data and information they rely on when they do. This must include adhering to the principles in the Code of Practice for Statistics and OSR guidance regardless of the status of the data they are using.

Permanent Secretaries and the Heads of Professions for Statistics should take the lead in fostering this culture within their departments. Paragraph 8.15 of the Ministerial Code should also be strengthened to make it clear that Ministers should adhere to the principles of the Code of Practice for Statistics for all data they use to back up statements they make (it presently only mentions official statistics).

Action for MPs

Parliamentarians should take Ministers and departments to task when they don’t back up their claims with transparent and accessible information.

The Public Administration and Constitutional Affairs Select Committee and relevant departmental select committees should take a more active role in scrutinising and holding ministers and government departments to account for the way they evidence their claims. To support this, each department’s annual report should set out any concerns raised publicly by the OSR and the department’s response to them.

Chapter 2: Government must use official information responsibly

Ministers and government departments have a responsibility to be open and honest in their use of information, and must be held to account when they fail to do so.

Recommendation Ministers and their Government departments must use statistics and data more transparently and responsibly, and quickly rectify misleading claims when they occur. Those who hold them to account, such as the statistics regulator and parliamentary committees, must do so quickly, robustly and publicly.


Use of official information must not be misleading

The issues do not stop at failing to evidence claims, or using unpublished information that can’t be scrutinised (see Chapter 1). Another type of misleading behaviour is misrepresenting official information.

Statistics on their own have limitations. The way they are presented is a crucial part of how they are interpreted and understood by the public. If data is presented without context or caveats, it can give an incomplete or misleading picture.

This can happen in different ways: statistics can be presented to give a misleading picture of what they actually show; they can be described incorrectly; or they can be given too much weight. It can happen accidentally or, in some cases, knowingly.

Producers of official information have a responsibility to ensure that the information they publish is presented clearly, so as to reduce the risk that others misinterpret or misrepresent it. As the Office for Statistics Regulation (OSR) states in their Regulatory Guidance, selective use of data, or use of data without appropriate context, can lead to misuse which damages public trust.[23]

But even more crucially, these producers of official statistics, such as Government departments and the Ministers who lead them, must ensure that they do not fall into these same traps themselves. And if they do, they must act to rectify the issue.

Unfortunately, Full Fact too often sees examples where this does not happen, with Ministers or their departments representing their own official information in misleading ways. In some cases this is simply done in error, but in others it appears to be in pursuit of political advantage.

A classic example of the problem is the use of numbers in a way designed to put a more positive spin on levels of government funding. For example, in March 2022, Defence Secretary Ben Wallace claimed that he had secured £24 billion of additional defence funding.[24] In fact this figure represented the extra budget the Ministry of Defence would receive over a four-year period compared to the 2020/21 budget, not the increase in the annual budget over that period (which was £6.2 billion in cash terms—less in real terms when you take into account inflation). This appeared to mirror the way the Treasury had described it during the spending review in 2020. But, as was pointed out at the time, this was also misleading because it is not how such figures are generally presented (which is to use the increase in annual spending) and risks giving the impression that the annual defence budget had been increased by £24 billion.
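To see how this kind of ‘rolling up’ inflates the headline number, take some purely illustrative figures: if a department’s annual budget exceeds a fixed baseline by £4.6 billion, £6.0 billion, £7.2 billion and £6.2 billion over four successive years, the cumulative extra spending is £4.6bn + £6.0bn + £7.2bn + £6.2bn = £24 billion, even though the annual budget at the end of the period is only £6.2 billion above the baseline.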

This is not the first time we have called out this practice of ‘rolling up’ years of annual spending increases to give a higher-sounding cumulative figure. For example, we flagged similar concerns about claims about increases to NHS funding in 2018[25] and to schools funding in 2020.[26]

Another common issue is presenting spending figures without taking into account inflation (also a problem with the examples above), giving the impression that budget increases are larger than they are in real terms. A recent example of this was Home Secretary Priti Patel[27] and the official Home Office Twitter account[28] tweeting a graph showing what appeared to be a significant increase in police funding from 2015/16 to 2022/23, alongside an announcement that funding would increase by £1.1 billion to £16.9 billion in 2022/23.[29] In fact, once adjusted for inflation, the actual increase in police spending was much lower. This sort of approach is clearly misleading if it is not explained properly, and it prompted a rebuke from the OSR, whose letter also pointed out that this was not the first time misleading language about police funding by the Home Office had been a cause for concern.[30]
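A simple worked example shows why the inflation adjustment matters, using an illustrative inflation rate rather than the actual figure for this period: a cash increase from £15.8 billion to £16.9 billion is a rise of about 7%. But if prices rise by, say, 9% over the same period, the new budget is worth roughly £16.9bn / 1.09 ≈ £15.5bn in the previous year’s prices, below the £15.8 billion starting point: a real-terms cut presented as a cash increase.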

The Code of Practice for Statistics requires those producing statistics to ensure, among other things, that the methods used are clear; that any limitations are identified and explained; and that they are presented in such a way that they can be understood by all types of users.

Producers of statistics must also make clear what judgements have been made about the data and methods. This includes any limitations or changes to the methodology, as these may affect the results or make comparisons with previous years more complicated. As the Code of Practice states, “these explanations are as important as the numbers themselves”.[31]

These principles of transparency should be applied, not just to official statistics covered by the Code, but more generally to the government's communication of statistics, data and wider research, whatever its status.[32]

The misleading use of statistics by the government is even more problematic when the error is pointed out, but the same behaviour is then repeated.

In late 2021 and early 2022 the then Prime Minister Boris Johnson repeatedly made misleading statements about the number of people in work compared with before the pandemic. He appeared to be selectively using figures on the number of payrolled employees to make the wider claim that the number of people in work was higher than before the pandemic. In fact employment includes people other than payrolled employees, such as the self-employed, meaning that the number of people in paid work at the time was actually below the level seen just before the pandemic.
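With purely illustrative figures, the distinction works like this: if the number of payrolled employees rises by 400,000 while self-employment falls by 700,000, total employment changes by 400,000 − 700,000 = −300,000. The payroll figure alone, quoted in isolation, suggests growth even though overall employment has fallen.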

Getting these figures confused or using them wrongly is perhaps understandable as a one-off. Doing the same thing again and again after the error has been pointed out is entirely different.

In the case of Mr Johnson’s use of employment figures, the then Prime Minister continued to make this claim a number of times, in and out of Parliament, after Full Fact wrote to him about the issues with it.[33] Even more concerning was that he did so after being warned by official bodies. Following Full Fact’s intervention,[34] the Director General of the OSR wrote to 10 Downing Street calling the Prime Minister’s use of the statistics “disappointing”.[35] The Chair of the UK Statistics Authority then also wrote to Mr Johnson, saying his statements were “likely to give a misleading impression of trends in the labour market”.[36] The matter was then taken up with the Prime Minister by the House of Commons Liaison Committee.[37]

Mr Johnson failed to correct the official record on any of these occasions, and the claims were also subsequently repeated by other MPs.

Ministers and Government departments must take care to avoid misleading use of their own figures, react positively to correct themselves, and ensure that errors are not repeated. Failing that, fast and robust action from the regulator is required and must result in a swift resolution by the department in question.

A public admonishment—especially one that receives press coverage—should in theory result in contrition and a shift in attitudes. Unfortunately, as we can see from the examples here and in Chapter 1 above, interventions from the OSR sometimes seem to have limited efficacy when it comes to changing behaviours. Their interventions, if made swiftly, can be valuable, but they are undermined if there is insufficient will within departments to make a cultural change or take a more robust position with Ministers, or insufficient commitment from Ministers to the standards they have signed up to.

Action for government

Government Ministers and their departments must take care to avoid the misleading use of data and statistics by carefully following the Code of Practice and wider OSR guidance so as to ensure transparency. Particular attention should be given to the misleading presentation of spending figures, including failing to properly account for inflation or explain the basis of the figures.

Government Ministers and their departments should react positively to correct themselves when challenged, and ensure that errors are not repeated.

Government departments must ensure that staff, including those in communication roles, are properly trained so that they understand these expectations and have the skills to meet them.

Action for Parliament

Parliament, and particularly the relevant parliamentary committees, must robustly challenge ministers over the misleading use of figures when it arises, and call on them to correct the record. Parliament must establish more effective systems to hold ministers to account when they persistently fail to comply with their duty to correct inaccuracies.

Action for regulators

The OSR must continue to act swiftly and publicly to call out non-compliance with the Code of Practice or guidance on the transparent use of statistics and data.

Part 2: Correcting mistakes

Correcting mistakes is a fundamental part of ensuring honesty in public life and the third element of the core principles that Full Fact believes anyone making serious claims in public debate should adhere to.

It is also the reason why we at Full Fact don’t just publish fact checks, we follow up on them.[38] By asking people to correct the record when they get things wrong, we can stop and reduce the spread of bad information. This is even more important when those mistakes are made by politicians, for whom honesty should be a core principle of their conduct in public life.

This Part of the report focuses on two distinct areas when it comes to correcting mistakes: corrections made inside Parliament and those made outside.

In Chapter 3 we look at the correction of claims made within Parliament, highlighting the importance of backbench MPs and Ministers correcting their mistakes, and focussing in particular on the need for the House of Commons to agree new rules that make it easier for backbench MPs (who do not have the correction process available to Ministers) to correct the official record.

In Chapter 4 we focus on the situation outside Parliament, discussing how false and misleading claims made by politicians in public must be addressed, both directly by the person who made them and by any media organisation that might have carried the claim.

Chapter 3: Fix the Parliamentary corrections system

MPs must agree new Parliamentary rules that make it easy to correct mistakes—and sanction those who don’t.

Recommendation The Procedure Committee should make recommendations as part of its inquiry on Correcting the Record that will allow all MPs to correct the official record when they make mistakes. This must be accepted and adopted by the Government. MPs should agree to such a change in the system. Government Ministers are able to correct their mistakes on the record, but not all of them do so. The Prime Minister and his Government must ensure this happens. Solutions on how to tackle persistent failings by Ministers and MPs to correct their mistakes must be taken forward.


Ministers should always correct the record—right now they do not

In 2007 a new parliamentary process was introduced to allow Ministers to correct the official record when they make any inadvertent errors in speaking.[39] Since then we have seen one or two ministerial corrections published every sitting day.[40]

However, not all Ministers are using the corrections process as they are meant to. At Full Fact we find that many Ministers and their departments are unwilling to engage with correcting the record.

This goes right to the top of the Government. In 2022, Full Fact fact checked numerous statements made by former Prime Minister Boris Johnson MP in Parliament—on Sir Keir Starmer’s Brexit voting record,[41] on energy bills,[42] on vaccine rates,[43] and on the inaccurate claim, repeated multiple times in Parliament, that employment was going up when it was in fact going down.[44]

This employment claim was challenged not only by Full Fact, but also by the UK Statistics Authority, the Office for Statistics Regulation, and the Liaison Committee. Though Mr Johnson later acknowledged that what he had said was not true, he has never corrected the official record on this matter.[45]

This theme has continued in 2023, with current Prime Minister Rishi Sunak MP making unevidenced claims in the House of Commons on how the Labour Party is funded,[46] and not taking any steps to correct or back up this claim.

The Government states that it enforces standards through the Ministerial Code, which requires accurate and truthful information to be given to Parliament, with Ministers expected to correct any inadvertent errors at the earliest opportunity.

But the system does not work when it is not taken seriously by Ministers. The persistent failure of the former and current Prime Minister and other Ministers to correct the record when they are required to do so creates not just a problem of their own behaviour, but a crisis of parliamentary accountability.

MPs need the ability to correct the record—right now they do not

Ministers not correcting their mistakes is only part of the problem—the majority of MPs are unable to correct their mistakes on the official record.

MPs instead have to rely on making Points of Order, which is an inefficient use of House time, encourages political point-scoring, and does not produce a correction that is cross-referenced to the original statement in Hansard.

This leaves senior high-profile backbenchers, and the Shadow Frontbench, who have a formal role in holding the Government to account, unable to correct the record. Given the visibility and reach of such prominent MPs, false claims they make have the potential to spread far beyond the House of Commons chamber.

During Prime Minister’s Questions on 20 April 2022, the Leader of the Opposition, Sir Keir Starmer MP, mistakenly claimed that the Prime Minister had criticised the BBC for their comments on and coverage of Ukraine. Although a Point of Order was made the next day to withdraw the comment, the original record still shows the false claim uncorrected.

We cannot have honest political debate while the official record remains littered with false or misleading claims by elected representatives.

MPs have an obligation to uphold standards of Honesty as set out in the Members’ Code of Conduct.[47] A new corrections system in the House of Commons would better enable MPs to fulfil this responsibility and increase constituents’ faith in their MPs.

Uncorrected mistakes in Parliament affect public debate—this must be addressed

The visibility and searchability of Hansard has increased since the introduction of the Ministerial corrections process. It is easier now than ever for the public to view debates and statements in Parliament and share these online. Currently it is not clear to the public when and how the official record is corrected.

Because MPs’ corrections via Points of Order are not cross-referenced, potentially dangerous misinformation can remain on the official record. This can be seen in Sir Desmond Swayne’s intervention on 14 December 2021, in which he claimed that more people were dying in the carnage on the roads than of Covid-19.[48] Sir Desmond used a Point of Order on 8 February 2022 to acknowledge that this was incorrect.[49] But the mistake remains uncorrected in Hansard.

The potential for misleading statements made in Parliament to spread and fuel misinformation and disinformation is high. We see claims similar to ones occasionally made by MPs being used in health disinformation by bad actors, with the authoritative parliamentary website giving false claims legitimacy.

Ensuring statements made by politicians are not taken out of context or used to mislead the public is vital. This will only become more important as people are exposed to more sources of information online that may appear credible without being trustworthy. This is particularly important during election periods.

MPs should act on corrections to respond to public concern on honesty and accountability in Parliament

Public concern around the standards and accountability of MPs has increased in recent years. In 2021 Full Fact found that 71% of Britons believed there was more lying and misuse of facts in politics and media than 30 years ago.[50]

The February 2023 Ipsos Issues Index shows that lack of faith in politics, politicians and Government remains one of the most important issues facing Britain today:[51] persistently a top 5 concern, and in the past year at the highest level since the issue first appeared in the Index in 2016.[52]

In 2022, a Compassion in Politics petition calling for a criminal offence covering politicians who lie received over 200,000 signatures.[53]

In 2021, the Committee on Standards in Public Life set out that perceptions such as these can be a sign of a long-term deterioration of confidence in British politics, and indicate a troubling disconnect between the standards the public expects of its elected leaders and the standards they perceive.[54]

Full Fact knows that MPs are more honest than these figures would have us believe, but the actions of a few are damaging the reputations of all MPs and political parties.

The public are demanding more accountability and transparency from their MPs. Full Fact believes that a new system which allows all MPs to easily correct the record, and a system to hold those who continuously fail to correct their mistakes to account, would help to counter the perception that Parliament is not truthful.

A model for parliamentary corrections already exists and should be adopted

Full Fact is calling for the corrections process available to Ministers to be extended to all MPs; for corrections to appear alongside the original text in Hansard; and for better signposting and greater accessibility on the parliamentary website.

This new system is not about reworking or recontextualising an argument, but about correcting a factual error. Full Fact believes this would be a balanced, impartial, and non-partisan way of ensuring honest and accountable debate.

The Scottish parliamentary system shows how this could work: Holyrood introduced a similar system for MSPs in 2010, and it has become an everyday part of MSPs fulfilling their responsibilities as elected representatives.

Parliament also needs to take seriously what happens when Ministers continuously fail to correct the record when they make a mistake.

Full Fact believes there are existing processes of the House that could be used to ensure corrections take place. This includes a greater role for the Speaker in referring such patterns of behaviour to the Commissioner for Standards for investigation.

A truly honest and accountable Parliament should have such a system in place.

Action for Parliament

The Procedure Committee should recommend extending the corrections process to all MPs as part of its inquiry on Correcting the Record[55],[56] and recommend a process to tackle consistent and egregious failings by Ministers to correct the record.

Action for the Government

Ministers must use the system available to them to correct their mistakes. This needs to be enforced by strong leadership from the Prime Minister and his Government.

The Government should accept and adopt any recommendations made by the Procedure Committee that will allow for MPs to correct the official record, and that will address persistent failings by Ministers to correct their mistakes.

Action for MPs

MPs should show leadership in Parliament by both correcting mistakes made in the House of Commons, and by asking their colleagues to do the same.

MPs should agree to new rules when proposed by the Procedure Committee that enable them to correct mistakes on the official record and any additional processes that will address serious failures to correct false and misleading claims in Parliament.

Chapter 4: Correcting claims beyond Parliament

Politicians making false and misleading claims in public must make corrections and the media that air these claims should do more to address them.

Recommendation Outside Parliament, MPs should make corrections on social media and in quick follow-up to live broadcasts. Relevant committees should commit to future inquiries on false and misleading claims made by MPs outside of Parliament. Broadcasters should review their policies and practices on dealing with false or misleading claims made by politicians. Political parties should review their policy and practice in relation to claims and ensure they are corrected when needed.


MPs are public representatives and their public statements must be treated accordingly

When an MP makes a contribution in public, whether on social media, on television, in a newspaper, or at a public event outside of Parliament, they are speaking in a public forum, engaging in public debate, and making a statement in their capacity as a public representative.

Full Fact often fact checks false or misleading claims made by politicians outside of Parliament. We see the extent to which an inaccurate social media post by a high profile MP with hundreds of thousands of followers, or a misleading claim made during a widely viewed broadcast interview, can reach and inform public debate. The reach of a false or misleading claim made outside of Parliament can be far greater than one made inside the House of Commons.

It is therefore very important that MPs uphold the principle of being honest by taking individual and collective responsibility for improving the correction of false and misleading claims outside of Parliament. Steps should also be taken by others to ensure claims made outside of Parliament are corrected consistently.

Parliamentarians should assume responsibility for improving the correction of false and misleading claims outside of Parliament

When politicians ignore requests to correct false and misleading claims made in public forums, Full Fact and others have limited ways of escalating those requests to ensure false claims are addressed. This is a matter for individual and collective action. The current processes to tackle mistakes outside of Parliament do not work. Parliament should look at this again and consider how a corrections process could work for inaccuracies made outside of Parliament. Exploring this should not stop a system for correcting mistakes by MPs in Parliament being put in place as soon as possible (as outlined in Chapter 3).

Full Fact recognises that the Members’ Code of Conduct does not seek to regulate what MPs do in their purely private and personal lives, nor does it seek to regulate MPs’ views and opinions. However, when MPs make public statements they do so in their capacity as public representatives. They should be subject to the Seven Principles of Public Life, including the principle of being Honest.

A system within Parliament that addresses inaccuracies made outside of Parliament would have complexities, but upholding truth and accuracy in political discourse is vital.

MPs must take responsibility for correcting their mistakes on social media

Social media sites provide a platform for MPs to speak directly to their audience, in their own words, with some publishing or sharing content several times each day. Many use their social media pages to share views on government policies and decisions, to criticise individuals they disagree with and to promote their political party’s agenda.

A post from an MP can reach tens or hundreds of thousands of people online with the click of a button. This has many benefits for public life, but it also means that a false or misleading claim made by a politician on social media can reach a very large audience in a very short space of time, with the risk of their constituents and a wider public being misled.

MPs must take responsibility for correcting their mistakes on social media in a timely and transparent manner to prevent the spread of bad information online, but, in too many cases, this is not currently happening.

The Commissioner for Standards told the Standards Committee that a high proportion of complaints she receives from members of the public relate to MPs’ tweets and other uses of social media or the internet, on the basis that they allegedly contain abusive or disrespectful language or errors of fact, exaggerations or downright lies.[57]

Example 1

In January 2023, we contacted Andrew Bridgen MP to ask him to correct several social media posts in which he falsely described the Covid-19 mRNA vaccines as “gene therapy”.[58] He made at least eight references to gene therapy or therapies on Twitter in regard to the mRNA vaccines in the first two weeks of 2023. At least one of these posts received over one million views. Mr Bridgen did not respond to our correction request and has not corrected these tweets, meaning this harmful vaccine misinformation is still available online for anyone to see.

Example 2

In November 2022, we contacted Karl Turner MP to make him aware that a tweet he had posted wrongly claimed £37 billion had been spent on the Test and Trace app.[59] Following this, Mr Turner tweeted a correction clarifying that £37 billion was the total budget for Test and Trace, not just what was spent on the app. In his tweet he also stated that his comments “were not a deliberate attempt to mislead and which I am glad to correct”.[60] Mr Turner did the right thing by correcting his mistake, but his original tweet, which was not deleted, had more than 3,000 retweets while the correction had fewer than 20, demonstrating how widely an inaccurate claim can spread and why corrections need to be made as quickly as possible. Mr Turner’s original tweet has subsequently had a community note added to it, providing additional context and linking to our fact check.

Broadcast media must do better on false and misleading claims by politicians

Politicians give TV and radio interviews on a daily basis, often under pressure to answer questions on the spot, without time to check their facts. Presenters and interviewers are often not prepared to interrogate claims by politicians in real time. The result is that many false and misleading claims made by politicians on broadcast media go unchallenged by reporters or presenters at the time.

Clips of broadcast interviews are frequently widely shared online, which means a false or misleading claim made by a politician during an interview has the potential to reach a far larger audience than just those who watched or listened to the programme. This reinforces the need for broadcasters to challenge politicians in the moment, where possible.

Broadcasters have an obligation to ensure they do not become vectors of misinformation, and to have clear and consistent ways of addressing false claims on their platforms.

It is also the responsibility of MPs to do all that they can to stop anyone else from being misled by a false or misleading claim they have made, and to make best efforts to ensure anyone who has already heard the claim is made aware that it has subsequently been corrected.

Example 1

In October 2022, we fact checked an incorrect claim made by Cabinet minister Nadhim Zahawi during an interview on the BBC’s Sunday with Laura Kuenssberg.[61] Mr Zahawi wrongly claimed that the Moderna booster vaccine protects against both Covid-19 and flu. After we contacted Mr Zahawi’s office regarding this claim, he posted a correction on Twitter.[62]

We subsequently contacted the BBC to make them aware of Mr Zahawi’s correction on Twitter and to ask that they also issue a correction. In response to our request, the BBC published a note about this on its corrections and clarifications page.[63] The BBC told us that senior editors on Sunday with Laura Kuenssberg were shown our complaint and that our points were included in their overnight report, which is (according to them) one of the most widely read sources of feedback at the BBC, and helps inform future editorial judgements. However, the programme remains available on BBC iPlayer without a correction,[64] so anyone watching the programme online would not be aware that what Mr Zahawi said about vaccines was wrong.

Example 2

In September 2022 we fact checked an incorrect claim made by then Prime Minister Liz Truss, during an appearance on CNN, about the amount households would pay on their energy bills under the government’s energy support package.[65]

We wrote to Ms Truss to ask her to acknowledge the error and to ensure she described the policy accurately in future. However, on 29 September Ms Truss conducted several interviews with BBC local radio stations and in some of these she made the same mistake, despite describing the policy more accurately in others.[66]

Her incorrect claim was widely repeated in the media and across social media, demonstrating how misinformation can spread if politicians repeat false claims without being challenged at the time.[67]

This happened at a time when there was widespread confusion about how the Energy Price Guarantee would work. Research carried out by Opinium showed that almost two in five households (38%) wrongly believed that the Government’s energy price guarantee meant their bills could not go above £2,500.[68] This highlights how important it was to provide the public with accurate information on the subject at the time.

We wrote to Ms Truss on 29 September asking her to issue a correction, but she did not do this. It wasn’t until an interview with Nick Ferrari on LBC on 4 October that Ms Truss explained the policy accurately and accepted she had got it wrong in previous radio interviews. A correction was not issued by Ms Truss or the BBC,[69] so anyone who heard the interviews where she got this wrong, but not the LBC interview, would be unlikely to know that what she said on the earlier occasions was incorrect.

Action for MPs

If an MP makes a false or misleading claim on social media, they should correct this quickly in a clear and transparent manner.

If an MP makes a false or misleading claim on broadcast media they should take responsibility for ensuring it is appropriately corrected, and make efforts to ensure the correction is publicly available to anyone who might have heard the claim, e.g. by issuing a correction on social media or, if a Minister, publishing a note on the government website, and by ensuring the broadcaster is made aware of their error.

Action for Parliament

The House of Commons Procedure Committee, and other relevant committees according to their remit, including the Standards Committee, should each commit to a future inquiry on false and misleading claims made outside of Parliament and the need for correction.

Action for broadcast media

Broadcasters should review how they can become vectors of misinformation and take action to minimise the possibility of this happening.

Broadcasters should ensure presenters are appropriately briefed on guests ahead of interviews so, as far as possible, they are well equipped to challenge false or misleading claims made by politicians immediately.

Regardless of who made a false or misleading claim, broadcasters should take steps to correct it, if it was made on their platform (including with clear labelling and corrections features for listen or watch again services).

Action for political parties

Political parties should review their policy and practice in relation to corrections beyond Parliament, including where it becomes clear that party lines are false, misleading or missing important context and are being repeated by MPs outside the House of Commons, including on broadcast media.

Part 3: Addressing bad information to protect democracy

Honesty and transparency are never more important than during an election campaign, when the votes that shape how and by whom we are governed will be cast.

This Part of the report looks at some of the key issues that arise when politicians and parties seek to influence our vote, and calls for improvements that will help protect and improve this vital part of the democratic process.

Chapter 5 sets out the problems caused by the lack of transparency around promises in party manifestos and calls for further action to address them, including support for a new body to help cost and scrutinise these important election commitments.

Chapter 6 highlights the lack of regulation of political advertising in the UK and calls upon the leaders of the major political parties to commit to having their advertising independently regulated.

Chapter 7 shines a light on the sorts of deceptive campaign practices that we regularly see in political campaigning, such as campaign leaflets masquerading as local newspapers. We call for measures to stop this, and for stronger requirements about the visibility and legibility of imprints on campaign material, both offline and online.

Finally, Chapter 8 highlights how we need better protection for our electoral processes given the increasing threats that can emerge in a highly connected online environment in which election misinformation and disinformation can now spread rapidly and at scale. Doing this effectively will require better regulatory provisions as well as collaborative responses from regulators, platforms and civil society.

Chapter 5: End bullshit manifestos

Introduce better and more formalised scrutiny of election manifestos with political parties meeting higher standards in the presentation of their policy commitments

Recommendation Political parties should commit to setting their manifestos out in ways that allow meaningful scrutiny and audit of their pledges. This should pave the way for the parties’ spending commitments to be subject to a formal standing mechanism for scrutiny through the Office for Budget Responsibility (OBR) or a dedicated new body. Media, civil society and others should look to improve the ways they scrutinise manifestos so that voters are better informed.


Make election manifestos matter as they should

As the UK Parliament website puts it, a manifesto is ‘a publication issued by a political party before a General Election. It contains the set of policies that the party stands for and would wish to implement if elected to govern’.[70]

The UK public deserves good information before making a choice, never more so than in elections. In a general election, a huge amount of information circulates. Manifestos matter because they anchor the debate and provide a set of promises about what a party will do if it forms the next government. A government is judged in part by whether it delivers on its manifesto pledges.

Manifestos have a quasi-constitutional significance. Under the Salisbury Convention,[71] a party gaining a majority in the House of Commons will not have the programme for government set out in its manifesto blocked by the House of Lords.

The publication of a party’s manifesto is an important moment in a general election campaign. Not only is it part of a broader effort to secure credibility and votes; in setting out the agenda for government should that party win office, it also forms part of a de facto compact with the electorate.

Improve independent scrutiny of manifestos and support to citizens

Few people read manifestos in full, and voters rely on summaries provided by intermediaries and the media. Improvements both in the standards to which parties produce and present manifestos, and in the ecosystem that communicates what they contain to audiences and stakeholders, could therefore help citizens be better informed when they vote.

Better independent scrutiny of manifestos is required. This means parties producing manifestos in ways that allow greater scrutiny, and a wider set of actors providing that scrutiny, including by working together in the lead-up to an election where appropriate. Scrutiny of this sort now typically produces long and complex outputs that will not be used by, or useful to, most voters. Expert bodies urgently need to develop pithy, shareable outputs that are capable of cutting through during an election campaign.

With spending commitments and financing being a critical aspect of what parties set out, there is a case for bodies such as the Office for Budget Responsibility (OBR) and the Institute for Fiscal Studies being given a role to produce detailed costings and assessments of each party’s manifesto so as to aid public debate (see below).

Once a party publishes its manifesto, its political opponents often ‘cost’ it, providing their own assessments of how much its pledges will cost (and the associated tax burden). In this atmosphere of claim and counterclaim, an independent assessment is critical and that in turn depends on clarity from the parties themselves.

Elevating effective independent scrutiny of manifestos, covering both costings and assessments of the policies each party is proposing, will improve public debate. Voters deserve this so that they can find the information they want and need to make an informed choice.

Lessons should be learnt from previous elections to make sure manifestos can be better scrutinised

Full Fact has worked on three UK general elections (2015, 2017, 2019), as well as fact checking claims by politicians and political parties for well over a decade outside of election periods. In our experience, there are significant lessons to be extracted from previous elections.

Party manifestos include a huge number of claims. Full Fact uses AI to assess how many as part of our fact checking process. In 2019, our tools identified 909 claims in the Conservatives’ manifesto and 2,299 in the Labour manifesto.[72] The sheer number of claims makes scrutiny a challenge for those providing such a service to the public. This challenge is greater when manifestos are published late in a campaign.
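To illustrate the scale challenge, the sketch below shows the shape of automated claim spotting. It is illustrative only: it substitutes a crude keyword heuristic where Full Fact’s actual tools use trained AI models, and the sample text is hypothetical.

```python
# Illustrative only: a toy claim-spotter, not Full Fact's actual AI tooling.
# Assumes nltk is installed and its 'punkt' sentence model has been downloaded.
import re
from nltk.tokenize import sent_tokenize

# Crude stand-in for a trained claim-detection classifier: flag sentences
# containing figures or comparative language as candidate checkable claims.
CLAIM_MARKERS = re.compile(
    r"\b(\d[\d,.%]*|million|billion|more than|fewer than|increase|cut|record)\b",
    re.IGNORECASE,
)

def count_candidate_claims(manifesto_text: str) -> int:
    """Split a manifesto into sentences and count those that look checkable."""
    sentences = sent_tokenize(manifesto_text)
    return sum(1 for sentence in sentences if CLAIM_MARKERS.search(sentence))

sample = (
    "We will recruit 20,000 more police officers. "
    "Our plan is fully costed. Britain deserves better."
)
print(count_candidate_claims(sample))  # 1: only the first sentence matches
```

Even this toy version makes the point: a manifesto of a few hundred pages yields hundreds or thousands of candidate claims, each of which then needs human judgment to verify.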

Parties operate in an environment that disincentivises the early and fuller release of commitments that can be properly scrutinised (for example, by minimising the space given to political opponents), and this can lead to overreliance on narrative and story at the expense of reality-based promises. Some commitments will inevitably come up against delivery challenges later. However, others are evidently unfeasible, and can be called out as such.

In the last general election Full Fact scrutinised each of the main parties’ manifestos, getting into the detail behind the biggest claims and campaign pledges. We published in-depth analysis of the Conservative,[73] Labour,[74] Liberal Democrat[75] and SNP[76] manifestos.

What Full Fact has seen in the last and previous elections includes manifestos and policy claims being set out in ways that are not clear or meaningful. For example:

  • Using vague or technical language to overstate what a promise would mean in practice[77].
  • Using numbers in a misleading way by omitting important explanation about how they have been calculated[78].
  • Leaving out important context which makes a claim misleading[79][80].

Meaningful manifestos would not replicate these kinds of past failures.

The parties’ manifestos offer not just different plans for the future, but different views of the present and past. Not all of those views can be right. That’s why we believe the work of Full Fact and other fact checkers is so important. We recognise too that a wide ecosystem of good faith actors is needed both to encourage parties to make commitments that can be properly scrutinised, and then scrutinise them when they are released.

One area requires special attention and arrangements: the costings of the policies in manifestos and their associated tax and spending commitments.

The Institute for Fiscal Studies looked at this specifically in 2017, stating that ‘the shame of the two big parties’ manifestos is that neither sets out an honest set of choices’[81]. In every major area of public spending and policy, independent experts give the same warnings.

A lot of important work in this area already happens. For example, economic research organisations such as the Institute for Fiscal Studies, the National Institute of Economic and Social Research and the Resolution Foundation regularly scrutinise claims made by political parties in their manifestos. However, it is unlikely that this ecosystem, valuable as it is, could take on the role of providing a single consistent service costing proposals for all parties equally and independently, particularly if that role is to involve removing the significant advantage the party in power currently enjoys by virtue of having access to civil servants and the extensive information held within government.

We believe this challenge lends itself to a single recognised independent body with the remit, resources and access to data for the job. Such an arrangement has been in place in the Netherlands since the 1980s, through the Netherlands Bureau for Economic Policy Analysis.[82]

Give the OBR a role in scrutiny of party manifestos

The Budget Responsibility and National Audit Act 2011[83] established the OBR on a statutory basis, following its creation after the 2010 General Election. Its core purpose is to provide independent and authoritative analysis of the UK’s public finances.

There are many considerations involved in the OBR performing a similar role in relation to political parties’ manifestos. Among these is the nature of the OBR itself, namely that:

  • It is a non-departmental public body sponsored by the Treasury, rather than a body appointed by Parliament like the NAO or the Electoral Commission.
  • Its staff are civil servants and much of the work on its forecasts is also done by other civil servants working in government departments.
  • It is directly funded by HM Treasury and in some aspects works in response to its direction.
  • It is led by the members of the Budget Responsibility Committee (BRC), who are appointed by - and can be removed by - the Chancellor (albeit subject to approval by the Treasury Select Committee).

Taking on a role around manifestos, inherently part of the political process, and ensuring the ability to act independently in doing so, would require suitable arrangements that take this into account. Full Fact sees two basic approaches as viable, both of which would likely require legislation to change the remit and functions of the OBR:

Option 1: The OBR independently audits the manifestos of political parties

The output is a public assessment of the costings of the policy proposals contained in the manifestos, rather than private advice to the party to help with the formation of its policies. The OBR would be able to do these audits, and publish them, irrespective of whether it is asked to do so by a political party.

Given the current status of the OBR, this option does have challenges, as it would require a government-funded non-departmental public body staffed by civil servants to interpose itself in the democratic political process. It should be acknowledged that it potentially vests significant power in the hands of unelected officials, and risks embedding particular institutional views in the political policy-making process. Legislation could be passed to address some of these issues, but a cross-party consensus would be a key requirement (along with incentives to maintain that consensus, and even a mechanism for what happens should it break down in future).

Option 2: The OBR offers a service to the political parties which enables them to consult with and get advice from the OBR when drawing up their manifestos

There would be a further consideration of whether the OBR would then also provide a public forecast - but the whole process would be optional, and the OBR would not be involved without being asked to be by the political party. Any compulsion to take part would therefore be purely political.

This second option presents fewer challenges in terms of the appropriateness of the OBR playing this role and is more in line with the potential role envisaged by the Institute for Government.[84] This approach would likely require changes to the way the OBR is staffed/resourced and potentially other changes to ensure issues of impartiality do not arise.[85] As well as assisting with transparency, this option would have the advantage of levelling the playing field by giving opposition parties access to expertise and resources that are normally only available to the governing party (notwithstanding the limited option of ‘access talks’ with the civil service that are usually available to opposition parties).

A third option would be to create or appoint, through legislation, a new body specifically to audit manifestos. This would allow it to be appointed, resourced and governed in such a way as to avoid conflicts.

A fourth option would be that political parties commit to manifesto audits voluntarily, using a mutually agreed body or panel of experts.

Whichever option best suits the UK will depend on a level of consensus within and beyond the political parties. Full Fact is urging that debate begin now so that a viable model can be taken forward. We presently see Option 2 above, an OBR service to the political parties, as the best one to take forward.

Action for political parties

Each political party should produce its manifesto for the next general election and set out its policy agenda for government in ways that enable the highest standards of scrutiny.

Political parties should agree to scrutiny of their manifestos by an official body such as the OBR.

Action for media

Media organisations should review their policies and practice around elections to plan for improvements that will enable their audiences to better understand the policy agendas and manifestos of the parties.

Action for institutions including regulatory bodies

Institutions that scrutinise the policies in manifestos, or that can enable better scrutiny by others, should review their approach and take forward improvements from previous election cycles.

Action for civil society

Civil society organisations should consider how their role in scrutinising political parties’ policy proposals can be improved to give their supporters and the wider public better access to information on what parties are setting out.

Chapter 6: Reform electoral advertising

Political parties should accept the need for accountability and move to independent oversight of their advertising practices

Recommendation The leaders of major political parties should commit to having their advertising independently regulated according to clear principles of decency, honesty and truthfulness. Rather than wait for a cross-party consensus to emerge, they must show bold leadership by making a unilateral commitment before the next general election[86] to the independent regulation of their own advertising, incentivising others to do likewise.


Political parties cannot be allowed to influence our votes using falsehoods and misrepresentations

Misleading electoral advertising damages democracy, yet it persists unregulated and unaccountable. Full Fact’s own work in the run-up to and during the last general election campaign in 2019[87] revealed the use of inappropriate and misleading campaign tactics, mirroring the Electoral Commission’s concerns about the misleading content and techniques it had seen in the same election.[88]

The Coalition for Reform in Political Advertising reviewed some of the advertising of the main political parties in the context of the 2019 General Election, finding a significant proportion of election advertising to be misleading[89].

And it is not just general elections. The latest Election Advertising Review Panel set up by Reform Political Advertising covered political party advertising from the 2022 local elections and observed what it called an ‘alarming amount of grossly misleading election advertising from all main parties’.[90]

During the UK’s 2022 local council elections, the Labour Party ran four Facebook adverts[91] which included a misleading claim that families under the Conservatives were £2,620 worse off—a claim that Full Fact had fact checked previously,[92] including asking how the figure was calculated, and asking Labour to make a correction. The claim in these adverts was an estimate based on unreliable assumptions, and excluded the impact of wages and benefits.

These issues can equally occur outside of the formal election periods. For example, prior to the 2019 General Election campaign we fact checked a Conservative Party Facebook advert that seemingly altered the headline of a BBC News article on an education spending announcement to make the government appear more generous.[93]

Political parties are being allowed to try to influence our votes, and therefore our elections, using falsehoods and misrepresentations. This cannot be allowed to continue.

Fix the lack of adequate regulation of political advertising in the UK

Advertising in the UK is regulated largely through a system of self-regulation overseen by the Advertising Standards Authority (ASA).[94] The ASA’s primary purpose is to make sure adverts across UK media stick to the Advertising Codes set by its sister organisation, the Committee of Advertising Practice (CAP). The central principle of the Codes is that all adverts must be “legal, decent, honest and truthful”.

Unfortunately, this principle does not apply to political advertising whose principal function is to influence voters in elections; such advertising is exempt under rule 7 of the Code[95] and is not regulated by the ASA. Whether something is political advertising depends on whether “the nature and function” of the claims within the advert is principally aimed at influencing voters in local, regional, national or international elections or referendums. If it is considered to be political advertising it will not be regulated by the ASA, irrespective of who has published the advert (i.e. the exemption is not limited to adverts from political parties themselves).

The decision to exclude political advertising from the ASA’s remit was taken by the ASA/CAP itself shortly after the 1997 General Election, citing concerns that the impartiality of the ASA could be damaged by rulings for or against political parties, and the fact that complaints were only likely to be ruled upon after an election had taken place. Political advertising has remained outside of scope since.

Since then there have been various attempts to revisit the issue, by the Neill Committee on Standards in Public Life (in 1998),[96] the Electoral Commission (in 2004),[97] and more recently a Lords Committee in 2020.[98] In each case it was suggested that the solution lies in a form of voluntary self-regulation by the parties.

Nothing has happened, with the major political parties showing little interest in accepting any accountability or independent oversight of their advertising practices.

The consequence of this lack of will is that adverts which would never be allowed in a commercial context run without consequence, within a cultural context where voters might be aware that advertising in general is regulated, but could be unaware that political advertising is exempt. Parties can act with impunity when making claims in their ads in a way that no non-political organisation could.

If you see an advertisement that is trying to sell you a product in a misleading way you can complain. There is no independent body you can complain to when a political party tries to influence your vote in the same way.

These problems are exacerbated by the emergence of digital advertising techniques capable of being targeted at small, specific groups of people – microtargeting – meaning that, unlike in the offline world, no two people experience an election in the same way.

Campaigners now run multiple versions of the same ad, rapidly testing them to identify which works best for which group, and spreading those that generate the most online engagement. Campaigning is also no longer confined to the pre-election period, with people being targeted on an ongoing basis throughout the year. The risks of harm to the integrity of our elections and our ability to make informed democratic choices have never been higher.
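As a purely illustrative sketch of the testing loop described above (not any party’s or platform’s actual system), an engagement-driven campaign can be reduced to a simple explore-and-exploit rule over ad variants; all variant names and figures below are hypothetical.

```python
# Illustrative sketch of engagement-driven ad variant testing; all variant
# names and engagement figures are hypothetical.
import random

def pick_variant(stats: dict[str, tuple[int, int]], epsilon: float = 0.1) -> str:
    """Epsilon-greedy selection: usually serve the best-performing variant,
    occasionally explore another. stats maps variant -> (clicks, impressions)."""
    if random.random() < epsilon:
        return random.choice(list(stats))  # explore a random variant
    # Exploit: pick the variant with the highest observed click-through rate.
    return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))

stats = {"ad_a": (120, 1000), "ad_b": (90, 1000), "ad_c": (30, 500)}
print(pick_variant(stats))  # usually "ad_a", whose 12% click-through leads
```

The point of the sketch is how little machinery this requires: the same loop, run separately for each audience segment, is what makes microtargeted campaigns so difficult to scrutinise from the outside.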

The Government has also failed to address the issue, with the only real nod to these changing dynamics being the introduction of a digital imprints regime intended to help voters understand who has created and paid for specific adverts they see online. Although welcome, this reform barely scratches the surface of the wider problems, and does nothing about the content of misleading adverts, many of which could go undetected due to the difficulty of scrutinising microtargeted advertisements.

The UK is now facing yet another election in which political advertising is inadequately regulated, and our vote unprotected from abuse. It does not have to be this way.

Establish an independent system to oversee honest and truthful political advertising

One obvious and attractive option would be for the ASA’s regulation of political advertising to be reinstated. The ASA regulated political advertising in the UK prior to 1999, and other very similar organisations, such as the New Zealand ASA, still play this crucial role.[99] Unfortunately, the self-regulatory nature of this system makes it all but impossible without agreement from the political parties, and crucially the ASA itself. The ASA, while supportive of the principle that political advertising should be regulated, has expressed a strong reluctance to take on the role, on the grounds that it considers it would be inappropriate and unworkable for a non-statutory regulator funded primarily by advertisers to do so.[100]

An alternative approach would be for the parties to agree to sign up to their own separate but independent system that could oversee the honesty and truthfulness of their adverts. This could for example be through an independent body or committee established for that purpose.

This would be consistent with the House of Lords Democracy and Digital Technologies Committee’s recommendation in its report ‘Digital Technology and the Resurrection of Trust’, which recommended that:

‘The relevant experts in the ASA, the Electoral Commission, Ofcom and the UK Statistics Authority should co-operate through a regulatory committee on political advertising. Political parties should work with these regulators to develop a code of practice for political advertising, along with appropriate sanctions, that restricts fundamentally inaccurate advertising during a parliamentary or mayoral election, or referendum. This regulatory committee should adjudicate breaches of this code.’

This sort of solution is also supported by the main campaigning group in this area, Reform Political Advertising,[101] which is calling for the political parties to agree to a Code of Practice that would be drafted and administered by a cross-parliamentary team, along with representatives from a range of organisations such as the ASA, Electoral Commission, ICO and Ofcom, and that would have an independent chair.

The Johnson administration pushed back on such regulation, saying it is ‘a matter for voters to decide whether they consider materials to be ‘accurate’ or not’ and that the matter ‘is best ‘regulated’ by an independent free press’. It also raised concerns about vexatious and politically motivated complaints. This sort of response is completely inadequate and abdicates responsibility for addressing a system that is clearly not working. The voting public deserves better when there are solutions that can be applied in elections to overcome these challenges.

Research shows there is overwhelming support for rules against misleading claims in political ads, and this support cuts across political persuasions. YouGov data shows 87% of the UK public support such rules for factual claims.[102]

The cross-party consensus that would be required to restore trust in our politics has continually failed to emerge. The major political parties have to date shown very little will to seriously engage with potential reform of the way political ads are dealt with. It is time for this to change. Both the Leader of the Labour Party, Keir Starmer, and the Conservative Party leader, Rishi Sunak, have claimed that they want to restore trust in politics. Breaking with old thinking on political advertising would be one good place to demonstrate that intent with action.

Action for political parties

Each political party should commit to having their advertising independently regulated according to clear principles of decency, honesty and truthfulness.

Chapter 7: End deceptive campaign practices by political parties

Parties must stop using misleading formats to gain votes, and new rules should be put in place

Recommendation Political parties should commit to honest campaigning practices by pledging not to deploy deceptive campaign practices that risk misleading voters and undermining trusted independent media institutions. Each party should commit to new rules on honest campaigning practices. Government should develop and take forward targeted legislation and regulation that drives out campaign material that is formatted in ways that can mislead, including stronger requirements about the visibility and legibility of imprints, both offline and online.


Full Fact continues to see instances of egregious misleading campaign material offline as well as online. For example, voters have been targeted with campaign leaflets masquerading as polling cards, and campaign leaflets masquerading as independent newspapers. These examples risk undermining trust in important institutions in our democracy, the local press, and the electoral system itself.

Full Fact believes that there is a strong case for legislation to prevent campaign material masquerading as something else, and for stronger requirements about the visibility and legibility of imprints both offline and online. The origins of the material should be displayed in big, visible, immediately identifiable ways, not just in small print.

Example 1

In the 2019 general election campaign, a leaflet stated that “The Liberal Democrats are winning across the country” and, as evidence, included a couple of quotes from the media.[103] One of those quotes was attributed to the Guardian: “Lib Dems winning and on the up after by-election victory.” Yet those were the paraphrased words of party leader Jo Swinson, not the Guardian. In their campaign leaflet, the Liberal Democrats edited the headline to remove the reference to Jo Swinson, falsely presenting it instead as a direct quote from the Guardian. Whilst the leaflet was not dressed up as a newspaper, it was misleading and irresponsible.

Example 2

During a 2019 UK general election campaign TV debate between Conservative Prime Minister Boris Johnson and Labour Party Leader Jeremy Corbyn, the Conservative press office renamed its Twitter account to ‘factcheckUK’. Full Fact called out the behaviour on Twitter and in the press as inappropriate and misleading. Twitter afterwards stated that its global rules prohibit misleading behaviour, and promised corrective action if there were further attempts to mislead people by editing verified profile information.[104] The Electoral Commission issued a statement the next day calling on “all campaigners to undertake their vital role responsibly and to support campaigning transparency.”[105]

Whatever the Conservative party’s intention was, this event was certainly noticed. As part of a regular poll series, the public was surveyed on what events or stories they’d noticed: the most noticed were “lies / don’t trust politicians”, with the factcheckUK story coming in fifth place.[106]

In our 2020 Full Fact Report, we wrote, “We don’t know whether any of [these] tactics … were a carefully constructed battle plan or the parties testing out new ideas. Neither can we say they mark the start of a trend - the changing nature of the online world means it isn’t possible to predict what specific tactics might be used.”[107] Three years later, we have politicians regularly speaking about the need for honesty in politics, yet repeatedly failing to behave in basically honest ways, such as by admitting mistakes and correcting the record, while overseeing the deceptive practices outlined here.

It is not just Full Fact that is concerned about these deceptive practices. In the 2019 General Election all three of the main UK parties attempted to win in constituencies by distributing partisan freesheets that looked like existing local newspapers. The newspaper industry itself asked whether such practices should be banned, especially during an election.

The Society of Editors, the News Media Association (NMA) and other industry leaders have questioned whether politicians are being honest if their party can attempt to deliberately mislead voters by disguising partisan messages in the look and feel of an independent and trusted local newspaper.

Some within the parties have argued that there is no desire or attempt to deceive in these actions. Newspaper figures have retorted that, regardless of intention, it is not an acceptable practice. Parties can produce print and digital material with clear party branding: their own publications carrying a clear party name and logo.

The parties should stop risking any voter mistaking a political freesheet for a version of the local paper or thinking their local paper is backing a particular party.

Rules that allow political parties to pass off their vote-seeking newsletters as local, trusted, independent newspapers need to be changed.

Whilst Full Fact, the Society of Editors and others will continue to expose such passing off, it is time for the political parties to consign such dishonest practices to the past.

Action for political parties

End deceptive campaign practices that involve using formats that pretend to be something they are not, and commit to new rules for honest party campaigning practices.

Action for government

Explore the introduction of targeted legislation against masquerading campaign material, and strengthen requirements about the visibility and legibility of imprints both offline and online.

Chapter 8: Protect electoral integrity, particularly in the online space

Government, Parliament and other authorities must act in recognition that the UK does not have adequate protections for our elections.

Recommendation The Government, Parliament and regulators should strengthen protections against harmful misinformation and disinformation in elections by addressing flaws in the Online Safety Bill, establishing a UK Critical Election Incident Public Protocol, ensuring adequate social media company policies, and promoting collaborative prebunking.


The next general election is expected in 2024. If conducted under current arrangements, it and subsequent elections will be vulnerable.

This is despite some welcome recent changes, like the introduction of digital imprints. Other developments, particularly in legislation, accountabilities and arrangements, are flawed or inadequate. Much-needed changes have not been developed.

The Online Safety Bill does not have the provisions needed to protect our elections from harmful misinformation and disinformation. Arguably, legislation intended to reduce harms may even create the conditions for more problems for election integrity.

The following are some of the concerns Full Fact has in this area.

Clarify how the new foreign interference offences will work to address state-sponsored disinformation campaigns in elections

The Government has introduced the National Security Bill to bring in a new foreign interference offence. That offence will then be linked to the list of priority offences in the Online Safety Bill. The general offence of foreign interference has an associated set of conditions which must all be met, and it includes provisions on elections. This appears intended to cover harms such as state-sponsored disinformation campaigns in elections.

There are many questions around these offences in the National Security Bill and no prosecution guidance yet exists. How companies and Ofcom make the necessary judgments about applying the Online Safety Bill’s illegal content duty to these foreign interference offences is uncertain.

Address problems in the Online Safety Bill that may impact elections

The Online Safety Bill contains a provision (under Clause 146 ‘Secretary of State directions in special circumstances’) enabling the Secretary of State to give Ofcom directions when they consider there is a threat to the health or safety of the public, or to national security. This is focused on directing Ofcom to respond to a specific threat through the prioritisation of its media literacy functions, or requiring certain internet companies to publicly report on what they are doing to respond.

This clause has come under the spotlight, not least for its apparent breadth and lack of clarity. Amid calls to remove this clause or narrow it to emergency situations for limited periods, clarity is needed on what powers are warranted in relation to emergencies which take the form of a threat to elections (as well as Ofcom’s independence and role).

Also of potential concern is the lack of clarity around the provisions on protecting content “of democratic importance” in Clause 13 of the Bill. The definition is very broad,[108] raising questions about whether it could have unintended consequences. For example, it could result in platforms unintentionally protecting disinformation intended to undermine an electoral process, because it is unclear whether that content was intended to contribute to democratic political debate.

Ofcom will need to ensure that these potential overlaps and opacities, and the unintended consequences that could arise, are properly addressed when developing the relevant codes of practice. It is notable that local authorities (the organisations responsible for the administration of elections) have raised similar concerns about this lack of clarity through their Local Government Association.[109]

Establish a UK Critical Election Incident Public Protocol for alerting the public to incidents or campaigns that threaten the UK’s ability to have a free and fair election

In last year’s report, we set out the need for a protocol for warning the public about threats identified by the security services during an election campaign.

Canada has a protocol[110] for such situations, but the UK does not. This leaves us without a solution in place to protect and defend our electoral system and processes.

Having a protocol enables a non-partisan determination of whether to inform the public that an incident threatening the integrity of an election has arisen. Unless there are countervailing considerations, such as the risk that drawing attention to a threat would disproportionately reduce trust in the electoral process, the public can then be informed about the incident and any steps they should take to protect themselves. Canada’s successful model has been independently assessed[111] and could be adapted for the UK. This would help to secure public confidence in how elections are protected.

The Elections Bill and/or the Online Safety Bill could have provided an enabling environment for such a protocol to be agreed. In the absence of that the Minister for the Cabinet Office, who has responsibility for both defending democracy and for electoral law, should now initiate a process to bring about a UK Critical Election Incident Public Protocol through non-legislative means.

Ensure the policies of online platforms are positive for UK elections and set by a transparent democratic process

Social media platforms and search engines continue to play a major role in elections, with many companies choosing to enact their own policies and create special election products or services for users.

Internet companies currently have different approaches to protecting election integrity:

  • Facebook’s misinformation policy covers voter or census interference.[112] Politicians and candidates continue to be exempt from Meta’s third-party fact-checking programme, allowing them to make or repeat false claims during elections with impunity, although some misinformation related to the voting process is excluded from this exemption.[113] Ahead of the 2019 UK general election, Meta increased transparency and controls on political advertising, and encouraged voter registration and voting through informational reminder campaigns.[114]
  • YouTube has an elections misinformation policy covering voter suppression, candidate eligibility, incitement to interfere with the democratic process, election integrity and distribution of hacked materials, with detailed examples.[115] For the 2019 European Parliament elections, YouTube offered additional products such as candidate information panels in search, although the omission of this for UK voters on the company’s website and written evidence to the House of Lords Committee on Democracy and Digital Technologies suggests this was not offered during the last UK general election.[116]
  • Twitter has a civic integrity misleading information policy mainly focused on election interference, including voter suppression and misinformation about voting processes and outcomes.[117] Twitter also enforced a global political ads ban in the month preceding the UK election (although it is not clear that the UK election prompted this) and launched a tool to enable users to report potential election-related misinformation.[118]
  • TikTok does not accept paid political advertisements, and its election integrity page gives examples of how its global policies and recommendation system might apply to or affect election-related content.[119]

Many of these voluntary measures are laudable, but they are variable among companies, mutable and sometimes gravely inadequate. The violent attack on the US Capitol on 6 January 2021, which attempted to overturn the result of the 2020 US election, shows how inadequate policies and action on election related misinformation can be. Platforms which affect our democratic systems can also be hostage to dramatic shifts in policy with changes in ownership and staffing.

The UK’s election rules need to be consistent and created through an open, transparent democratic process. The Canadian Declaration on Electoral Integrity Online offers one model of how to do this: internet companies worked with the government to publicly set out ways they would both be accountable to citizens and civil society in supporting election integrity and transparency.[120] Among several goals, the declaration aimed to increase efforts to combat disinformation that threatened democratic processes and institutions, to increase transparency in political advertising, and to publicly inform citizens about companies’ efforts to safeguard democratic debate.

In last year’s Full Fact report, we argued that the Online Safety Bill gave little assurance of stronger election integrity, and recommended that it should follow the EU’s Digital Services Act (Article 26) by requiring large platforms to include ‘actual or foreseeable effects related to electoral processes’ in their risk assessments. Unfortunately, there are no such protections in the UK Bill.

The Bill should be amended to underpin the establishment of a Declaration on Electoral Integrity Online similar to the model used in Canada. This would afford a much higher level of protection against homegrown and foreign interference and harmful misinformation to UK citizens, our electoral processes and institutions, in an open and transparent way.

Ensure digital imprints work effectively

As we highlighted in last year’s report, the Elections Act 2022 introduces a welcome and long-needed requirement for digital imprints. Full Fact had been advocating for this change for some time given the obvious gap compared to laws requiring imprints on printed election material.

Digital campaigning material will now need to display a digital imprint, with the name and address of the promoter of the material, or of any person on behalf of whom the material is being published who is not the promoter. The law applies to paid political material published as an advert, as well as organic material if it is published by or on behalf of certain political entities.
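As a minimal sketch of what a compliance check against this requirement might involve, the following toy function tests whether a piece of digital material carries the basic imprint details. The field names are hypothetical and do not come from any official schema.

```python
# Illustrative only: a toy completeness check for a digital imprint,
# loosely modelled on the requirement described above. The field names
# are hypothetical, not drawn from any official schema.
REQUIRED_IMPRINT_FIELDS = ("promoter_name", "promoter_address")

def imprint_is_complete(material: dict) -> bool:
    """Return True if the material carries the name and address details
    that digital campaign material must now display."""
    imprint = material.get("imprint", {})
    return all(imprint.get(field) for field in REQUIRED_IMPRINT_FIELDS)

ad = {"body": "Vote for us!", "imprint": {"promoter_name": "Example Party"}}
print(imprint_is_complete(ad))  # False: the promoter's address is missing
```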

The Electoral Commission has set out guidance[121] for the operation of the new system and how to comply with it. The new regime is due to come into force in November 2023.

Digital imprints aim to deliver transparency about who is spending money to influence votes. They will also enable election material to be attributed to those promoting it. New systems bring risks of unintended consequences and compliance loopholes, so post-legislative scrutiny of the new regime is needed after the next general election. We urge the government and relevant bodies, including parliamentary committees such as the Public Administration and Constitutional Affairs Committee, to ensure this happens.

In the interim, stakeholders should seek to build an evidence base so that the next government, relevant committees and other actors can make an informed assessment about what further changes may be required for a robust system. The digital imprint rules must then be updated if that proves necessary.

Build partnerships for collaborative prebunking ahead of the election

An open society approach to harmful misinformation and disinformation requires effective legislation and regulation for election integrity, but this will never be enough. Elections rely on the media, organisations like Full Fact and others monitoring the information environment and countering false and misleading claims. Debunking and fact checking done well are proven effective ways to address mis- and disinformation including reducing its spread.

At Full Fact we are also champions of prebunking, an upstream process of proactive problem prevention which identifies what false and misleading claims or tactics may arise, and then warns audiences in advance.

Collaboration between media organisations, fact checking organisations, social media platforms and others is key for prebunking to be successful, especially in relation to elections and harmful misinformation and disinformation. There are efficiencies to be gained in techniques like sharing intelligence on what signals are being seen, pooling resources and expertise for foresight, and working together on engaging communications with the reach needed to prepare the public for possible election-related mis- and disinformation. Prebunking is more effective when media and organisations work together and alongside wider communities, with policymakers and donors providing an enabling environment.

We would like to see partnerships develop ahead of the next general election to provide prebunking and related services to the UK public. Greater digital and media literacy is also required to build resilience against online threats and tactics of manipulation in elections.

Action for government

Bring forward amendments to address problems and loopholes in the Online Safety Bill that may impact elections.

Establish a UK Critical Election Incident Public Protocol to secure public confidence in how elections are protected, given they are vulnerable to interference.

In consultation with civil society, explore what compact is required for internet companies to have policies and practices to support election integrity in the UK based on appropriate principles and transparency.

Action for parliamentarians

Establish the basis for post-legislative scrutiny of part 6 of the Elections Act and provide this after the next general election.

Action for social media companies

Work with government, regulators and civil society to agree on a common set of policies and action to protect electoral integrity online.

Action for regulators

The Electoral Commission should monitor and assess the effectiveness of the new digital imprints regime, and identify any further improvements that may be needed to ensure greater transparency around the origins of political or election-related electronic material.

Action for media and other civil society actors

Forge partnerships ahead of the next general election to help provide prebunking and related services to the UK public.

Part 4: Addressing bad information online through effective regulation

This Part of the report explores how the online UK information environment can be improved to tackle bad information in the context of the current Online Safety Bill, and how harmful misinformation should be addressed under that Bill and future law and regulation.

This follows on from our 2022 report Tackling online misinformation in an open society, where we set out our vision for how the emerging Bill should have approached the regulation of bad information.

In this year’s report we examine the Bill’s failures, including the issues created by the Government’s further changes during the Bill’s passage through the House of Commons. In particular we look at the implications of the decision to abandon attempts to address (non-criminal) content harmful to adults and the consequences that will have for tackling harmful health misinformation, as well as highlighting current failures to realise the potential that enhancing media literacy could have in raising people’s resilience to bad information.

Finally, we look ahead, highlighting the need to ensure that the inadequacy of the Online Safety Bill is not the end of the story when it comes to improving the information environment, and that the regulatory regime keeps pace with emerging challenges such as generative AI.

Chapter 9: Ensure the Online Safety Bill tackles bad information

Turn around the Bill’s failure to properly address harmful online misinformation and disinformation

Recommendation The House of Lords must amend the Online Safety Bill so that it effectively tackles the harms to society, democracy and individuals caused by bad information and protects our freedom of expression. This means ensuring that platforms have transparent health misinformation risk assessments and policies, are required to prefer content moderation options other than take down wherever possible, and provide access to data for researchers and civil society. Ofcom’s role and its advisory committee on misinformation and disinformation must be strengthened.


The Online Safety Bill was introduced to Parliament in March 2022. This overdue but essential legislation will impact each one of us.

The central purpose of the Bill was, and remains, to introduce a new regulatory system focussed principally on reducing harms arising out of the operation of social media platforms and search engines.

Unfortunately, despite a long process of policy development and consultation involving Green and White papers, and a Draft Bill presented for pre-legislative scrutiny, the Bill still lacks a credible plan for addressing harmful misinformation and disinformation online.

We cannot leave it to internet companies, with their commercial convenience and censoring instincts, to do what they like on harmful misinformation and disinformation. The House of Lords must seize this final opportunity to amend the Bill to ensure that we have a proportionate, transparent and effective regulatory regime that both addresses bad information and protects our freedom of expression.

Don’t let the Bill be a missed opportunity to address harm

Full Fact has long called for the Government to take action in this area. False and misleading information has circulated online for decades, causing real harm including to public health, public debate and public trust. We have described this in detail in various reports,[122] including the first year of the pandemic which made harmful misinformation apparent to all.[123]

As Full Fact set out during the period of pre-legislative scrutiny[124], in our Full Fact Report 2022,[125] and in our evidence to the Public Bill Committee[126], the Bill presented an opportunity to rework the systems that have too often failed in the face of harmful misinformation and disinformation.

We cannot go on relying on the internet companies to make decisions without independent scrutiny and transparency. Good legislation and regulation could make a significant difference in tackling dangerous online misinformation.

The Bill that was introduced to Parliament was a missed opportunity. Perhaps most fundamental is that the scope of the legislation has narrowed from earlier proposals so that the problems of misinformation and disinformation will for the most part not be directly addressed through this Bill.

The legislation that was ultimately introduced to Parliament focuses only on the prevention and mitigation of physical and psychological harm to individuals and eschews any ambition to address the harms to our society and democracy that can arise from the unregulated and opaque decisions and omissions of internet platforms. This is despite the Government’s own counter-disinformation toolkit[127] stating that:

"Manipulated, false and misleading information can:

  • threaten public safety
  • fracture community cohesion
  • reduce trust in institutions and the media
  • undermine public acceptance of science’s role in informing policy development and implementation
  • damage our economic prosperity and our global influence; and
  • undermine the integrity of government, the constitution and our democratic processes.”

What remains is a regime that is too narrowly focused and too structured around the regulation of individual pieces of content. As a result, addressing bad information online will, for the most part, still be left to the whim of platforms that are more likely to be accountable to shareholders in California than UK legislators in Westminster or a regulator in Riverside House.

The scope of the legislation should be revisited so that the Bill tackles the harms to our society and democracy – as well as the harms to individuals – as set out in the government’s counter-disinformation strategy. Unfortunately, the Government has taken the opposite approach, compounding the deficit through a series of further changes to the Bill.

Address the mistaken further narrowing of scope

In an attempt to head off criticism about the Bill’s approach to non-criminal content, the Government arranged for the Bill to be recommitted to Commons Committee stage, where it then made some significant changes to the adult safety provisions.

These changes involved:

  • Removing the requirement for platforms to assess the risk of harm to adults occurring through their platform (originally found in Clause 12).
  • Removing the requirement for platforms to be transparent about how they treat certain types of priority content harmful to adults (originally found in Clause 13).
  • Introducing new duties around consistent application of terms of service - but leaving it to platforms to decide what to address in those terms of service.
  • Amending the adult user empowerment duties (Clause 14) to allow users to filter out certain types of content (NB: misinformation will not be covered by the user empowerment duty).[128]

One of the consequences of these changes, and the removal of ‘priority’ content, is that one of the few areas of misinformation that would have fallen within the concept of content harmful to adults—harmful false health content—will now fall outside of the legislation’s remit. This runs contrary to the promise the Government made during the initial stages of the Bill’s passage.[129]

We discuss the consequences of a failure to address harmful health misinformation further in Chapter 10, but it is clear that an urgent rethink is required, particularly on the removal of the requirements to undertake risk assessments for harmful content. In practice many platforms will already undertake these sorts of assessments so they should be subject to proper transparency and regulatory oversight to ensure that they are adequate and can be scrutinised effectively. The Government must restore the requirement for companies to undertake adult risk assessments to the Bill, and ensure that platforms are required to have a clear policy on harmful health misinformation in their terms of service. It was therefore disappointing to see the Government reject the House of Lords Communications and Digital Committee’s recommendation that the adult safety risk assessments should be restored to the Bill.[130]

The illegal content provisions are not the solution to tackling harmful bad information

It is sometimes suggested that the solution lies in the illegal (i.e. criminal) content safety duties, but the majority of harmful misinformation that Full Fact sees is unlikely to be clearly identifiable as criminal in nature, and addressing it through the approaches for criminal conduct and content (which focus heavily on take down) would in many cases be inappropriate and risk disproportionately interfering with the freedom of expression of users.

The new false communication offence (Clause 151) is, for example, not an appropriate tool for moderating misinformation and disinformation content at internet scale. Assessing knowledge of falsity and criminal intent through platform algorithms presents risks of over-moderation of lawful content and raises freedom of expression concerns.

Nor does the solution lie with the proposed foreign interference offence in the new National Security Bill.[131] Although it is important that the regime ensures that platforms address foreign state-backed disinformation, the nature of this offence means that they will face similar challenges to those that arise with other offences, i.e. using machine learning to make accurate judgments about an individual's behaviour and intent, and (in the case of this particular offence) the involvement of a foreign state actor. More pertinently, the offence is of limited relevance to the majority of harmful online misinformation.

Rethink and rework the Bill to protect freedom of expression online

Concerns about how to regulate harmful content while best protecting the right to freedom of expression have rightly been a central focus of the debate around the Bill.

A regulatory regime which oversees the moderation of users’ content was always going to present challenges when it comes to ensuring that these rights are sufficiently protected. Unfortunately, the debate in this area has become increasingly polarised, and characterised by misunderstandings and misrepresentations about what the Bill does and does not do. This was most acute with the content harmful to adults provisions that the Government has now removed from the Bill.

Determining whether any regulation in this space is necessary and proportionate to the harm it seeks to address requires honest debate about what the provisions in question actually do and what already happens now in the absence of such regulation.

In essence the provisions originally in the Bill required platforms to risk assess potentially harmful content, be transparent about how they treat it, and then consistently apply that approach. This applied even if the approach was to allow that content on the service. The problems lay not with this principle of transparency, but a failure to properly and clearly define this harmful content, or more clearly set out what the expectations of platforms are in relation to such content.

Although there were legitimate concerns about the provisions given the lack of specificity of the definitions and reliance on unseen secondary legislation, they did not - unlike the sweeping provisions on illegal content - require the removal or censorship of content.

In the context of harmful misinformation the idea that any regulation of the approach to the treatment of bad information such as health misinformation can only result in mass censorship by platforms is a false dichotomy. It is also starting from a false premise, because internet companies already censor legal social media posts at a vast scale.

Regrettably, this is how the debate has often been framed. And the Government has taken the wrong path in response.

Although the new provisions on having regard to freedom of expression, and the consistent application of terms of service, may improve the situation to some extent, when it comes to how platforms tackle misinformation and disinformation the Bill should be more explicit. The Government should approach freedom of expression concerns by setting out the need for proportionate responses to those risks more clearly. We cannot leave it to internet companies, with their commercial and political incentives, and often censoring instincts, to do what they like.

There are a growing number of resources and methods that can be used so that restricting or removing such content should rarely be necessary (a simple illustrative sketch follows the list below). For example:

  • Ensuring that reliable information from authoritative sources is available on platforms.
  • Proactive provision of such information (such as the Covid-19 information centres Facebook and others established).
  • Friction-inducing initiatives (for example, ‘read-before-you-share’ prompts).
  • Labelling and fact checking to more clearly surface false information.
  • Better user control over the curation of information, and better human moderation.
  • Increasing the resilience of a platform’s users by taking steps to improve their media literacy.
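To illustrate the ‘last rather than first resort’ principle these options embody, the sketch below encodes a tiered response in which content restriction is never the default. The categories and thresholds are hypothetical, not drawn from the Bill or from any platform’s actual rules.

```python
# A minimal sketch of a tiered, least-restrictive-first moderation policy,
# reflecting the options listed above. Categories and thresholds are
# hypothetical, not drawn from the Bill or any platform's actual rules.
from enum import Enum

class Action(Enum):
    SHOW_AUTHORITATIVE_PANEL = 1   # surface reliable information alongside
    ADD_SHARE_FRICTION = 2         # e.g. a read-before-you-share prompt
    APPLY_FACT_CHECK_LABEL = 3     # link the post to an independent fact check
    ESCALATE_TO_HUMAN_REVIEW = 4   # restriction considered only as last resort

def choose_response(fact_checked_false: bool, projected_reach: int) -> Action:
    """Prefer content-neutral interventions, escalating only with risk."""
    if not fact_checked_false:
        return Action.SHOW_AUTHORITATIVE_PANEL
    if projected_reach < 10_000:
        return Action.ADD_SHARE_FRICTION
    if projected_reach < 1_000_000:
        return Action.APPLY_FACT_CHECK_LABEL
    return Action.ESCALATE_TO_HUMAN_REVIEW

print(choose_response(True, 50_000))  # Action.APPLY_FACT_CHECK_LABEL
```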

The Bill should make it expressly clear that we prefer content-neutral and free speech-based interventions over content-restricting ones: a solution of last rather than first resort. This requirement could be supported by an Ofcom code of practice on recommended approaches for proportionately reducing harm from misinformation and disinformation. Such a code should include the use of fact checking in proportion to reach and risk, along with other forms of mitigation which can help to protect people’s freedom of expression, including user control over curation and better human moderation.

The approach being pursued by the government is a significant misstep. It will further embed an approach to content moderation that is based primarily on whether or not to remove content (or, in the case of the user empowerment duties, to filter it out). This sort of binary approach is particularly inappropriate for dealing with misinformation where (as we have set out above) there is a range of other approaches available.

The scope of the news publisher provisions should be revisited

There remains a risk of unintended consequences arising from the Bill’s enhanced protections for the media, in particular because of the broad definition of “recognised news publisher” in Clause 50. Like many others, we are concerned that this definition may be too easy to meet, raising the potential for exploitation by those wishing to spread harmful disinformation by deliberately establishing themselves to benefit from an exemption and protections designed for legitimate news outlets.

This risk could be exacerbated by the Government’s subsequent introduction of the media ‘must carry’ duty in Clause 14, which prevents platforms from taking action on news publisher content without first consulting the publisher and giving them a chance to object to the action the platform proposes to take. The broad definition of ‘taking action’ in this context means that, as well as being prevented from taking down or restricting users’ access to content, platforms will not be able to take other action such as adding a warning label to content.

This means that a platform that identifies harmful content such as disinformation could not even temporarily label it while awaiting a response from the publisher. Given the concerns about these media protections being exploited to spread harmful content, and the speed at which that can occur, the scope and definitions in the provisions should be revisited.

Ofcom’s role must be clarified and strengthened

The Online Safety Bill, and the designation of Ofcom as the online safety regulator, present an opportunity for an independent public body to be given a proactive role as a strategic and day-to-day regulator with responsibility for identifying and addressing harmful misinformation and disinformation.

Perhaps symptomatic of the wider problems with the Bill’s focus and approach, this has not happened, leaving glaring gaps when it comes to protecting us from harmful bad information:

  • The Bill is too focussed on regulating the day-to-day online environment, with no clearer role for Ofcom, as online safety regulator, in responding to information incidents and crises.
  • The absence of opportunity for Ofcom to set the standards for proportionately reducing harm from misinformation and disinformation in ways that are compatible with people’s freedom of expression.
  • The lack of a new, stronger, media literacy duty for the regulator.
  • An Ofcom Advisory Committee on Disinformation and Misinformation with no identifiable powers or active role in tackling harmful misinformation.

As a regulator taking on major new responsibilities in a short timeframe, Ofcom will understandably face huge pressures and competing priorities. Without a legislative mandate and imperative, action on harmful misinformation will be drowned out.

Information incidents

Some of the gaps discussed above are covered elsewhere in this Part of the report. But a particularly big gap is the lack of provision about how the online safety regulator can and should respond to information incidents. Events such as terror attacks or pandemics can corrupt the information environment by increasing the complexity of accurate information, creating confusion or revealing information gaps, all of which can result in an increase in the volume of harmful misinformation and the speed at which it spreads, and in opportunities for malicious actors to spread disinformation. We describe these moments of heightened vulnerability as ‘information incidents’. They are often characterised by a proliferation of inaccurate or misleading claims or narratives which relate to, or affect perceptions of or behaviour towards, a certain event or issue happening online or offline.

Since 2020, Full Fact has been working with internet companies, civil society and governments to create a new shared model to fight crises of misinformation (the Framework for Information Incidents) to help decision-makers understand, respond to and mitigate information crises in proportionate and effective ways.

This sort of thinking now needs embedding in the new regulatory regime. Unfortunately, we do not think that harmful misinformation and disinformation that arises during periods of uncertainty, whether acutely, as during a terror attack, or over a longer period, as with a pandemic, is effectively dealt with in the Online Safety Bill. At present too much appears to be left to initiatives such as the Government’s Counter Disinformation Unit which, where they seek to counter such issues, operate without scrutiny or transparent oversight.

Although Clause 156[132] gives the Secretary of State powers of direction during certain ‘special circumstances’, those provisions simply allow the Government to mandate Ofcom to prioritise its media literacy function, or make internet companies report on what they are doing in response to a crisis. The provisions do little to meaningfully empower Ofcom itself, and risk undermining the regulator’s independence.

The Bill should provide for Ofcom to introduce a system whereby emerging incidents can be publicly reported, and different actors such as fact checkers, news organisations, community representation groups and service providers can request that Ofcom bring together a response group to discuss severity and response.

Strengthen the Advisory Committee on Disinformation and Misinformation and protect it from regulatory capture

The clause requiring Ofcom to establish an Advisory Committee on Disinformation and Misinformation[133] remains something of an outlier in the Bill. The Committee’s purpose is to advise Ofcom despite the fact that, as this Chapter sets out above, misinformation and disinformation have steadily been squeezed out of scope. As a result the Clause in its current guise seems to serve limited practical purpose, other than as a potential distraction from the wider failures of the Bill when it comes to harmful misinformation and disinformation.

The Committee must be more clearly empowered with an eye to the future of the regulatory regime in this space. We would like to see its role clarified and strengthened so that Ofcom receives the advice and input it needs to properly address issues of harmful misinformation and disinformation. In particular, its remit should be widened to expressly include the following:

  • Advising on and overseeing Ofcom research on the harms caused by disinformation and misinformation.
  • Reporting on the emerging patterns of behaviour driving misinformation and disinformation, how people interact with content, the causes of harmful information, and the proportionate responses to those issues.
  • Advising on the formation of relevant aspects of Ofcom’s codes of practice.

Although we recognise and emphasise the importance of collaborative responses to misinformation, including working with the internet companies, there are potential risks if internet company representatives sit on this Committee when part of its role is to advise Ofcom on what providers of regulated services should do. The presence of well-resourced platforms opens up a risk of regulatory capture that must be counterbalanced. To address this, Full Fact would like to see, as a minimum, protections put in place to ensure that the Committee is not chaired by a platform representative and that, where necessary, members of the Committee can hold discussions without platform representatives present. It will also be important to ensure that the Committee has access to the data and information necessary to advise Ofcom effectively.

Transparency and access to data must be improved

Access to good data about the operation of social media platforms is vital for holding internet companies to account, tracking the extent of online harms, and building understanding of those harms and how they might be addressed.

Although the Bill will grant Ofcom powers to request and obtain information to scrutinise the workings of platforms, access for the wider ‘ecosystem of inspection’—including academic and civil society institutions—is currently very limited. The Bill does nothing to address this problem and stands in stark contrast to other regulatory regimes such as the EU’s Digital Services Act.[134]

The result is that access to important safety-critical data will be left at the whim of the companies, which can remove or restrict such access at their discretion. See, for example, concerns about the potential withdrawal of Meta’s CrowdTangle tool.[135]

Too often it has taken a whistleblower or a tragedy to expose safety critical issues in the operation of these platforms.

It must not be left to the companies to decide whether information about the risks on their platforms is made available for public interest focussed research. The Bill should require companies to allow independently verified researchers and civil society organisations access to their data. This could be supported by Ofcom guidance.

Action for Government and Parliament

Urgently amend the Online Safety Bill to ensure that it properly tackles harmful misinformation and disinformation. This must include the following:

  • Revisiting the scope of the legislation so that it properly tackles the harms to our society and democracy—as well as the harms to individuals.
  • Restoring to the Bill the requirement for companies to undertake adult risk assessments.
  • Ensuring that platforms are required to have a clear policy on harmful health misinformation in their terms of service.
  • Protecting freedom of expression by amending the Bill so that it includes clearer provision on how harmful misinformation should be dealt with, including making it expressly clear that we prefer content-neutral and free speech-based interventions to tackle misinformation to content-restricting ones wherever possible (this should be supported by an Ofcom Code of Practice).
  • Giving Ofcom a clearer role in responding to information incidents, including introducing a system whereby emerging incidents can be publicly reported, and different actors such as fact checkers, news organisations, community representation groups and service providers can request that Ofcom bring together a response group to discuss severity and response.
  • Strengthening the remit and role of the Advisory Committee on Disinformation and Misinformation, including:
    • giving it a clearer role in advising on and overseeing Ofcom’s research on the harms caused by disinformation and misinformation,
    • identifying emerging patterns of behaviour and the proportionate responses;
    • making the Committee a statutory consultee on Ofcom codes of practice;
    • ensuring the Committee’s governance protects it from regulatory capture by platforms.
  • Requiring platforms to give independently verified researchers and civil society organisations access to data.
  • Strengthening the Bill’s approach to media literacy (see Chapter 11).

Chapter 10: Tackle harmful health misinformation

Government must prioritise addressing harmful health misinformation in online safety regulation and with a multifaceted set of responses and actors

Recommendation The Government must start taking harmful health misinformation more seriously by bringing it back within scope of the Online Safety Bill, and by ensuring that the UK is better prepared and equipped to collect and communicate good information during future health crises. Both social media platforms and traditional media outlets must play their role in counteracting bad health information more effectively.


Although misinformation is often seen as a social media phenomenon, health misinformation far predates popular use of the web.[136] Perhaps the most famous UK example comes from the 1998 study published in The Lancet which linked the MMR vaccine to autism in children.[137] Although this was later retracted, and refuted by the scientific community,[138] the widespread reporting of the study in print and broadcast media led to a continued public belief in the link between the vaccine and autism and a reduction in parents vaccinating their children against MMR,[139] a legacy that can still be seen in vaccine hesitancy in the 21st century.

What has changed since then is that the internet, and social media in particular, has fundamentally altered the way we communicate, share and receive information. Misinformation spreads more quickly and has far greater reach. Trusted authoritative information is shared on social media platforms alongside false, misleading or harmful content—sometimes indistinguishably—often struggling to compete with more emotive or sensational content. Alongside this, online communities such as the anti-vaccination movement flourish and grow without geographical constraint.

The consequence of these factors is that health misinformation now regularly finds high prevalence and popularity on social media.[140] At Full Fact we see this often: our team of fact checkers examines a wide range of claims about medical conditions, as well as about the health system more broadly (the NHS, social care, government funding of national health services and other matters of health policy).[141]

False and misleading information is, of course, not unique to content about health. In 2021, Full Fact found that one in two people reported being targeted with disinformation ‘often’, and that 74% of people were worried about the spread of misinformation and believed that false information online has a negative effect on democracy in the UK.[142]

At Full Fact we see first-hand how bad information can ruin lives. It promotes hate and damages democracy. But what makes health misinformation unique is the direct damage it can cause to people’s physical or psychological health.

Health misinformation that spreads at scale can introduce confusion, make it harder to distinguish truth from falsity, and distract from or undermine medical consensus. This was exemplified during the Covid-19 pandemic. As the virus spread across the world, a flurry of false and misleading information followed. We saw in real time the risks that can come when people do not understand how a virus is transmitted, or how to protect themselves from it. This was exacerbated by failures to provide or communicate information about causes and treatment, and by multiple changes in official advice.

Later in the pandemic we saw confusion and concern about the safety of the vaccine. For example, the initial lack of information about the safety of vaccines for pregnant women had lasting effects, with both women and vaccination centres receiving mixed messages, and pregnant women not being given second doses or thinking they needed to start their course again.[143]

The World Health Organisation (WHO) describes such scenarios, occurring during a disease outbreak, as an ‘infodemic’.[144]

But health misinformation goes far beyond Covid-19 and pandemics. We see a wide range of other types of health misinformation through our work. For example, we regularly fact check a range of health claims, including on:

  • sexual health—for example, on Mpox (previously known as ‘monkeypox’).[145]
  • children’s health—for example, on Strep A[146] or childhood vaccines[147].
  • cancer treatments.

Taking cancer treatment as an example, Full Fact regularly sees health misinformation relating to cancer risks, treatments and cures on social media. This could be posts falsely claiming that lemons treat cancer better than chemotherapy[148] or that tumours are ‘there to save your life’,[149] or unproven claims that cannabis oil cures cancer[150] or that rubbing hydrogen peroxide on your skin treats cancer.[151]

Posts like these can convince people to seek alternative treatments to cure their cancer and eschew treatment from medical professionals, or to rely on disproven theories or personal testimonies that cannot be verified.

As the cancer charity Macmillan states, no alternative therapies have ever been proven to cure cancer or slow its growth.[152] And Cancer Research UK has stated that one of the biggest risks in seeking alternative therapies is that an individual could postpone or decline evidence-based conventional treatments, which might otherwise prolong or even save a patient’s life.[153]

A range of solutions is needed from a wider set of actors

During the pandemic Full Fact and Meta (then called Facebook) co-hosted a virtual conference titled ‘Addressing Health Misinformation: Lessons from 2020’ to discuss the health misinformation challenges experienced in 2020 and to share best practice among internet companies, government, civil society and healthcare bodies. Following that conference Full Fact produced a short report highlighting eight recommendations[154] for all actors when tackling health misinformation challenges:

  1. Make good information available
  2. Use a range of trusted voices to communicate information
  3. Collaborate through sharing information
  4. Take action to suppress misinformation narratives
  5. Approach the problem holistically
  6. Monitor for future threats
  7. Put in place measures to build long-term resilience
  8. Invest in research for the future

These best practice principles still stand, and can have wider application than the Covid-19 pandemic. But they need embedding, and this takes leadership from a range of different actors.

A new system to regulate the social media platforms and search services of internet companies is key for addressing harmful health misinformation

The UK’s current drive to introduce regulation of social media platforms was the best opportunity to ensure an effective, transparent and consistently applied approach to tackling harmful health misinformation online. The failure in this area is discussed in more detail earlier in this report (see, for example, Chapter 9), but the Government’s decision to drop provisions dealing with content harmful to adults, and the reversal of the promise that this would include harmful health misinformation, is a major setback.

Rather than being required to have clear policies for dealing with health misinformation in their terms of service, platforms will be left to their own devices. The options available to them will continue to range from leaving such misinformation completely unmitigated to simply removing it at scale, all without appropriate regulatory oversight. This threatens not just people’s health, but their freedom of expression.

This dangerous U-turn must be reconsidered. The Government must revisit the Bill and ensure that each of the largest platforms is required to undertake risk assessments for harmful health misinformation on their platform, and then establish a clear and consistently applied policy for addressing it. As we have covered elsewhere in this report, this need not simply be about removing content from their sites.

The provision of good information from authoritative sources, fact checking, addressing the amplification of misinformation, and introducing friction that encourages users to pause before sharing are amongst the many proportionate and transparent responses that can help platforms balance protection of freedom of expression with preventing harm. The Bill should be explicit that these ‘content neutral’ solutions are to be preferred over removal wherever possible.

Similarly, enhancing the digital skills of platforms’ users to increase their resilience could be a major tool in the regulatory armoury, but it remains something on which the Bill is almost silent (see Chapter 11 on media literacy).

If the Government is not prepared to address this vacuum then Parliament must step in. Members of the House of Lords have one last chance to rectify the situation before the Bill becomes law.

The Government must be better prepared for the next health crisis

Addressing health misinformation during a health crisis— and the likely ‘infodemic’ that will accompany it—is particularly challenging. We wrote in detail in our 2021 report Fighting a pandemic needs good information[155] about how the Covid-19 pandemic completely changed the world we lived in, impacting our health, our work, our social lives and the wider economy. It laid bare the real harm that bad information can cause, and the risks society faces when there are barriers to good information.

Tackling a health crisis like a pandemic relies upon ensuring the better availability, accessibility and communication of good information. Unfortunately the UK’s response to the pandemic was hampered by long-standing failures in public data and communications systems. Years of neglect meant the country lacked good information when it mattered most.

For example, the pandemic exposed a black hole in the UK’s information on social care. As the coronavirus spread through care homes, the lack of easily accessible, aggregated data on the care home population became apparent. Better data on infections and deaths in care homes could have allowed real-time monitoring from the start of the pandemic, early detection of problems, and appropriately targeted interventions.[156]

The Government can do more to ensure the better availability and accessibility of good information by:

  • Making a clear commitment to long-term funding for data infrastructure and systems.
  • Establishing a horizon-scanning function for statistics led by the UK Statistics Authority.
  • Leading a programme to identify data gaps in areas of significant societal importance and then fill them.

Equally important is good communication by authoritative sources, particularly government and its public bodies. The pandemic again highlighted problems with the way information is often communicated. For example:

  • The initial narrative that the Government was “following the science” risked oversimplifying the process, while the daily briefings often brought so much data they were impenetrable.
  • Government ministers and departments issued conflicting and even inaccurate advice.
  • Ministers at times apparently attempted to paint a more positive picture by using misleading figures.
  • Responses to intermediaries like Full Fact were too often slow, unclear or inaccurate.

Clear, transparent communication from those in power is essential, both for immediate public understanding and to earn and maintain public trust. And this transparency is even more essential during a crisis. The recommendations that we make around backing up claims with evidence and correcting the record in Chapters 1 and 2 of this report will be equally as important when it comes to communicating information during a future health crisis.

Improve the evidence base through a greater commitment to research

Although there is plenty of research on the prevalence of health misinformation, evidence on the links between health misinformation and negative health outcomes is more limited. Some academics predict that the proportion of harm caused by health misinformation is likely to be higher than reported, given the rates at which people adhere to unofficial medical advice,[157] but this needs exploring further. We need greater research into the impacts that health misinformation can have, and into what effective, evidence-based and proportionate responses look like.

Numerous one-off research pieces are insufficient: funding and support for multi-year research and evaluation are needed to build the evidence base so that we can measure changes over time. In the absence of a clearer remit for Ofcom in tackling online misinformation, it is the Government that will need to provide the leadership and commitment to make this happen, working with various health bodies.

Platforms must take responsibility

If the Government maintains its current determination to abandon the fight against dangerous health misinformation through the Online Safety Bill, then the burden and responsibility will fall more heavily on others, first among them the social media platforms themselves, as it is there that the greatest power to address the issue lies.

During the pandemic many of the largest platforms took steps to improve the supply of high quality, relevant information from local official sources on their platforms, and announced specific action to reduce the amount of Covid-19 misinformation. Initiatives like Covid-19 factboxes embedded within platforms, redirecting users to authoritative sources within search results, and giving advertising credits to government and public bodies were used to help improve the supply of authoritative information to users.

Unfortunately, this will always be piecemeal without proper regulation. Platforms ultimately make their decisions on the basis of commercial imperatives.

The recent decision by Twitter to abandon enforcement of its Covid-19 misleading information policy is a warning of what can happen when platforms wash their hands of keeping their users safe and there is no regulatory safety net to step in. The abandonment of that policy—originally set up to help tackle misinformation in the pandemic—sets a worrying precedent, with researchers expressing concern that the change in the platform’s approach has led to a surge in the volume of toxic material, including anti-vaccine disinformation.[158],[159]

Currently, the approaches of different internet companies towards tackling health misinformation on their platforms vary widely. Some, like Facebook, LinkedIn, YouTube, Pinterest and Nextdoor, have freestanding health misinformation policies with varying degrees of detail and examples of prohibited claims.[160] YouTube has signalled its interest in promoting good health information by appointing a Director and Head of Health to generate high quality content through partnerships with public health bodies.[161]

Others, such as Reddit, TikTok and Snapchat, do not treat health misinformation differently from other types of misinformation under their community guidelines.[162] As laudable as some companies’ efforts are to articulate clearly what they do and do not want to see on their platforms, policies can sometimes be hard to find, and it is difficult or impossible to see how well or how often they are enforced. Based on our own experience fact checking online claims, many items of prohibited content escape the net.

The regulatory vacuum we are currently seeing in the Online Safety Bill is no excuse. Platforms themselves must show leadership and take responsibility for having clear and transparent policies on the treatment of harmful health misinformation on their platforms, and then apply them consistently. As we have set out, this must take the form of proportionate solutions that utilise a range of content-neutral measures. Simply relying on identifying and removing misleading health content at scale is not the answer, particularly given the limitations in moderating such content effectively and proportionately by algorithm.

Traditional media must understand the important role they still play

Traditional media also has an important role to play, particularly given the way its decisions can themselves have consequences online. Broadcasters in particular have a responsibility to consider their output, including how it could be exploited to spread misinformation. We saw this recently when the BBC News Channel allowed a guest to make claims about mRNA vaccines without further context or challenge. The BBC subsequently apologised,[163] but by then the interview had been clipped and viewed many millions of times on social media.

As the BBC acknowledged, it is right that broadcasters air a full range of views and opinions. But they must carefully consider the issues they cover and the guests they host, and ensure presenters and producers are ready to offer scrutiny and challenge on behalf of their viewers, particularly where guests diverge from medical consensus.

They can also play a role in ensuring that authoritative sources of information are clearly made available. Traditional media sources generally hold themselves to higher standards than many online information sources and can help to counteract and challenge harmful misinformation that occurs in far less regulated online spaces.

Action for government

Amend the Online Safety Bill to ensure that the largest platforms are required to undertake risk assessments for harmful health misinformation on their platform, and then establish a clear and consistently applied policy for addressing it.

Ensure that the UK is better prepared for the next health crisis by investing in better availability and accessibility of data and good health information.

Invest in research on health misinformation and its harms, and put in place adequate systems and coordination within and beyond the health ecosystem to reduce the risks.

Action for social media companies

Take responsibility, irrespective of any regulatory imperative, by establishing clear and transparent policies for the effective treatment of harmful health misinformation on their platforms, and then apply those policies consistently.

Action for media

Traditional media outlets must use their trusted status to help counteract and challenge harmful misinformation and ensure that they provide sufficient scrutiny and challenge where appropriate.

Chapter 11: Prioritise better online media literacy

Help protect people from harmful bad information online by ensuring they have the skills and understanding to spot and deal with it.

Recommendation Amend the Online Safety Bill to give Ofcom a refreshed and more focussed digital media literacy duty, and require the largest platforms to take steps to improve the media literacy of their users. Deliver a step change in the levels of funding being dedicated to promoting online media literacy by the government and the regulator.


Good media literacy is the first line of defence for us all against bad information online, giving people the ability to access, evaluate and use information critically and responsibly. It can make the difference between decisions based on sound evidence and those based on poorly informed opinions that can harm health and wellbeing, social cohesion, and democracy.

In the digital context, media literacy should be about more than understanding and being able to use technology. It must be about ensuring that UK citizens have the critical skills that allow them to question where information has come from, how it has reached them, and how they should use it.

In their report Digital Technology and the Resurrection of Trust, the House of Lords Select Committee on Democracy and Digital Technologies defined digital media literacy as “being able to distinguish fact from fiction, including misinformation, understand how digital platforms work, as well as how to exercise one’s voice and influence decision makers in a digital context.”[164]

On this measure we are failing. The UK has a vast media literacy skills and knowledge gap, as Ofcom’s own recent research[165] demonstrates:

  • A third of internet users were unaware of the potential for inaccurate or biased information online.
  • 30% of internet users didn’t even know – or did not think about – whether the information they find online is truthful or not.
  • Although seven in ten (69%) adult internet users said they were confident in judging whether online content was true or false, most were actually unable to correctly evaluate the reasons that indicate whether a social media post is genuine.

This vulnerability is often even more acute amongst young people. A survey conducted as part of the Commission on Fake News and the Teaching of Critical Literacy Skills in Schools[166] showed that a significant percentage of children and young people struggled to correctly identify fake news stories presented to them, with half of teachers feeling that the national curriculum does not equip children with the skills to do so.

The Government’s own Online Media Literacy Strategy states that research shows that UK internet users ‘lack the critical thinking skills required to spot online falsehoods’ and that there is a clear need to upskill users.[167]

This gap leaves a population of citizens inadequately protected against the harms of misinformation. This can have serious consequences, particularly when the misinformation in question poses a risk to people’s health or security.

Prioritise better digital and media literacy in the UK given the strong case to do so

An assessment of the available research in this area commissioned by Ofcom shows that three specific types of media literacy skill (critical thinking, evaluation strategies, and knowledge of the operation of news and media industries) have consistently been found to have positive effects on people’s ability to critically engage with misinformation.[168] That assessment also demonstrated that studies consistently identified perceptions of source credibility (trustworthiness and believability), and the ability to critically evaluate the quality of sources, as important factors that underpin effective media literacy skills and influence attitudes towards misinformation.

The realities of this were demonstrated during the Covid-19 public health crisis, where research showed that those with higher digital literacy were better at assessing the veracity of health-related statements.[169]

Enhancing people’s resilience is also important because it is increasingly clear that we cannot rely on the actions of platforms alone. When bad information disseminates at scale, platforms are heavily reliant on the use of algorithmic rather than human-based content moderation. Experience during the Covid-19 pandemic showed that increased reliance on algorithms led to substantially more content of this type being incorrectly identified, posing risks not just to people’s health but also to their freedom of expression. The subtleties and context dependent nature of misinformation presents challenges for automated systems, particularly when it relates to new phenomena such as Covid-19.[170]

The causes of vulnerability to misinformation are complex and multi-factored and will depend both on the traits of individuals and of the nature of content they are exposed to. The response too must therefore be multifaceted. Well designed, regulated and transparent content moderation approaches (including ensuring human involvement in decisions) will play a role, but enhancing media literacy must also be an important tool in that armoury if we are to increase users’ skills and resilience.

Deliver solutions with clearer legislation and better resources

Although Full Fact is supportive of the Government’s Online Media Literacy Strategy, it is far stronger on diagnosis than on setting out action to deliver a cure. There remains a deficit of leadership in this area, and responsibility for ensuring action remains too fragmented. There are a number of important players, including the platforms themselves, but stronger leadership and a more cohesive focus are needed to deliver effective digital media literacy programmes that can be applied at sufficient scale to make an impact, and to ensure that platforms help their users become sufficiently upskilled. This will require a robust, modernised legislative framework and a step change in the level of resources dedicated to enhancing media literacy in the UK.

Reset the essential clear legislative underpinning

Back in 2003 the Communications Act gave Ofcom a duty to promote media literacy.[171] That duty was designed for a different world. It provides little by way of pressure or expectations about how the regulator delivers on its duty. Nor does it reflect the huge forthcoming expansion of Ofcom’s role as the regulator of online services. The Online Safety Bill presented an opportunity for that to change.

Initially the signs were promising. The Draft Bill that was presented for pre-legislative scrutiny contained a proposed new media literacy duty for Ofcom (Clause 103) to replace the existing one in section 11 of the Communications Act. As well as updating the duty for the modern online era, the proposals included additional provisions requiring Ofcom to carry out, commission or encourage educational initiatives designed to improve the media literacy of members of the public, and to prepare guidance on evaluating media literacy related initiatives.

Although this was a welcome move, there was an opportunity to go further, and the Joint Committee tasked with scrutinising the draft legislation recommended that the approach to media literacy in the Bill should be strengthened, including making Ofcom responsible for setting minimum standards for media literacy initiatives.[172]

Contrary to these recommendations, the Government chose instead to drop the new media literacy duty from the version of the Bill that was introduced to Parliament. Rather than strengthening the approach the Bill now contains no active requirements for either the regulator or internet companies when it comes to improving media literacy.

The Government’s justification for this move was that the clause would have created unnecessary regulation and that it is no longer needed now that Ofcom has published its new Approach to Media Literacy.[173] This presents two obvious issues.

Firstly, this approach means that the statutory obligations will remain unchanged almost 20 years on, despite the advances since 2003 and the extensive recasting of Ofcom’s regulatory responsibilities under the Online Safety Bill.

Secondly, the lack of enhanced statutory obligations will leave the regulator free to reduce or modify its media literacy activities at any time, potentially reverting to the sort of approach that prompted the Government to include a new statutory duty in the Draft Bill in the first place.

As Ofcom takes on its new statutory obligations as the online safety regulator it will inevitably and understandably face pressures on its time, resources and budget. Without greater statutory underpinning there is a risk that media literacy remains a second order priority. This would be a mistake. The draft Bill’s media literacy provisions needed to be strengthened, not cut.

The Government should introduce a new, stronger media literacy duty for Ofcom, based on specific objectives including building resilience to misinformation and disinformation. This should be supported by new statutory obligations requiring the regulator to produce a strategy for delivering on the new duty, and to report on the progress being made.

Require social media platforms to play a role

It is not just the regulator that must play a role under the legislation. At present the Bill places no requirements on the platforms themselves when it comes to enhancing the media literacy of their users or increasing their ability to use the platforms safely.

The Bill should also be amended to require the largest platforms to protect their users by increasing their media literacy, so that they understand how the platform works, and how they can identify and deal with bad information they encounter. This could include using the functionality of the service to ensure that users are better equipped to establish the reliability and accuracy of content that they encounter on the service, and understand how to locate accurate and impartial information from authoritative sources (on the service or elsewhere). Such a duty should be supported by an Ofcom code of practice with recommendations about how platforms can best comply.

Increase resourcing for digital and media literacy for the step change needed

As the Government’s own strategy sets out, it is more important than ever that citizens have access to, and are engaging with, media literacy support to stay safe online. It is important to acknowledge that the second year (2022/23) of the Government’s ‘Action Plan’ has seen funding for its media literacy programme increased to £2 million. But in reality this is a small sum given the scale of the challenge. A much more significant uplift in resourcing is required to meet need and demand, and to ensure that swathes of the population are not left at unnecessary risk of harm.

It is less clear what resources are being directed to this work by Ofcom or how that is likely to change when Ofcom starts to receive funding to cover its new regulatory responsibilities. Ofcom's Annual Report and Accounts[174] do not provide a breakdown of the regulator’s media literacy spend. Neither does the proposed plan of work for 2023/24[175] provide any indication of the levels of funding that will be allocated going forward.

Ofcom must allocate sufficient funding to the performance of its media literacy duties going forward and this must be set out transparently so that people can properly assess the regulator’s approach. Given the regulator’s soon to be expanded remit, and the huge developments that have occurred in the digital age since the original duty was introduced almost 20 years ago, that will require a step change in the levels of resources currently being dedicated by the regulator.

As we have set out above, Full Fact believes this should be underpinned with clearer and stronger statutory requirements in the Online Safety Bill, but if this does not happen then Ofcom must ensure that it properly ring-fences funding for the delivery of an enhanced media literacy strategy under a combination of its existing media literacy framework and new online safety duties.

Action for Government and Parliament

Amend the Online Safety Bill to introduce a new harm-based media literacy duty for Ofcom with clear objectives, along with a statutory strategy for delivering on it.

Amend the Bill to require the largest platforms to promote media literacy and ensure that their users are able to use the services safely.

Provide a step change in the levels of funding being dedicated to promoting media literacy.

Ensure that media literacy is given higher cross-government priority so that the relevant departments (particularly DSIT, DCMS and DfE)[176] co-operate effectively, both interdepartmentally and with the regulator, and with demonstrably better results.

Action for the regulator

Ofcom should bring greater clarity to its intended results in digital and media literacy (and not simply set out what activities and outputs it will produce).

Ofcom should be more transparent on the resources it is dedicating to media literacy and demonstrate how it will increase that resourcing year on year.

Chapter 12: Make the future online regulatory framework work to address harmful misinformation

A proactive approach is needed to make the most out of the forthcoming regulatory framework while ensuring that it is improved to better address bad information in timely and effective ways

Recommendation The Government should bring clarity to its regulatory agenda on harmful misinformation, including around the risk of generative AI, and give full space to Ofcom around online safety regulation. The regulator should make full use of all the tools and levers available to it under the new regime as well as pushing for the regulatory framework improvements that will be needed to better address harmful misinformation and disinformation.


The failures of the Online Safety Bill on misinformation and disinformation must be acknowledged

As we have said in more detail earlier in this report, despite experiences such as the pandemic and the invasion of Ukraine, the Online Safety Bill will not effectively tackle harmful misinformation and disinformation. This risks continued harm to individuals, the undermining of public health, and long-term damage to public debate.

Nevertheless, it is important not to see the Online Safety Bill as a single moment for online regulation, at which there is one shot at properly addressing all of the issues relating to online platforms. Most regulatory regimes rely on incremental adjustments made in light of increased understanding of real world impact. If the Government fails to properly regulate harmful misinformation and disinformation now through the Bill, then those who understand the issues must maintain the pressure in the knowledge that future opportunities will emerge or need to be created to address failings in the regulatory regime.

The nature of online platforms, and the speed at which the technology and new business models emerge and grow, means that flexibility will be even more important when it comes to regulating this sector. The system must remain dynamic, and this and future governments must keep the legislation underpinning it under constant review.

Ofcom must play a vital role in understanding what changes may be needed

The regulator should ensure that the regulatory system adapts effectively. The steady narrowing of the draft legislation means that Ofcom’s remit is ultimately likely to be curtailed to some degree when it comes to online misinformation and disinformation. However, its role as the regulator of online platforms will give it unrivalled exposure to, and understanding of, the online ecosystem.

Ofcom must use its transparency reporting powers to gain as much insight as possible into the extent of, and emerging patterns in, online misinformation, the nature and extent of the associated harm, and how platforms are addressing it.

Ofcom will also need to ensure that its research functions are appropriately targeted, and that the potential of the Advisory Committee on Disinformation and Misinformation is maximised to ensure that Ofcom is getting clear and timely advice not just about the exercise of its existing regulatory functions, but also the wider problems that may exist or be emerging beyond its regulatory perimeter.

Ofcom must not be afraid to use this insight to make recommendations to government about what adjustments would be needed to the regulatory regime to more effectively tackle harmful misinformation and disinformation, including the powers Ofcom would require as regulator. This will also require Ofcom to ensure that it transparently commits adequate funding to support this work and that setting up the advisory committee is prioritised within its plan of work.

The next government will have unfinished business on online regulation

No credible actor has seen the Online Safety Bill as future-proof.

Labour said at the start of the year that it would attempt to amend the Online Safety Bill during its remaining passage through Parliament to bring it closer to its original form but that, should that effort not work, an incoming Labour government would legislate as soon as possible after the next general election to address problems with harmful material no longer addressed under the Bill.[177] This commitment also covers increasing the regulator’s powers to hold companies accountable beyond simply setting their own terms and conditions, including on algorithms and transparency.[178] This intention has been stated in both the House of Commons[179] and the House of Lords.[180]

Whichever party or parties form the next government after the general election, the need for better regulation to tackle harmful misinformation and disinformation will not go away. In part this will be due to developments elsewhere.

Make the most of progressive regulation and measures at EU level

UK regulation will not exist in isolation. The emergence of regulatory frameworks in other countries and in particular the Digital Services Act (DSA) in the European Union (EU)[181] represents huge changes in the way that platforms are regulated across the world. As the gatekeeper to a large and affluent market of 450 million people, the EU can leverage its role and intends to be a leader in regulating the digital sphere, with other countries following its example. The Online Safety Bill has some similarities with the Digital Services Act: both take a risk and mitigation approach to regulation, and both intend to place proportionate obligations on platforms depending on their size.

The EU began to increase internet platforms’ accountability for online disinformation with the introduction of the Code of Practice on Disinformation in 2018: a voluntary Code developed and signed by online platforms, advertisers, fact-checkers, researchers and civil society organisations.[182][183] The 2022 iteration of the Code obliges some of its signatories to work with fact-checkers to extend fact-checking coverage across all EU member states.

This means that services which host user generated content have committed to integrating fact checks on their platforms, for example via labels, information panels or policy enforcement, explicitly including programmatic advertising systems and video content. Moreover, the Code works towards ensuring fair financial contributions for fact-checkers’ work, and better access for fact-checkers to information that facilitates their daily work, such as impact metrics.

The Code was updated in 2022 and will become part of a broader regulatory framework that encompasses legislation on transparency and targeting of political advertising, and, for larger platforms, the intention is that it will become a Code of Conduct recognised under the DSA.

The DSA has entered into force, and will become fully applicable in every EU member state by February 2024. It establishes that very large online platforms (those with more than 45 million users) will fall under the jurisdiction of the European Commission rather than their country of incorporation. Many of the DSA’s articles are focused on illegal content, but the assessment and mitigation of systemic risks arising out of legal but harmful content is also in scope through Articles 34 and 35. Very large online platforms must take effective measures to mitigate systemic risks and apply their own terms and conditions, can be invited to participate in Codes of Conduct including the Code of Practice on Disinformation, and must participate in a crisis response mechanism.[184]

The Code of Practice on Disinformation, whilst voluntary, will have regulatory traction through its interaction with the DSA. Compliance with the Code will be considered by the European Commission when evaluating whether very large online platforms and search engines are taking the effective risk mitigation steps the DSA mandates or whether they should face fines.

Civil society organisations and counter-disinformation experts in the EU have been working to ensure that no ‘media exemption’ from content moderation was given to media outlets, arguing that media should be equally accountable when they disinform and that the proposed self-declaration process allowed a loophole for rogue actors to gain protection.[185] However, the case is not fully closed: a consultation on the European Media Freedom Act has opened the debate again, with civil society organisations warning that Article 17 risks bringing back a media exemption.[186],[187]

In February 2023, the signatories of the Code of Practice on Disinformation delivered their first baseline reports on implementation[188] in the new Transparency Centre[189]. Twitter has already been singled out for providing an insubstantial submission: the Commission has announced that Twitter’s report is short of data and has no information on its commitments to empower the fact-checking community.[190]

Under the DSA, companies the Commission designates as very large online platforms or search engines must comply with their obligations, including carrying out and providing their first annual risk assessment exercise, by June 2023.[191]

Whilst the UK implications of the DSA remain somewhat uncertain, platforms may decide to do in the UK what they do in the EU in many areas, in part simply because it makes their operations easier. However, in some cases the lack of a UK obligation may see them sidestep action. A platform in the EU having regulatory incentives to work better with fact checkers, in a way that does not exist in the UK, is just one example of where the environment for promoting good information is less developed in the UK regulatory landscape. Full Fact is working with allies across Europe, both in and outside the EU, and will press for effective policy adoption in the UK based on developments elsewhere.[192]

Be ready for the risks and benefits of generative AI

When ChatGPT became the fastest-growing consumer app in history, fears were raised that such systems could easily be misused, including for disinformation, and that the spread of harmful misinformation on a massive scale would be a consequence. This part of a much wider public debate on AI must come to the fore.

In highlighting the risks of AI, Full Fact CEO Will Moy recently told the House of Commons Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation inquiry on misinformation and trusted voices that “the ability to flood public debate with automatically generated text, images, video and datasets that provide apparently credible evidence for almost any proposition is a game changer in terms of what trustworthy public debate looks like, and the ease of kicking up so much dust that no one can see what is going on. That is a very well-established disinformation tactic.”[193]

As Mira Murati, chief technology officer at OpenAI, told TIME magazine in February, it is not too early for policymakers and regulators to get involved.[194] Indeed, internet platforms and policy makers need to consider how they interact with these developments from here on in. Full Fact believes it is important for choices to be made in a clear and transparent way with democratic oversight to aid audiences’ trust in the choices being made, with any models and technology used being made available to independent researchers for independent review.

Alongside platform and policy oversight, the role of citizens should be at the forefront as active stakeholders. Media literacy will be a core part of how we adjust to the evolution of technologies. Ensuring that credible effective programmes are available, and that they are evaluated and can evolve rapidly, is no small task and will require considerable resource and coordination. The recent context on media literacy (see Chapter 11) raises concerns about the degree to which efforts to ramp up media literacy are commensurate with the transforming information environment.

In a multi-sector, multi-jurisdiction space like generative AI, we need citizen-supporting initiatives on a new scale, and opportunities for the public and civil society to shape related policy in the UK, including on misinformation and disinformation.

Thierry Breton, the European Commissioner for the Internal Market, said in February that proposed EU rules regulating AI will tackle concerns around the risks of products like ChatGPT, and that a solid regulatory framework is needed to ensure trustworthy AI based on high-quality data.[195] The Commission is working with the European Council and European Parliament on the legal framework for AI. Under the draft EU rules, general purpose AI systems, including generative ones, are assessed according to the levels of risk they may present, which will determine what compliance with the proposed AI Act is required.

The UK’s proposed approach to AI regulation was set out in its July 2022 policy paper,[196] but that paper made no reference to generative AI. The UK Government’s strategy has been due to be set out in a White Paper that was promised in late 2022 but remains ‘forthcoming’.

A written question in February 2023 asked whether predictive text engines such as ChatGPT and Google’s LaMDA-based Bard are to be within the scope of the Online Safety Bill and, if not, what other measures the Government will introduce to hold companies responsible for the operation of such software. The Government minister answered that the Bill ‘will apply to companies which enable users to share content online or to interact with each other, as well as search services. Content generated by artificial intelligence ‘bots’ is in scope of the Bill, where it interacts with user-generated content, such as on Twitter. Search services using AI-powered features will also be in scope of the search duties outlined in the Bill’.[197] However, since the Government has narrowed the scope of the Bill, many foreseeable harms from content derived from generative AI will not be covered, particularly in the area of misinformation and disinformation. In this sense, the Online Safety Bill is no longer future-proof when it comes to online safety.

The Online Safety Bill is one of a number of pieces of legislation underway or announced that will impact on the UK information environment. From the Digital Markets, Competition and Consumers Bill to AI regulation and beyond, the present legislative and regulatory backdrop includes potential law and regulation that will or could address problems in our information environment. Upcoming laws on data and digital should not become missed opportunities for proportionate responses to bad information, and the challenges must not be sidestepped again.

Action for government

The Department for Science, Innovation and Technology (DSIT) should ensure that there is an early post implementation review of the legislation to assess how well it is working, and to identify areas where the regime needs to be further developed or improved. This should include horizon scanning for future threats that would not be addressed under the existing regime.

The UK’s proposed approach to AI regulation must explicitly address challenges around generative AI, including how it intends to take forward solutions to the associated risks around harmful misinformation and disinformation. These must be formed in consultation with citizens and civil society. The models and technology being used also need to be made available to independent researchers for independent review.

Action for Ofcom

The regulator must utilise its powers under the new regime to ensure it has maximum sight of the issues arising on platforms when it comes to harmful misinformation and disinformation, and use that to understand and explain how the regulatory framework can best be improved to address them.

Action for Parliament

The new House of Commons Science, Innovation and Technology Committee should prioritise work on the future of the online regulatory framework and harmful misinformation, building on previous relevant inquiries, including that on AI. It should think ahead about the role of government, learn from other jurisdictions, and consider what scrutiny of DSIT is required on these issues.


References

[5] Full Fact, 9 March 2022, ‘Government has not backed up Russian sanction claims’. fullfact.org/economy/russia-ukraine-bloomberg-sanctions

[6] Full Fact, 1 February 2023, ‘Government fails to back up PM’s claim that A&E patient flow is 'faster than ever'’. fullfact.org/health/Rishi-Sunak-patient-flow-discharge-rates

[7] Fighting the causes and consequences of bad information: the Full Fact Report 2020. fullfact.org/blog/2020/apr/full-fact-report-2020

[13] National Centre for Social Research, Public Confidence in Official Statistics 2021, April 2022. natcen.ac.uk/publications/public-confidence-official-statistics-2021

[14] Office for Statistics Regulation, Regulatory guidance for the transparent release and use of statistics and data, February 2022. osr.statisticsauthority.gov.uk/publication/regulatory-guidance-for-the-transparent-release-and-use-of-statistics-and-data

[15] Office for Statistics Regulation, Code of Practice for Statistics: What is voluntary application? code.statisticsauthority.gov.uk/voluntary-application

[16] Full Fact, 12 October 2022, ‘Evidence for claim that 60% of small boat arrivals are Albanian not yet published’. fullfact.org/immigration/home-office-albania-small-boat-crossing-60-percent

[17] Letter from Full Fact to the Home Secretary, 13 December 2022. fullfact.org/media/uploads/priti_patel_letter_small_boats_claim_12_2022.pdf

[18] Full Fact, 22 November 2022, ‘No published data to support minister’s claim about migrants saying they’re under 18’. fullfact.org/immigration/robert-jenrick-fifth-male-migrants-under-18

[24] Full Fact, 8 March 2022, ‘£24bn in extra defence spending will be spread over four years’. fullfact.org/economy/2022-defence-spending-increase

[25] Full Fact, 19 October 2018, ‘Is £84 billion being spent on the NHS?’ fullfact.org/health/84bn-spent-NHS

[26] Full Fact, 7 July 2020, ‘The government’s education funding figures need context’. fullfact.org/education/school-funding-figures-context

[29] Full Fact, 23 December 2021, ‘Priti Patel’s tweet on police funding doesn’t account for inflation’. fullfact.org/crime/priti-patel-home-office-police-funding-tweet

[31] Code of Practice for Statistics, Edition 2.1 (as revised 5 May 2022). code.statisticsauthority.gov.uk/wp-content/uploads/2022/05/Code-of-Practice-for-Statistics-REVISED.pdf

[37] Liaison Committee: Oral Evidence from the Prime Minister, 30 March 2022, HC 1211. committees.parliament.uk/oralevidence/10037/default

[39] House of Commons, Procedure Committee, Corrections to the Official Report, Second Report of Session 2006–07, publications.parliament.uk/pa/cm200607/cmselect/cmproced/541/54104.htm#a10

[40] Correspondence between Full Fact and House of Commons library, July 2022.

[41] On 6 July 2022, Boris Johnson MP repeated the false claim that Sir Keir Starmer MP had voted 48 times to take the UK back into the European Union.

[42] On 5 January 2022, Boris Johnson MP wrongly claimed that the Government supports 2.2 million households with a £140-a-week discount on energy bills.

[43] On 15 December 2021, Boris Johnson MP falsely stated that the percentage of the UK population who have received a booster vaccine is double that of any other European country.

[44] Timeline of economic bad information, 2021–22: Here to lead, not mislead. fullfact.org/media/uploads/employment_claim_timeline_-_detailed_with_sources-4.pdf

[45] House of Commons Liaison Committee, oral evidence from Boris Johnson MP, 30 March 2022. committees.parliament.uk/oralevidence/10037/default

[46] Full Fact, 7 February 2023, ‘No evidence that Just Stop Oil “bankrolls” the Labour Party’. fullfact.org/news/just-stop-oil-funding-labour-pmqs

[47] House of Commons Code of Conduct and Guide to Rules parliament.uk/business/publications/commons/hoc-code-of-conduct/

[48] Public Health debate, House of Commons, Hansard 14 December 2021 hansard.parliament.uk/commons/2021-12-14/debates/8034393B-C568-4DE6-8695-1D63F957537E/PublicHealth

[50] Full Fact Public Attitudes Research June 2021 fullfact.org/media/uploads/full_fact_report_121021.pdf

[52] Ipsos Issues Index: June 2022 ipsos.com/en-uk/ipsos-issues-index-june-2022

[53] Compassion in Politics, 2022, End the lies, change.org/p/uk-parliament-end-the-lies

[54] The Committee on Standards in Public Life, November 2021, Upholding Standards in Public Life, Final report of the Standards Matter 2 review, assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1029944/Upholding_Standards_in_Public_Life_-_Web_Accessible.pdf

[55] House of Commons Procedure Committee inquiry on correcting the record committees.parliament.uk/work/6794/correcting-the-record/publications

[56] Written evidence submitted by Full Fact to the Procedure Committee’s inquiry on correcting the record, September 2022 committees.parliament.uk/writtenevidence/111164/pdf

[57] House of Commons, Committee on Standards, 23 November 2021, Review of the Code of Conduct: proposals for Consultation, Fourth Report of Session 2021–22, committees.parliament.uk/publications/7999/documents/82638/default

[58] Full Fact, 12 January 2023, ‘Andrew Bridgen wrong to call mRNA vaccines gene therapy’. fullfact.org/health/andrew-bridgen-gene-therapy-vaccines

[60] Tweet of 18 November 2022 (making correction). twitter.com/KarlTurnerMP/status/1593589100518395905

[61] Full Fact, 10 October 2022, ‘Nadhim Zahawi wrong to say Moderna booster protects against both Covid-19 and flu’. fullfact.org/health/nadhim-zahawi-covid-flu-booster

[62] Tweet of 10 October 2022 (with correction). twitter.com/nadhimzahawi/status/1579487343391956992

[63] BBC Corrections and Clarifications - Archive 2022 bbc.co.uk/helpandfeedback/corrections_clarifications/archive-2022

[64] BBC iPlayer, ‘Sunday with Laura Kuenssberg’ (aired 9 October 2022). bbc.co.uk/iplayer/episode/m001d0gb/sunday-with-laura-kuenssberg-09102022

[65] Full Fact, 26 September 2022, ‘Liz Truss wrong to claim ‘no household’ will pay more than £2,500 on energy bills.’ fullfact.org/economy/Truss-energy-price-guarantee

[66] Full Fact, 29 September 2022, ‘Liz Truss wrong to repeatedly say energy bills are capped at £2,500’. fullfact.org/economy/liz-truss-energy-price-cap-2500

[67] Full Fact, 26 October 2022, ‘How the media misreported the '£2,500 energy bill cap'’. fullfact.org/economy/energy-price-guarantee-misinformation-september

[68] Uswitch, 22 September 2022, ‘Price Guarantee Confusion: 40% of households wrongly believe their energy bill can't exceed £2,500’. uswitch.com/media-centre/2022/09/price-guarantee-confusion

[69] It is possible that the individual BBC Radio stations issued on air corrections that we are not aware of. No correction was issued on the BBC’s Corrections and Clarifications page of its website. bbc.co.uk/helpandfeedback/corrections_clarifications

[72] In 2019 Full Fact fact checked manifestos by the Conservatives, Labour, the Liberal Democrats, the Scottish National Party, the Brexit Party and the Green Party. Our coverage of that election can be accessed here: fullfact.org/election-2019/all

[77] One of the key pledges of the 2019 Conservative manifesto, to build “40 new hospitals”, has been widely contested since, including on the basis of what constitutes a new hospital. fullfact.org/election-2019/conservative-manifesto-2019; fullfact.org/health/48-new-hospitals

[78] The 2019 Conservative manifesto promised 50,000 more nurses, although it didn’t say when this would be delivered (nor, as was reported after the manifesto was launched, that the figure would include many thousands of existing nurses who would be encouraged to remain). fullfact.org/election-2019/conservative-manifesto-2019

[79] For example, in 2019 Labour spoke of 96,000 vacancies in the NHS, which does not mean that no one was doing those jobs (NHS Improvement had previously said that between 90% and 95% of these vacancies were being filled by temporary staff). fullfact.org/election-2019/labour-manifesto-2019

[80] For example, Labour said in its 2019 manifesto that “recorded crime has risen, including violent crimes”. While it was true that the number of crimes and violent crimes recorded by the police had risen, those figures did not reflect what was really happening: they largely reflected improved recording practices. Overall levels of crime were broadly stable, and there had been little change in overall levels of violent crime, although some rarer but higher-harm offences like knife crime had shown signs of increasing. fullfact.org/election-2019/labour-manifesto-2019

[81] Neither Conservatives nor Labour are properly spelling out consequences of their policy proposals, IFS (2017) ifs.org.uk/news/neither-conservatives-nor-labour-are-properly-spelling-out-consequences-their-policy-proposals

[82] CPB Netherlands Bureau for Economic Policy Analysis, ‘What we do’. cpb.nl/en/what-we-do

[83] The Budget Responsibility and National Audit Act 2011, c.4. legislation.gov.uk/ukpga/2011/4/schedule/1/enacted

[84] Asking the OBR to cost manifestos could make sense – but would be complicated, Institute for Government, 19 November 2019 instituteforgovernment.org.uk/blog/it-makes-sense-ask-obr-cost-manifestos

[85] Institute for Government, Costings of opposition policies are legitimate – but not during an election campaign, 6 November, 2019, instituteforgovernment.org.uk/article/comment/costings-opposition-policies-are-legitimate-not-during-election-campaign

[86] Whilst a full system may not be in place in time for the next election, a commitment ahead of the country going to the polls would demonstrate openness to being held to account.

[87] Full Fact, 13 December 2019, ‘General Election 2019, fact checked’. fullfact.org/blog/2019/dec/general-election-2019-fact-checked

[88] The Electoral Commission, 2020, UK Parliamentary General Election 2019, electoralcommission.org.uk/sites/default/files/2020-04/UKPGE%20election%20report%202020.pdf

[89] The Coalition for Reform in Political Advertising, Illegal, Indecent, Dishonest and Untruthful: How political advertising in the 2019 General Election let us down, December 2019. reformpoliticaladvertising.org/wp-content/uploads/2021/05/Illegal-Indecent-Dishonest-and-Untruthful-The-Coalition-for-Reform-in-Political-Advertising.pdf

[90] Report of the 2022 work of the Election Advertising Review Panel convened by Reform Political Advertising, of which Full Fact was a member. cms.reformpoliticaladvertising.co.uk/wp-content/uploads/2022/05/Reform-Political-Advertising.-COST-OF-LYING-CRISIS-1.pdf

[91] Full Fact, 21 April 2022, ‘Labour election leaflets and ads wrongly claiming families are ‘£2,620 worse off’’, fullfact.org/economy/labour-election-leaflets-2620-cost-of-living.

[92] Full Fact, 8 April 2022, ‘Keir Starmer wrong to say families will be £2,620 worse off this year’ fullfact.org/economy/labour-election-leaflets-2620-cost-of-living

[93] Full Fact, 13 September 2019, ‘The ‘headline’ on this BBC article linked to by a Conservative Party Facebook ad isn’t the real headline’. fullfact.org/news/conservative-ad-headline

[94] There is a slightly different system for ads on broadcast media (TV and radio) which are regulated by ASA under a contract with Ofcom. However, this aspect of regulation is not relevant here—because political advertising is banned on broadcast media.

[95] The UK Code of Non-broadcast Advertising and Direct & Promotional Marketing (CAP Code) rule 7 Political advertisements asa.org.uk/type/non_broadcast/code_section/07.html

[98] House of Lords Select Committee on Democracy and Digital Technologies, Digital Technology and the Resurrection of Trust, Chapter 2., 29 June 2020, publications.parliament.uk/pa/ld5801/ldselect/lddemdigi/77/7706.htm#_idTextAnchor013

[99] The New Zealand ASA is the organisation that sets the standards for responsible advertising in that country asa.co.nz

[102] Reform Political Advertising, YouGov data shows 87% of the UK public support rules for factual claims in political ads, 13 December 2019, reformpoliticaladvertising.org/yougov-data-shows-87-of-the-uk-public-support-rules-for-factual-claims-in-political-ads

[103] Full Fact, 6 November 2019, ‘Lib Dem leaflet falsely attributes pro-Lib Dem quote to the Guardian.’ fullfact.org/news/lib-dem-leaflet-false-quote

[104] BBC, 20 November 2019, ‘Election debate: Conservatives criticised for renaming Twitter profile “factcheckUK”’. bbc.com/news/technology-50482637

[105] The Electoral Commission, 20 November 2019, ‘Statement on @CCHQPress Twitter rebrand’. electoralcommission.org.uk/media-centre/statement-cchqpress-twitter-rebrand

[107] Full Fact Report 2020, Fighting the causes and consequences of bad information. fullfact.org/media/uploads/fullfactreport2020.pdf

[108] Clause 13(6)(b) defines it as content which “is or appears to be specifically intended to contribute to democratic political debate in the United Kingdom or a part or area of the United Kingdom.”

[109] Local Government Association briefing for the Online Safety Bill House of Lords Second Reading. local.gov.uk/parliament/briefings-and-responses/online-safety-bill-second-reading-house-lords-1-february-2023

[111] Government of Canada, Report on the assessment of the Critical Election Incident Public Protocol, 20 November 2020, canada.ca/en/democratic-institutions/services/reports/report-assessment-critical-election-incident-public-protocol.html

[112] Meta, Facebook Community Standards Misinformation, transparency.fb.com/en-gb/policies/community-standards/misinformation

[114] Meta, How Facebook Has Prepared for the 2019 UK General Election, 7 November 2019, about.fb.com/news/2019/11/how-facebook-is-prepared-for-the-2019-uk-general-election

[116] YouTube, How does YouTube support civic engagement and stay secure, impartial and fair during elections? youtube.com/intl/ALL_uk//howyoutubeworks/our-commitments/supporting-political-integrity/; Google, Written evidence to the House of Lords Committee on Democracy and Digital Technologies Democracy and Digital Technologies Inquiry, 28 February 2020, committees.parliament.uk/writtenevidence/454/html

[117] Twitter, Civic integrity misleading information policy, help.twitter.com/en/rules-and-policies/election-integrity-policy

[118] Twitter, UK election conversation attracts over 15m Tweets, 19 December 2019, blog.twitter.com/en_gb/topics/company/2019/uk-election-conversation-attracts-over-fifteen-million-tweets; Twitter, Serving the public conversation for #GE2019, 11 November 2019, blog.twitter.com/en_gb/topics/events/2019/serving-the-public-conversation-for-ge2019

[119] TikTok, Election Integrity, tiktok.com/safety/en/election-integrity

[120] Government of Canada, Canada Declaration on Electoral Integrity Online, canada.ca/en/democratic-institutions/services/protecting-democracy/declaration-electoral-integrity.html

[122] The Full Fact Report 2020: Fighting the causes and consequences of bad information. fullfact.org/blog/2020/apr/full-fact-report-2020

[123] The Full Fact report 2021: Fighting a pandemic needs good information. fullfact.org/about/policy/reports/full-fact-report-2021

[124] Written evidence submitted by Full Fact to the Draft Online Safety Bill Joint Committee, 20 September 2021, committees.parliament.uk/writtenevidence/39171/pdf

[125] The Full Fact report 2022: Tackling online misinformation in an open society—what law and regulation should do fullfact.org/about/policy/reports/full-fact-report-2022

[126] Written evidence submitted by Full Fact to the Online Safety Bill Public Bill Committee, 23 May 2022, publications.parliament.uk/pa/cm5803/cmpublic/OnlineSafetyBill/memo/OSB28.htm

[127] RESIST 2: Counter-disinformation toolkit, Government Communication Service gcs.civilservice.gov.uk/publications/resist-2-counter-disinformation-toolkit

[128] Even if misinformation was within scope of the user empowerment provisions, filtering out such content at scale would not be an effective or appropriate approach to dealing with the majority of misinformation.

[130] Letter from Paul Scully and Lord Parkinson to Baroness Stowell, 23 February 2023. committees.parliament.uk/publications/34184/documents/188087/default

[131] National Security Bill bills.parliament.uk/bills/3154

[132] At the point the Bill was introduced to the House of Lords in January 2023. bills.parliament.uk/publications/49376/documents/2822

[133] Clause 139 in the version of the Bill introduced to the House of Lords.

[134] Article 40 of Regulation (EU) 2022/2065 (the ‘Digital Services Act’) eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32022R2065

[135] Reuters, Meta pauses new users from joining analytics tool CrowdTangle, 29 January 2022, reuters.com/technology/meta-pauses-new-users-joining-analytics-tool-crowdtangle-2022-01-29

[136] Cancer Research UK, ‘There’s no conspiracy – sometimes it just doesn’t work’, 6 July 2011. news.cancerresearchuk.org/2011/07/06/theres-no-conspiracy-sometimes-it-just-doesnt-work

[137] RETRACTED The Lancet, Vol 351, February 28, 1998: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. thelancet.com/journals/lancet/article/PIIS0140-6736(97)11096-0/fulltext

[138] The Lancet, Vol 353, June 12, 1999: Autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association. thelancet.com/journals/lancet/article/PIIS0140-6736(99)01239-8/fulltext

[139] Misinformation and Its Correction: Continued Influence and Successful Debiasing, Psychological Science in the Public Interest (PSPI), 2012, journals.sagepub.com/doi/10.1177/1529100612451018#body-ref-bibr70-1529100612451018

[140] Social Science & Medicine Volume 240, November 2019: Systematic Literature Review on the Spread of Health-related Misinformation on Social Media. sciencedirect.com/science/article/pii/S0277953619305465

[141] Full Fact fact checks about medical conditions, the NHS, social care and government funding of national health services fullfact.org/health

[142] Full Fact, 14 October 2021, ‘UK public as concerned by the spread of misinformation as immigration and Brexit and the EU’. fullfact.org/blog/2021/oct/uk-public-concerned-spread-misinformation

[143] Full Fact, 8 December 2020, ‘No evidence Pfizer Covid-19 vaccine affects women’s fertility’, fullfact.org/health/vaccine-covid-fertility; Full Fact, 22 December 2020, ‘There’s no evidence the Pfizer vaccine interferes with the placenta’, fullfact.org/online/placenta-protein-vaccine; Full Fact, 8 October 2021, ‘What do we know about the AstraZeneca vaccine in pregnancy?’, fullfact.org/pregnant-then-screwed/AZ-vaccine-pregnancy; Full Fact, 25 August 2021, ‘PHE says no need to restart vaccination course in pregnancy after second dose delay’, fullfact.org/health/vaccine-second-dose; Full Fact, 22 September 2021, ‘Do pregnant women get Covid-19 booster vaccines?’, fullfact.org/pregnant-then-screwed/boosters-in-pregnancy; Full Fact, 29 October 2021, ‘Why can you mix and match booster jabs in pregnancy?’, fullfact.org/health/health-pregnant-then-screwed-booster-mix-and-match.

[145] Full Fact fact checks about monkeypox. fullfact.org/health/monkeypox

[146] Full Fact, 4 January 2023, ‘Strep A deaths are not dangerous new strain caused by flu vaccines’, fullfact.org/health/strep-A-historic-deaths; Full Fact, 22 December 2022, ‘Nasal flu vaccines don’t contain “mice bred streptococcal bacteria”’, fullfact.org/health/covid-tests-flu-vaccine-masks-strep; Full Fact, 20 December 2022, ‘Instagram post wrongly links nasal flu vaccines to strep A outbreak’, fullfact.org/health/flu-vaccines-strep-A-timings

[147] Full Fact, 18 July 2022, ‘Comparisons between Japan and US infant vaccination programs are inaccurate’, fullfact.org/health/Japan-US-vaccine-comparisons

[148] Full Fact, 15 December 2022, ‘Facebook post claiming lemons treat cancer better than chemotherapy is false’, fullfact.org/health/lemons-and-cancer

[149] Full Fact, 28 July 2022, ‘Tumours are not “there to save your life”’, fullfact.org/health/cancer-tumour-causes

[150] Full Fact, 9 August 2022, ‘No solid proof cannabis oil can ‘cure’ cancer’, fullfact.org/health/cannabis-oil-cure-cancer

[151] Full Fact, 27 January 2022, ‘Rubbing hydrogen peroxide over your body every day does not treat cancer’. fullfact.org/health/hydrogen-peroxide-cancer-treatment

[153] Cancer Research UK, Alternative therapies: what’s the harm?, 27 April 2015, news.cancerresearchuk.org/2015/04/27/alternative-therapies-whats-the-harm

[155] The Full Fact report 2021: Fighting a pandemic needs good information, fullfact.org/about/policy/reports/full-fact-report-2021

[156] Barbara Hanratty et al., ‘Covid-19 and Lack of Linked Datasets for Care Homes’, BMJ 369 (24 June 2020): 19, doi.org/10.1136/bmj.m2463.

[157] Annual Review of Public Health, Vol. 41, April 2020, ‘Public Health and Online Misinformation: Challenges and Recommendations’, annualreviews.org/doi/10.1146/annurev-publhealth-040119-094127

[158] ABC News, 7 December 2022, ‘COVID-19, vaccine misinformation 'spiking' on Twitter after Elon Musk fires moderators’, abc.net.au/news/science/2022-12-08/covid-misinformation-spiking-on-twitter-elon-musk/101742276

[159] The New York Times, 28 December 2022, ‘As Covid-19 Continues to Spread, So Does Misinformation About It’, nytimes.com/2022/12/28/technology/covid-misinformation-online.html

[161] YouTube Official Blog, 13 January 2021, ‘New health content is coming to YouTube’. blog.youtube/news-and-events/new-health-content-coming-youtube

[162] Reddit security: reddit.com/r/redditsecurity/comments/pfyqqn/covid_denialism_and_policy_clarifications. Reddit help: reddithelp.com/hc/en-us/articles/360043513151. Reddit content policy: redditinc.com/policies/content-policy. TikTok Newsroom, 28 September 2022, ‘An update on our work to counter misinformation’, newsroom.tiktok.com/en-us/an-update-on-our-work-to-counter-misinformation. TikTok community guidelines: tiktok.com/community-guidelines?lang=en#37. Snapchat, 8 September 2022, ‘How We Prevent the Spread of False Information on Snapchat’, values.snap.com/en-GB/news/how-we-prevent-the-spread-of-false-information-on-snapchat. Snapchat Community Guidelines: values.snap.com/en-GB/privacy/transparency/community-guidelines.

[163] BBC Corrections and Clarifications, 13 January 2023. bbc.co.uk/helpandfeedback/corrections_clarifications

[164] Digital Technology and Resurrection of Trust, House of Lords Select Committee on Democracy and Digital Technologies, June 2020, HL Paper 77. committees.parliament.uk/publications/1634/documents/17731/default

[165] Adults’ Media Use and Attitudes report 2022, Ofcom, 31 March 2022. ofcom.org.uk/__data/assets/pdf_file/0020/234362/adults-media-use-and-attitudes-report-2022.pdf

[166] Fake News and Critical Literacy, Final Report of the Commission on Fake News and the Teaching of Critical Literacy Skills in Schools, June 2018, pp 10-12. cdn.literacytrust.org.uk/media/documents/Fake_news_and_critical_literacy_-_final_report.pdf

[168] Edwards, L., Stoilova, M., Anstead, N., Fry, A., El-Halaby, G. and Smith, M. (2021) Rapid Evidence Assessment on Online Misinformation and Media Literacy: Final Report for Ofcom. ofcom.org.uk/__data/assets/pdf_file/0011/220403/rea-online-misinformation.pdf

[169] Understanding vulnerability to online misinformation, Alan Turing Institute, March 2021. turing.ac.uk/sites/default/files/2021-02/misinformation_report_final1_0.pdf

[170] The role of AI in addressing misinformation on social media platforms, Centre for Data Ethics and Innovation Policy Paper, August 2021. gov.uk/government/publications/the-role-of-ai-in-addressing-misinformation-on-social-media-platforms

[171] Section 11 of the Communications Act 2003.

[172] Joint Committee on the Draft Online Safety Bill, December 2021, HL Paper 129 - HC 609 (see in particular paras 195-196). committees.parliament.uk/publications/8206/documents/84092/default

[173] Government Response to the Report of the Joint Committee on the Draft Online Safety Bill, March 2022, CP 640 (see in particular paras 197-198). assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1061446/E02721600_Gov_Resp_to_Online_Safety_Bill_Accessible_v1.0.pdf

[175] Ofcom’s proposed plan of work 2023/24, December 2022. ofcom.org.uk/news-centre/2022/ofcoms-proposed-plan-of-work-2023-24

[176] The Department for Science, Innovation and Technology (DSIT). The Department for Culture, Media and Sport (DCMS). The Department for Education (DfE).

[177] The Observer, 1 January 2023, ‘Labour pledges to toughen ‘weakened and gutted’ online safety bill’. theguardian.com/technology/2023/jan/01/labour-pledges-toughen-online-safety-bill

[178] The Independent, 14 January 2023, ‘Labour vows to hand ‘weak’ Rishi Sunak first defeat over Online Safety Bill’. independent.co.uk/news/uk/politics/sunak-labour-online-safety-bill-b2262673.html

[181] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) (Text with EEA relevance) (‘the Digital Services Act’). data.europa.eu/eli/reg/2022/2065/oj

[182] European Commission Press Release, ‘Disinformation: Commission welcomes the new stronger and more comprehensive Code of Practice on disinformation’. ec.europa.eu/commission/presscorner/detail/en/ip_22_3664

[183] The 2022 Code of Practice on Disinformation, European Commission website. digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation

[184] Article 36 of Regulation (EU) 2022/2065

[185] EU Disinfo Lab, ‘Policy statement on Article 17 of the proposed European Media Freedom Act’, 24 January 2023. disinfo.eu/advocacy/policy-statement-on-article-17-of-the-proposed-european-media-freedom-act/

[186] European Commission website, ‘Safeguarding media freedom in the EU – new rules’. ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13206-Safeguarding-media-freedom-in-the-EU-new-rules_en

[187] EU Disinfo Lab, ‘Disinfo Update 07/02/2023’. disinfo.eu/outreach/our-newsletter/disinfo-update-07/02/2023/

[188] European Commission Code of Practice on Disinformation - Transparency Centre, accessed 1 March 2023, Reports Archive, disinfocode.eu/reports-archive/?years=2023

[189] European Commission Code of Practice on Disinformation - Transparency Centre, accessed 1 March 2023, disinfocode.eu

[190] European Commission Press Release, 9 February 2023, ‘Code of Practice on Disinformation: New Transparency Centre provides insights and data on online disinformation for the first time’. ec.europa.eu/commission/presscorner/detail/en/mex_23_723

[191] European Commission Press Release, 16 November 2022, ‘DSA: landmark rules for online platforms enter into force’. ec.europa.eu/commission/presscorner/detail/en/IP_22_6906

[192] Full Fact is part of the European Fact-Checking Standards Network (EFCSN) which brings together Europe's fact-checking and open-source intelligence (OSINT) community to combat misinformation. eufactcheckingproject.com

[193] Digital, Culture, Media and Sport Sub-committee on Online Harms and Disinformation, Oral evidence, HC 597, Tuesday 24 January 2023. committees.parliament.uk/oralevidence/12582/pdf/

[194] Time, 5 February 2023, ‘The Creator of ChatGPT Thinks AI Should Be Regulated’. time.com/6252404/mira-murati-chatgpt-openai-interview

[195] Biztech News, 9 February 2023, ‘ChatGPT in the spotlight as the EU steps up calls for tougher regulation. Is its new AI Act enough?’. euronews.com/next/2023/02/06/chatgpt-in-the-spotlight-as-the-eu-steps-up-calls-for-tougher-regulation-is-its-new-ai-act

[197] Lord Parkinson of Whitley Bay, UK Parliament: Written answer, 17 February 2023, HL5570. questions-statements.parliament.uk/written-questions/detail/2023-02-08/hl5570

Full Fact fights bad information

Bad information ruins lives. It promotes hate, damages people’s health, and hurts democracy. You deserve better.