The Online Safety Act must go further: Chi Onwurah MP for the Full Fact Report

11 July 2025

By Chi Onwurah MP,

Chair of the Science, Innovation and Technology Select Committee

Originally written as part of the 2025 Full Fact Report

After years of dither and delay, the previous government finally introduced the Online Safety Act to improve safety in an online space with few regulatory controls. Its goals included reducing illegal content, protecting children from harmful material, and holding tech companies accountable for the content they recommend.

However, the Act did not clearly address harms caused by content that is ‘legal but harmful’, in part due to concerns over the impact on freedom of expression and definitions of ‘truth’. The Act does impose new duties on providers to implement systems and processes that mitigate the risks of illegal content or activity, or content harmful to children, appearing online.

Our inquiry heard detailed evidence on the role social media algorithms played in amplifying false and misleading content during the Southport riots. Evidence to this inquiry has brought to light how social media platforms can profit from such crises, despite Meta, TikTok and X all claiming they did not. The recommender systems of these platforms prioritise engaging content, regardless of veracity or harm, to maximise time spent on them and divert attention to advertisements.

For this reason, one inquiry session focused on the digital advertising market. The social media companies we spoke to rely on advertising, which makes up between 80% and 98% of their revenues, with Google holding a dominant position on both the supply and the demand side of the sector. We have learned how the digital advertising sector is overly complex and opaque, and easily exploited by bad actors wishing to profit from false or harmful content. This was seen last summer when the fake news website ‘Channel3Now’ profited from spreading misinformation about the killer. While digital advertising is regulated by the industry-funded Advertising Standards Authority, with the Competition and Markets Authority (CMA) and Ofcom also holding powers, our inquiry has highlighted a potential regulatory gap in the process of online advertising that enables the monetisation of harmful content.

The inquiry will next hear from Ofcom, the Information Commissioner’s Office and the Department for Science, Innovation and Technology, where members can scrutinise whether the current Online Safety Act fully addresses the significant societal harms of misinformation. The government says it is serious about tackling online harms, but the platforms we heard from said they would not have behaved differently had the Online Safety Act been fully in force. This suggests the Act would not prevent a repetition of the terrible riots last summer.

Our inquiry began by hearing from some of the community groups most impacted by the riots. We owe it to them, and to everyone else, to ensure such events do not happen again. It is the Government’s duty to do so.

