What the UK riots taught us about social media failure
By Zoe Manzi and Hannah Rose,
Hate and Extremism Analysts at the Institute for Strategic Dialogue

Originally written as part of the 2025 Full Fact Report

The failure of social media platforms to curb the spread of false narratives in a timely manner after the Southport murders last year may have significantly contributed to the offline violence and disruption of the riots which subsequently erupted across the UK.
Immediately after the attack, false claims began to emerge on X (formerly Twitter), TikTok and Facebook, erroneously identifying the perpetrator as a Muslim migrant, “Ali al-Shakati”.
Influential figures with large numbers of followers, including actor-turned-political activist Laurence Fox, further amplified this narrative, using it to call for anti-Muslim action, including the permanent removal of Islam from Great Britain. His post, which amassed over 850,000 views in the first 48 hours after the attack, exemplifies how misinformation is weaponised to incite hate. On X, such posts from paid premium users may be given preference by the platform recommender algorithm, allowing them to reach larger audiences. These findings demand investigation into how Terms of Service are applied to verified users, who should receive enhanced scrutiny during crises to prevent the amplification of harmful disinformation.
Despite police taking the unprecedented step of confirming the alleged perpetrator was a local 17-year-old, misinformation continued to circulate. TikTok’s search recommendations actively surfaced misinformation, suggesting queries like ‘Ali al-Shakati arrested in Southport’ long after the claim had been disproven. When analysts repeated these searches months later, they were still served conspiratorial content and disinformation about the Southport attack through the recommender algorithm. Transparency gaps persist in understanding the role of recommender systems in amplifying harmful content. While the EU’s Digital Services Act (DSA) mandates limited independent auditing of these systems, the UK’s Online Safety Act (OSA) does not, leaving UK users more vulnerable than our European neighbours.
Permissive platform environments allowed hate speech and conspiracy theories linking immigration to crime to spread, and far-right networks to mobilise unhindered. On X, the use of anti-Muslim slurs more than doubled in the ten days following the Southport attack, with over 40,000 mentions. Across British far-right Telegram channels, anti-Muslim hate rose by 276% and anti-migrant hate by 246%. One X user with 16,000 followers and X premium status posted a protest flyer asserting that ‘children are being sacrificed on the unchecked altar of mass migration.’ These narratives attempt to provide justification for real-world violence, further demonstrating how misinformation and hate speech can have direct offline consequences.
To prevent similar incidents, platforms must develop explicit crisis response protocols to ensure rapid detection and mitigation of harmful misinformation and disinformation. These should include surge capacity during high-risk events, improved coordination with authorities, and a balance between swift action and human rights safeguards. Greater algorithmic transparency and auditing are needed to provide insight into how recommendation systems amplify content during crises, as the lack of independent oversight in the UK leaves users at greater risk of exposure to harmful content.

More consistent enforcement of platform policies is also essential to prevent verified accounts and those with large followings from receiving preferential treatment that allows harmful misinformation to spread unchecked. Platforms must improve access to data for researchers and regulators, enabling external monitoring of harmful content trends and the effectiveness of moderation practices; without meaningful access, addressing online harms remains difficult. Finally, the financial incentives that allow disinformation actors to profit must be tackled: monetisation policies should be reviewed to prevent bad actors from benefiting financially from engagement-driven misinformation.
The speed at which false narratives spread, their amplification by recommendation algorithms, and the delayed response by social media platforms created a climate in which digital propaganda fuelled real-world violence. The riots which took place following the knife attack in Southport last summer illustrate the urgent need for greater platform accountability and for legislative and regulatory clarity. Without enhanced transparency and robust enforcement of platform policies, similar incidents may recur. Addressing these challenges requires ongoing collaboration to ensure that online spaces do not become incubators for violence and social unrest, and to mitigate the real-world harms of online disinformation.