When Power Meets the Wellness Algorithm: Why We Mistake Virality for Truth

7 October 2025

By Dr Rachael Kent


Dr Rachael Kent is a leading researcher, author, podcast host, and consultant specialising in digital health and wellbeing. She is Senior Lecturer in Digital Economy and Society Education at King’s College London and founder of Dr Digital Health, advising organisations on technology’s impact on health. Her book The Digital Health Self (2023) explores wellness, self-tracking, and social media. Dr Kent is also the first female class representative in UK legal history, leading a £1.5 billion collective action against Apple on behalf of 19.6 million UK consumers. Kent v Apple went to trial earlier this year (2025) and is awaiting judgment.

Health misinformation today doesn’t come from a shifty snake oil salesman but from sources wrapped in confidence and intimacy: a president at a podium, a celebrity doctor on a podcast, or a wellness influencer who has been on our phones for years.

When powerful figures float dubious health claims, the damage is disproportionate. Their posts don’t travel like ordinary rumours. Engagement-driven algorithms, parasocial trust, and monetisation streams – from affiliate links to supplements and wellness courses – turbocharge that reach and longevity. What follows is not just a content problem; it is the predictable outcome of an infrastructure that has quietly become an unregulated public-health platform.

In my book, The Digital Health Self, I describe how, across Instagram, TikTok and YouTube, and now newsletters, podcast networks, and retail integrations, health and wellness guidance is no longer peripheral. It is a core commercial genre.

My recent Anthropology & Medicine article characterises Instagram as an unregulated public-health platform because its search and recommendation systems, and its glossy visual vernacular, push people toward repeatable templates: morning routines, supplement stacks, “natural immunity” practices, nutritional scripts, and before-and-after reels.

A 2011 paper by sociologists Ward and Voas coined the term ‘conspirituality’: the fusion of conspiracy logics with wellness and spiritual narratives. It offers an explanatory frame – “they’re hiding the truth”, “detox your body”, “trust your intuition” – that casts institutions as corrupt and elevates “natural” choices as moral identity.

These are not neutral aesthetics. They are sales architectures that turn private uncertainty into public performance, and performance into purchase for people seeking everyday support.

In this economy, misinformation persists not merely because it is wrong, but because it is profitable, identity-forming, and on-brand.

Behavioural economics helps explain why false claims often beat evidence-based facts. Vivid stories stick in our minds more readily than statistics. We fear losses and prefer “playing it safe”, so doing nothing feels wise. When we see large view counts and “this worked for me” comments, it feels as though everyone agrees, which functions as social proof. Platforms are calibrated to these shortcuts. In an economy where engagement is the proxy for value, alarm and novelty are treated as features, not as health harms. That is why the same myths recur: not because people are duped, but because the system rewards the feelings those myths produce.

Influencers are not peripheral to this process; they are integral to its distribution and business model. Across my research, I’ve seen how creator economies convert attention into income through affiliate codes, brand deals, paid communities and supplement lines, while presenting advice as a form of intimacy. There has also been a stark rise in AI ‘doctor’ deepfakes – videos that clone faces and voices, wrap them in scrubs, stethoscopes and studio lighting, and deliver confident, clinic-style advice.

In a fast scroll, this synthetic authority beats verification: the semiotics of trust are faked, so speculation lands as guidance before it can be fact-checked. Influencers then ‘translate’ these false claims into everyday routines, saying, ‘Here's my safer alternative,’ often with affiliate links attached. Parasocial intimacy meets machine-made authority, lowering scrutiny and raising compliance. By the time a public-health rebuttal arrives, the claim has already been domesticated into a personal habit by many.

Calling these systems unregulated public-health platforms names where responsibility resides. The tech oligarchs have built distribution networks that shape health beliefs and behaviour at scale without the duties we expect of public-facing health infrastructure. If a UK broadcaster aired unproven claims that “detoxes cure cancer” in prime time, Ofcom’s Broadcasting Code would trigger an investigation, on-air corrections and potential sanctions. If a pharmacy sold supplements under a fabricated doctor’s endorsement, trading standards would act. Yet the feeds, whose recommendation engines can out-distribute any broadcaster, remain largely governed by voluntary policies and post-hoc moderation.

The harms to public health are measurable in delays, expenditures, and appointments filled with algorithmically induced anxiety. Each resurgence of an old myth produces behavioural spillover: some people skip vaccination, avoid safe first-line medicines, or spend money they cannot spare on unproven alternatives. In the UK, the death of 23-year-old Paloma Shemirani, who rejected chemotherapy for a highly treatable lymphoma amid a family environment saturated with conspiratorial anti-medicine claims, has become a stark touchstone for what happens when online narratives outrun evidence: a preventable loss that her brothers now link directly to the ecosystem that legitimised those views and helped them spread. Inequalities deepen as communities already facing barriers to care absorb more of the anxiety and cost. Clinicians triage misinformation in ten-minute slots; public-health teams spend scarce capacity re-explaining settled evidence in formats the feed will tolerate. All of this is the predictable output of a system that optimises for feeling informed over being informed.

This is a pivotal moment for public health in the UK. The Online Safety Act has created a lever to operationalise safety-by-design for health harms, and Ofcom’s focus on deepfakes and misinformation recognises how easily synthetic authority exploits platform affordances. This is not about collapsing debate; it is about attaching obligations to infrastructures that already operate as public-health distributors. Platforms already know how to slow content when they choose to, especially for high-reach accounts; they could treat recurrences of previously debunked claims as recidivist behaviour that loses algorithmic privilege and monetisation. Regulators can and should penalise high-authority false health claims and the platforms that enable them.

We should expose the incentives, not just the falsehoods. If health advice comes with a discount code, it’s advertising. Naming it shifts the frame from “friend” to “seller,” creating space for scepticism without shaming audiences. I propose a simple rule of thumb for online health claims: Pause–Check–Protect. Pause before you share, check the source and motivation (qualifications versus advertising), and protect yourself by verifying the claim with trusted institutions (NHS, WHO).

The point is not to wish for a slower internet. It is that, if digital platforms are to function as public-health infrastructure, they must carry public-health responsibilities. In the credibility arena, power and platform reach meet algorithms tuned to emotion; ‘conspirituality’ wraps misinformation in meaning; influencers convert that meaning into revenue. Ordinary people are left to make consequential health decisions in a feed that mistakes engagement for evidence. We can decide to reward different things. We can build an environment where accurate guidance travels faster than panic, where authority signals are tied to accountability, and where commercial success does not depend on repeated falsehoods dressed as care. If these systems are where the public manages its health, then the public interest, not private gain, must set the terms.
