How media reporting of modelling has shaped our understanding of the pandemic
Over the past 18 months it’s overwhelmingly likely that you’ve seen scientific models about Covid-19 reported in traditional media like newspapers and television, or on social media. These models are informing policies, such as lockdowns, that have dramatically altered our lives.
But what exactly are models? And how has the reporting of these incredibly complicated calculations affected the way we understand the pandemic?
We spoke to some of the nation’s leading experts to hear what they had to say.
So, what is a model?
In the context of Covid-19, a scientific model is the product of a calculation that allows experts to understand what could happen in different scenarios: for example, if the UK were to stay in lockdown, or if a certain percentage of the population were to catch Covid.
They are built on assumptions about things we already know. This means that as knowledge changes, as it does in a rapidly evolving pandemic, the assumptions the model is built on change, and so do the modelled scenarios.
Throughout the pandemic in the UK, the government has published many of the models used to inform its response on its website, via the Scientific Pandemic Influenza Group on Modelling (SPI-M), which provides information to the Scientific Advisory Group for Emergencies (SAGE).
As Dr Thomas House, a reader in mathematical statistics at the University of Manchester, explained, there are three different kinds of modelling outputs (a simplified sketch of all three follows below).
R number
- The first is the reproduction number, commonly referred to simply as the R number, which tells us the average number of secondary infections produced by a single infected person. It’s a lagging measure, typically around two to three weeks out of date by the time it is reported by the government.
Projections
- The second is a projection, which Dr House said attempts to answer the questions: “If things carry on as they are, what will they look like in a week or two? What will hospitalisations, deaths and cases look like in a few weeks?”
Scenarios
- The third type of output is a scenario, which Dr House described as asking: “Well, let's think about what could happen over the whole of the next year.”
“The point is that none of those is really a prediction,” he explained. “But every single one is described as a prediction in the media and then you get this enormous media circus where ‘all the models are terrible, because look at what they predicted all this time ago and that didn't happen’.
“But the point is some things you can't predict, because the whole thing is shaped by policies that we can't incorporate in the model because we don't know exactly what the politicians are going to decide.”
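To make the difference between these outputs more concrete, here is a deliberately simplified sketch in Python. It is not one of the SPI-M models, and every number in it is invented; it just shows how one toy model, fed with assumptions, can produce an R number, a short-term projection and a year-long scenario.

```python
# A toy SIR-style model with made-up numbers, purely for illustration.
# It is not any of the models used to advise the government.

def simulate(beta, gamma, days, s=0.7, i=0.001):
    """Return a list of daily new infections from a toy SIR-style model.

    beta and gamma are assumptions about transmission and recovery;
    s and i are the assumed starting shares of susceptible and infected people.
    """
    new_cases = []
    for _ in range(days):
        infections = beta * s * i          # new infections today
        recoveries = gamma * i             # people recovering today
        s, i = s - infections, i + infections - recoveries
        new_cases.append(infections)
    return new_cases

beta, gamma = 0.3, 0.2                     # assumed rates (hypothetical)

# 1. The R number: the average number of secondary infections per infected
#    person. In this toy model it is roughly beta / gamma times the share
#    of people still susceptible.
print("R is roughly", round((beta / gamma) * 0.7, 2))

# 2. A projection: "if things carry on as they are", a couple of weeks ahead.
projection = simulate(beta, gamma, days=14)

# 3. A scenario: what could happen over a whole year under a different
#    assumption, for example a change in behaviour that halves transmission.
scenario = simulate(beta * 0.5, gamma, days=365)
```

Change any of the assumed numbers, such as the transmission rate or the share of people still susceptible, and the R number, the projection and the scenario all change with them. That is why the outputs shift as knowledge about the virus evolves, and why none of them is a prediction of what will actually happen.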
But the confusion doesn’t lie entirely with the “media circus.”
SPI-M is clear that published models which assume no changes to policy or behaviour are “neither forecasts nor predictions.”
That said, estimates assuming the government adhered to its roadmap and eased restrictions on 19 July (which did actually happen) are described as having a "prediction interval". However, in this context "prediction" has its own technical meaning, which differs from its everyday definition and could potentially cause confusion for readers.
A "prediction interval" actually relates to the uncertainty around the scenario, and can exist even though the forecast is not a "prediction".
A model isn’t a weather forecast
Faced with an overwhelming amount of new information, not to mention the very real anxieties generated by the reality of a global pandemic, it’s not a surprise that many of us were looking for answers.
Death tolls, infection rates and vaccination benefits all became the subject of intense scrutiny and speculation. As a nation we wanted to know what was coming, and scientists did too—building models that ran millions of lines of code to deliver complicated and often extremely broad projections.
As the outputs of these models became public we started to see headlines that relayed them in extremely precise terms. “DEADLY BUG Coronavirus lockdown could last 18 MONTHS as government says 260,000 would die without drastic measures,” read one early Sun headline, published in March 2020. “Britain's death toll 'could hit 85,000 in second Covid wave',” claimed the Telegraph months later, in October.
These headlines present very specific outputs from models as if they were a type of forecast, much as weather forecasts are presented.
Graham Medley, professor of infectious disease modelling at the London School of Hygiene & Tropical Medicine (LSHTM) and chair of SPI-M, said: “Modelling of infectious disease epidemics is a relatively new science, it's only really been around for about 50 years. It's moving towards the kind of engineering-type modelling so when you cross a bridge, or you get on an aeroplane, then an engineer somewhere has done some modelling to show that the wings won't fall off.
“That is what we would like infectious disease modelling to be at or to be able to do, but we are not there yet. Modelling is a really useful and increasingly essential tool but it has some uncertainty, and the inherent uncertainty is that we can't predict what people will do.”
He continued: “Forecasting weather is easier than forecasting disease, because the weather doesn't rely on what people do. It happens anyway. We aren't relying on the fact that the amount of rain depends on how many people take umbrellas with them.”
But while the approaches to a weather model and a pandemic model are different, they do share limitations, principally that neither can give us a very good idea of what could happen in the long term. While this might not matter so much for the day-to-day weather forecast, it's a major complication when making policy decisions around a public health emergency.
Professor Medley added: “We are quite good at being able to say what's going to happen in the next few weeks, but less good about what's going to happen in the next few months, and very poor at being able to say what's going to happen after that.
“Of course, to some extent, the decision making falls into that area of uncertainty.”
Models can be wrong
The government was heavily criticised in November 2020 over its presentation of modelling in a Downing Street press conference announcing England’s second national lockdown.
One criticised element was a slide outlining hospitalisations from Covid-19, with a medium-term projection for the following six weeks. The upper limit for this was later revised downwards, from 9,000 to 6,000 each day, although the central estimate remained unchanged. The government’s public record of the slides used during the press conference has since been updated to note the error.
In the wake of the press conference, the Office for Statistics Regulation (OSR) released a public statement expressing disappointment that “good practice and a commitment to transparency” were “not yet universal” when publicly sharing data.
The OSR said: “There are many models across government which are used primarily for operational purposes. In cases where outputs from these models are quoted publicly it is our expectation that the associated information, including key assumptions, is also made available in an accessible and clear way.
“In the press conference on 31 October this was not the case. The Prime Minister referred to the reasonable worst-case scenario – a model set up to support operational planning. However, the data and assumptions for this model had not been shared transparently.”
Individual modellers have also faced scrutiny for the comments they have made publicly. In July, ahead of the final step four unlocking of England’s “roadmap” out of lockdown, Professor Neil Ferguson said it was “almost inevitable” that the country would see 100,000 cases of Covid-19 a day without restrictions.
Less than a month later, the Times reported, Professor Ferguson admitted his projection was “off”, adding that the Euro 2020 matches had led to an “artificially inflated level of contact during that period” which had then rapidly dropped off.
Worst case scenarios
Headlines are designed to capture our attention. To do so, they often feature the most shocking piece of information.
Modelling is particularly vulnerable to this because the outputs presented by experts are a range, and when faced with the constraints of column space and social media headlines, the upper, or lower, limit of that range is likely to draw more attention.
Fiona Fox, chief executive of the Science Media Centre which has run more than 180 press briefings during the pandemic, told Full Fact: “In terms of modelling the big challenge, just very straightforwardly, is ‘don’t only report the big number’.
“When you dig into the articles the coverage can actually be quite good. But it’s your sub-editor, who often comes in after the science journalist has left, who reads the range between the smaller number and the bigger number, and they pick out the bigger number.
“That’s then exacerbated in the time of social media, where what happens is what gets shared is not the full article but the headline.”
We’ve seen this repeatedly throughout the pandemic.
Take the numerous headlines published in July 2020 claiming “coronavirus lockdown may cause 200,000 extra deaths”, which appeared in the Mail Online, Mirror and Telegraph. The articles noted that up to 25,000 people could die from delays to treatment in the first six months of the virus, and a further 185,000 in the medium to long term.
However, the report itself includes 25,000 deaths in six months as its upper limit, with a lower limit of 12,000. None of the articles we’ve linked to reference this beyond indicating that “up to” 25,000 could die.
All three of the articles also state that there could be as many as 12,000 avoidable deaths per year due to the UK being in recession. Just one, the Telegraph, noted explicitly that this was actually the upper limit of a range between 600 and 12,000 avoidable deaths.
Speaking in a personal capacity, Mark Woolhouse, professor of infectious disease epidemiology at the University of Edinburgh and a SPI-M member, said: “I don't think anyone's ever managed to get the attention away from the reasonable worst case scenario to the more central, most likely, scenarios that you're actually representing.
“The media invariably jump on the worst case. I can't recall a single example where they didn't. The reasonable worst case is there for a reason, but it's not the most likely and I don't think we've cracked how exactly you should interpret a reasonable worst case.”
Data is always ahead of the modelling
As we’ve already discussed, models are based on assumptions. Scientists can only make those assumptions based on what they know, and when we face an issue as complex and rapidly evolving as Covid-19, that knowledge—and those assumptions—change all the time.
The pace at which new information about the virus is published could make models look dated despite the fact they used the most recent information available at the time.
One example is a Telegraph article from 12 June headlined ‘Covid modelling that pushed back June 21 was based on out-of-date data’.
Pointing to this example, Professor Medley said: “The modelling to support the decision [to delay lockdown easing], announced on Monday [14 June], was given to the government the previous week. But on Monday, as the decision was announced, the latest estimates of vaccine efficacy came out, so they weren't included in our modelling because we didn't know about them.
“That's always the case, the data is always slightly ahead of the modelling. What that has inspired is a number of reports in the press ... and some comments by MPs, saying ‘oh look the modelling is wrong because they didn't use the latest vaccine efficacy data’.
“However, when you look at the numbers that we included—because we always include variability in terms of the parameters and vaccine efficacy is one of the parameters—when you look to see what we actually did with the modelling, then the current data from Public Health England (PHE), are captured within that range.”
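A stripped-down sketch of that point, again with entirely hypothetical numbers rather than anything from the actual SPI-M or PHE work, would look something like this: because vaccine efficacy is fed in as a range rather than a single value, a later point estimate can still fall inside the outputs produced before it was known.

```python
import random

def admissions_in_six_weeks(vaccine_efficacy):
    """Toy stand-in for a hospital admissions model (invented numbers throughout)."""
    unprotected = 1 - 0.8 * vaccine_efficacy       # assumes 80% of people are vaccinated
    return 10_000 * unprotected                    # invented scaling to daily admissions

# The modelling assumes efficacy lies somewhere between 60% and 90%...
modelled = [admissions_in_six_weeks(random.uniform(0.60, 0.90)) for _ in range(1_000)]

# ...a new efficacy estimate arrives after the modelling has been handed over...
later_estimate = admissions_in_six_weeks(0.80)

# ...and the output it implies still sits inside the range already modelled.
print(min(modelled) <= later_estimate <= max(modelled))    # True for these made-up numbers
```

Building that parameter uncertainty into the model is what allows later data to be “captured within that range”, as Professor Medley puts it.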
Are modellers just doom-mongers?
As the pandemic wore on, scientific experts informing the government’s decisions were more frequently described as “doom-mongers” with “gloomy” models.
The inherent uncertainty in modelling—especially in the incredibly complex and rapidly evolving context of a global pandemic—leaves the estimates particularly vulnerable to criticism, along with the experts who developed them.
Because scientific modelling has played such a large part in informing the government’s decision making throughout the pandemic, modellers have sometimes been perceived as puppet-masters, pushing through policies, such as lockdowns, that are unpopular with some parts of society.
Professor Medley said: “The situation is very political and you can always criticise the models. That's what people who don't like the policy do—they point to the models and say they're all useless or wrong.”
He added: “The decision makers are having to make decisions with the evidence, but the thing to point out is that modelling is only part of the evidence. None of the modellers would wish for decisions to be made entirely on the basis of a model, because of that uncertainty.”
Mark Jit, professor of vaccine epidemiology at LSHTM, told Full Fact: “Modellers also get sick with Covid. Modellers also go to work and send their children to school and go to the shops. They also get affected if there's a lockdown or schools are closed.
“I'd like to think that modellers aren't like a pressure group who are trying to persuade the government to do one thing or another. We don't really have a vested interest in any particular decision, except that the best decisions are made for society.
“I think it's not really the modeller's place even to advocate a certain policy. Often when I'm interviewed and asked, ‘well, do you support the government's doing this or that?’, really it's not actually my place as a modeller.
“I think it would be great if actually, the media focused on what the models predict, rather than ‘modeller X thinks we should do this or that’.”
Not all bad news
Ms Fox was keen to emphasise that, in her view, modelling had been covered well throughout the pandemic, describing the work of specialist health and science journalists as “exemplary”.
“I'm not in despair about this at all,” she said. “I think there's been very, very good coverage of modelling and of the whole pandemic. I think there's a level at which journalists and the public now understand modelling in a way they never have.”
However, Ms Fox explained, the nature of scientific modelling—specifically the huge range of numbers it can provide—means it can be particularly vulnerable to politicisation by outlets with a particular editorial viewpoint on issues such as lockdown.
She said: “You've got your newspapers who will go to the smallest number possible and say ‘why the hell have we just postponed ‘Freedom Day’ by a month, when looking at all these figures, they don't show a problem’. And then you've got your other papers who want further lockdown who do the exact opposite.”
Professor Jit told Full Fact: “It's probably unfair to sort of group the media as one homogenous group, I do think there's been some really good reporting of the science around Covid.
“There's also been some, I would say, really biased reporting that's not represented the science well. Sometimes even within the same sort of media or within the same newspaper or TV channel, from different journalists, there's been examples of really good and really bad reporting.”
It can be misleading for journalists to criticise models simply because their projected scenarios failed to materialise—especially when the models themselves may have prompted a change in policy.
Criticism that the models were wrong because the 50,000 cases a day projected by some modellers in October 2020 did not materialise, for example, is nonsensical: it ignores the fact that policies such as pub curfews and work-from-home directives were motivated in large part by the modelling. And it's crucial that lessons are learned from the past 18 months of reporting on models: that projections are not mistaken for ‘predictions’, that a distinction is drawn between worst case and more likely central scenarios, and so on.
But at the same time, modellers have held a hugely influential position throughout this pandemic—whether they wanted that position or not.
When models have played such a significant role in decision-making, and when there have been legitimate concerns about their limitations, it's natural that they will be heavily scrutinised, and the media is of course a crucial avenue through which questions and concerns can be raised.
Correction 20 August 2021
We corrected this article to note that a ‘prediction interval’ is a technical term, and does not mean that the model itself is a ‘prediction’.