“Women are more likely than men to view Boris Johnson as dishonest, xenophobic and politically calculating, according to a sample of more than 70,000 Guardian readers.”
The Guardian, 19 July 2019
On Friday, the Guardian’s front page trailed a story about how men and women viewed Conservative leadership favourite Boris Johnson.
The data comes from a quiz answered by over 70,000 Guardian readers.
As the article later says: “The results should not be understood as a scientific poll, since they are unlikely to represent, for example, enthusiastic Conservative voters.”
But including that one-sentence caveat, several paragraphs in, isn’t really enough.
The headlines didn’t make the nature of the ‘quiz’ clear to a casual reader and the front page of the print edition (which also trailed the article) inaccurately described the results as being from a “survey”.
The implication of presenting it as a news story at all is that it tells us something meaningful. But the poll is not representative of the public, or even of Guardian readers; it is only representative of the 70,000 people who actually answered the quiz.
It wasn’t representative of Guardian readers, let alone the public
To collect the data the Guardian put a quiz on its website asking people to rank Boris Johnson across different attributes.
The Guardian’s front-page headline (“Women show resistance to Johnson’s bumbling charms”) implies that the Guardian surveyed a representative sample of women. This is misleading.
The front-page text goes on to say the quiz sampled Guardian readers and throughout the main piece the Guardian suggests the results represent the views of “Guardian readers”. This is also misleading.
Good surveys ask a small group of people (the “sample”) what they think, and try to make sure that those people represent the wider population.
Pollsters do this in two ways. First, they try to survey a group of people that resembles the population. For example, they may try to survey men and women in equal proportion if they are surveying the general public’s views on a topic.
Second, to make up for the fact that it’s basically impossible to get a perfectly representative sample, pollsters will “weight” their sample—adjusting the raw numbers so that they match known characteristics of the national population, such as the ratio of men and women. Different pollsters do this in different ways: they might look at how people voted in previous elections or referendums, use measures of social class, or their level of education.
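The weighting step can be sketched with a toy calculation. All of the numbers below are invented for illustration (they are not from the Guardian quiz): a sample that over-represents women is rebalanced so that each group counts in proportion to its share of the population.

```python
# Toy illustration of survey weighting ("post-stratification") by sex.
# Every figure here is invented purely to show the arithmetic.

# Raw, unrepresentative sample: group sizes and the share of each group
# giving a particular answer.
sample = {
    "women": {"n": 700, "answer_share": 0.60},
    "men":   {"n": 300, "answer_share": 0.40},
}

# Known composition of the population we want the results to reflect.
population_share = {"women": 0.5, "men": 0.5}

total_n = sum(g["n"] for g in sample.values())

# Weight each group so its share of the sample matches its population share.
weights = {
    group: population_share[group] / (g["n"] / total_n)
    for group, g in sample.items()
}

# Unweighted estimate: simply pool the raw answers.
unweighted = sum(g["n"] * g["answer_share"] for g in sample.values()) / total_n

# Weighted estimate: the over-represented group is scaled down, the
# under-represented group scaled up, before averaging.
weighted = sum(
    g["n"] * weights[group] * g["answer_share"] for group, g in sample.items()
) / sum(g["n"] * weights[group] for group, g in sample.items())

print(round(unweighted, 3))  # → 0.54
print(round(weighted, 3))    # → 0.5
```

In this invented example the raw figure (54%) overstates the population-wide figure (50%) because women, who answered one way more often, made up 70% of the sample but only 50% of the population. Weighting corrects for a skewed sample only when the relevant population characteristics are known; a self-selecting online quiz offers no such correction.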
The Guardian didn’t survey a group of people representative of the general public, or even of Guardian readers. Nor did it weight the responses to make the results representative of either group.
So the results have no value beyond telling us what those 70,000 respondents thought. Any suggestion that they can tell us anything else is flawed.
We’ve written more about how to spot misleading polls here.