How not to report opinion polls: A guide from YouGov's Anthony Wells

4 July 2012

This article was written by YouGov's Anthony Wells and originally published on the UK Polling Report site. He has kindly allowed us to feature it here.

1) Don't report Voodoo polls

For a poll to be useful it needs to be representative. A thousand people represent only themselves; we can only assume their views represent the whole of Britain if the poll is sampled and weighted in a way that reflects the whole of Britain (or whatever other country you are polling). At a crude level, the poll needs to have the right proportions of people in terms of gender, age, social class, region and so on.

Legitimate polls are conducted in two main ways: random sampling and quota sampling (where the pollster designs a sample and then recruits respondents to fill it, getting the correct number of Northern working-class women, Midlands pensioners, and so on). In practice true random sampling is impossible, so most pollsters' methods are a mixture of the two.
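To make the idea of weighting concrete, here is a deliberately simplified sketch in Python. It is not YouGov's, or any other pollster's, actual procedure, and every figure in it is invented: each respondent simply gets a weight equal to their group's share of the population divided by its share of the raw sample, so that weighted results reflect the target population.

```python
# A minimal weighting sketch (illustrative only; all figures are made up).
# Weight = a group's population share / its share of the raw sample.
population_share = {"men": 0.49, "women": 0.51}   # hypothetical target proportions
sample_counts = {"men": 430, "women": 570}        # hypothetical raw sample of 1,000

sample_size = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / sample_size)
    for group, count in sample_counts.items()
}

# Weighted vs unweighted estimate of agreement with some question (made-up data).
agree = {"men": 0.40, "women": 0.55}
unweighted = sum(count * agree[g] for g, count in sample_counts.items()) / sample_size
weighted_num = sum(count * weights[g] * agree[g] for g, count in sample_counts.items())
weighted_den = sum(count * weights[g] for g, count in sample_counts.items())
print(f"Unweighted: {unweighted:.1%}, weighted: {weighted_num / weighted_den:.1%}")
```

Real pollsters weight on several variables at once (and quota samples are designed so that little weighting is needed), but the principle is the same: the sample is adjusted to look like the population before any results are read off.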

Open-access polls (pejoratively called "voodoo polls") are sometimes mistakenly reported as proper polls. These are the sort of instant polls displayed on newspaper websites or run by pushing the red button on digital TV, where anyone who wishes to can take part. There are no sampling or weighting controls, so a voodoo poll may, for example, have a sample that is far too affluent, or educated, or interested in politics. If the poll was conducted on a campaign website, or a website that appeals to people of a particular viewpoint, it will be skewed attitudinally too.

More importantly, there are no controls on who takes part, so people with strong views on the issue are more likely to participate, and partisan campaigns or supporters on Twitter can deliberately direct people towards the poll to skew the results. Polls that do not sample or weight to get a proper sample, or that are open-access and allow anyone to take part, should never be reported as representing public opinion.

Few people would mistake "instant polls" on newspaper websites for properly conducted polls, but there are many instances of open-access surveys on specialist websites or publications (e.g. Mumsnet, PinkNews, etc) being reported as if they were properly representative polls of mothers, LGBT people, etc, rather than non-representative open-access polls.

Case study: The Observer reporting an open-access poll from the website of a campaign against the government's NHS reforms as if it was representative of the views of members of the Royal College of Physicians; the Express miraculously finding that 99% of people who bothered to ring up an Express voting line wanted to leave Europe; The Independent reporting an open-access poll of Netmums in 2010.

2) Remember polls have a margin of error

Most polling companies quote a margin of error of around plus or minus 3 points. Technically this is based on a pure random sample of 1,000 and doesn't account for other factors like sample design and the degree of weighting, but it is generally a good rule of thumb. What it means is that 19 times out of 20 the figure in a poll will be within 3 percentage points of what the "true" figure would be if you'd surveyed the entire population.
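For readers who want the arithmetic behind that rule of thumb, the sketch below uses the textbook formula for a simple random sample, z * sqrt(p(1-p)/n) with z = 1.96 for a 95% confidence level. It deliberately ignores design effects and weighting, as noted above, so treat it as showing where "about plus or minus 3 points" comes from rather than as a precise statement about any real poll.

```python
# Back-of-envelope margin of error for a simple random sample
# (ignores design effects and weighting, so real polls are a little worse).
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1000: +/- {margin_of_error(1000):.1%}")  # roughly +/- 3 points
```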

What it means when reporting polls is that a change of a few percentage points doesn't necessarily mean anything — it could very well just be down to normal sample variation within the margin of error. A poll showing Labour up 2 points, or the Conservatives down 2 points does not by itself indicate any change in public opinion.

Unless there has been some sort of seismic political event, the vast majority of voting intention polls do not show changes outside the margin of error. This means that, taken alone, they are singularly unnewsworthy. The correct way to look at voting intention polls is, therefore, to look at the broad range of ALL the opinion polls and whether there are consistent trends. Another way is to take averages over time to even out the volatility.

One poll showing the Conservatives up 2 points is meaningless. If four or five polls are all showing the Conservatives up by 2 points, then it is likely that there is a genuine increase in their support.
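As a toy illustration of why averaging helps, here is an invented series of polls scattered around a steady 40% share; none of the numbers are real, they simply mimic ordinary sampling noise. The rolling average is far less volatile than any individual poll, which is the point of looking at trends across many polls rather than single results.

```python
# Invented poll series around a steady 40% share, smoothed with a rolling average.
con_shares = [41, 38, 40, 42, 39, 40, 43, 39, 40, 41]  # hypothetical successive polls

window = 5
rolling = [
    sum(con_shares[i - window + 1 : i + 1]) / window
    for i in range(window - 1, len(con_shares))
]
print(rolling)  # hovers close to 40, unlike the individual readings
```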

Case study: There are almost too many to mention, but I will pick out the Guardian's reporting of their January 2012 ICM poll, which described the Conservatives as "soaring" in the polls after rising three points. Newspapers do this all the time of course, and Tom Clark normally does a good job writing up ICM polls… I'm afraid I'm picking this one out because of the hubris the Guardian displayed in their editorial the same day when they wrote "this is not a rogue result. Rogue polls are very rare. Most polls currently put the Tories ahead. A weekend YouGov poll produced a very similar result to today's ICM, with another five-point Tory lead. So the polls are broadly right. And today's poll is right. Better get used to it."

It was sound advice not to hand-wave away polls that bring unwelcome news, but unfortunately in this case the poll probably was an outlier! Those two polls showing a five point lead were the only ones in the whole of January to show such big Tory leads; the rest of the month's polls showed the parties basically neck-and-neck — as did ICM's December poll before, and their February poll afterwards. Naturally the Guardian didn't write up the February poll as "reversion to mean after wacky sample last month", but as "Conservative support shrinks as voters turn against NHS bill". The bigger picture was that party support was pretty much steady throughout January 2012 and February 2012, with a slight drift away from the Tories as the European veto effect faded. The rollercoaster ride of public opinion that the Guardian's reporting of ICM implied never happened.

3) Beware cross breaks and small sample sizes

A poll of 1000 people has a margin of error of about plus or minus three points. However, smaller sample sizes have bigger margins of error. Where this is most important to note is in cross-breaks. A poll of 1000 people in Great Britain as a whole might have fewer than 100 people aged under 25 or living in Scotland. A crossbreak made up of only 100 people has a margin of error of plus or minus ten points. Crossbreaks of under 100 people should be treated with extreme caution; under 50 they should be ignored.

An additional factor is that polls are weighted so that they are representative overall. It does not necessarily follow that cross-breaks will be internally representative. For example, a poll could have the correct number of Labour supporters overall, but have too many in London and too few in Scotland.

You should be very cautious about reading too much into small crossbreaks. Even if two crossbreaks appear to show a large contrast between two social groups, if they are within each other's margins of error this may be pure sample variation.
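Using the same simple random-sample approximation as in the earlier sketch (and remembering that real crossbreaks are usually even less reliable, because they are not internally weighted), it is easy to see how quickly the margin of error grows as the sample shrinks. The figures below are back-of-envelope only.

```python
# How the approximate 95% margin of error grows as sample size shrinks.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 100, 50):
    print(f"n={n}: +/- {margin_of_error(n):.0%}")

# Two crossbreaks of 100 people that differ by, say, 12 points can easily be
# pure sampling noise, since each is only accurate to about +/- 10 points.
```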

Be particularly cautious about national polls that claim to say something about the views of ethnic or religious minorities. In a standard GB poll the number of ethnic minority respondents is too small to provide any meaningful findings. It is possible to deliberately oversample these groups to get meaningful findings, but there have been several instances where news articles have been based on the extremely small religious or ethnic subsamples in normal polls.

Extreme caution should be applied to crossbreaks on voting intention. With voting intention, small differences of a few percentage points take on great significance, so figures based on small sample sizes that are not internally weighted are virtually useless. Voting intention crossbreaks may reveal interesting trends over time, but in a single poll they are best ignored.

Case study: Again, this is a common failing, but the most extreme examples are reports taking figures for religious minorities. Take, for example, this report of an ICM poll for the BBC in 2005 — the report says that Jews are the least likely to attend religious services, and that 31% of Jews said they knew nothing about their faith. These figures were based on a sample of FIVE Jewish respondents. Here is the Telegraph making a similar error in 2009 claiming that "79 per cent of Muslims say Christianity should have strong role in Britain", based on a subsample of just 21 Muslims.

4) Don't cherry pick

In my past post on "Too Frequently Asked Questions", one of the common misconceptions I cite about polls is that pollsters only give the answers that clients want. This is generally not the case — published polling is only a tiny minority of what a polling company produces, the shop window as it were, and major clients that actually pay the bills want accuracy, not sycophancy.

A much greater problem is people reading the results and seeing only the answers they want, and the media reporting only the answers they want (on the latter, this is more a problem with pick-up of polls from other media sources; papers who actually commission a poll will normally report it all). Political opinion polls are a wonderful tool: interpreted properly, they allow you to peep into what the electorate sees and thinks, and what drives their voting intention. As a pollster it's depressing to see people interpret them by chucking out and dismissing anything that undermines their prejudices, while trumpeting and waving anything they agree with. It sometimes feels like you've invented the iPad, and people insist on using it as a doorstop.

It should almost go without saying, but you should always look at poll findings in the round. Public opinion is complicated and contradictory. For example, people don't think prison is very effective at reforming criminals, but tend to be strongly opposed to replacing prison sentences with alternative punishments. People tend to support tax cuts if asked, but also oppose the spending cuts they would require. Taking a single poll finding out of context is bad practice, picking poll findings that bolster your argument while ignoring those that might undermine it is downright misleading.

Case study: Almost all of the internet! For a good example of highly selective and partial reporting of opinion polls on a subject in the mainstream press though, take the Telegraph's coverage of polling on gay marriage. As we have looked at here before, most polling shows the public generally positive towards gay marriage if actually asked about it — polls by ICM, Populus, YouGov and (last year) ComRes have all found pretty positive opinions. The exception to this is ComRes polling for organisations opposed to gay marriage, which asked a question about "redefining marriage" that didn't actually mention gay marriage at all, and which has been presented by the campaign against gay marriage as showing that 70% of people are opposed to it.

Leaving aside the merits of the particular questions, the Telegraph stable has dutifully reported all the polling commissioned by organisations campaigning against gay marriage — here, here, here and here. As far as I can tell they have never mentioned any of the polling from Populus or YouGov showing support for gay marriage. The ICM polling was actually commissioned by the Sunday Telegraph, so they could hardly avoid mentioning it, but their report heavily downplayed the finding that people supported gay marriage by 45% to 36% (or as the Telegraph put it, "opinion was finely balanced", which stretched the definition of balanced somewhat), instead running heavily on a question about whether it should be a priority or not. Anyone relying on the Telegraph for their news will have a very skewed view of what polling says about gay marriage.

5) Don't make the outlier the story

If 19 times out of 20 a poll is within 3 points of the "true" picture, that means 1 time out of 20 it isn't — it is what we call a "rogue poll". This is not an aspersion on or criticism of the pollster; it is an inevitable and unavoidable part of polling. Sometimes random chance will produce a wacky result. This goes double for cross-breaks, which have a large margin of error to begin with. In the headline figures 1 in 20 polls will be off by more than 3 points; in a crossbreak of 100 people, 1 in 20 of those crossbreaks will be off by more than 10 points!

There are around 30 voting intention polls conducted each month, and each of them will often have 15-20 crossbreaks too. It is inevitable that random sample error will spit out some weird rogue results within all that data. These will appear eye-catching, astounding and newsworthy… but they are almost certainly not. They are just random statistical noise.
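A quick simulation makes the point. The numbers here are arbitrary — a notional "true" figure of 40% and 2,000 simulated polls of 1,000 people each — but roughly one simulated poll in twenty lands more than 3 points from the truth purely by chance, which is the rogue rate described above.

```python
# Simulate repeated polls of 1,000 people where the true figure is 40%,
# and count how many land more than 3 points away purely by chance.
import random

random.seed(0)
true_share, n, polls = 0.40, 1000, 2000
rogues = 0
for _ in range(polls):
    estimate = sum(random.random() < true_share for _ in range(n)) / n
    if abs(estimate - true_share) > 0.03:
        rogues += 1
print(f"{rogues / polls:.1%} of simulated polls were off by more than 3 points")
```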

Always be cautious about any poll showing a sharp change. If a poll is completely atypical of other data, then assume it is a rogue unless other polling data backs it up. Remember Twyman's Law: "any piece of data or evidence that looks interesting or unusual is probably wrong".

Case study: Here's the Guardian in February 2012 claiming that the latest YouGov polling showed that the Conservatives had pulled off an amazing turnaround and won back the female vote, based on picking out one day's polling that showed a six point Tory lead amongst women. Other YouGov polls that week showed Labour leading by 3 to 5 points amongst women; that day's data was an obvious outlier. See also PoliticalScrapbook's strange obsession with cherry-picking poor Lib Dem scores in small crossbreaks.

6) Only compare apples with apples

All sorts of things can make a difference to the results a poll finds. Online and telephone polls will sometimes find different results due to things like interviewer effects (people may be more willing to admit socially embarrassing views to a computer screen than to an interviewer); the way a question is asked may make a difference, or the exact wording used, or even the question order.

For this reason, if you are looking for change over time, you need to compare apples to apples. You should only compare a question asked now to a question asked using the same methods and the same wording; otherwise any apparent change could actually be down to wording or methodology, rather than reflecting a genuine change in public opinion.

You should never infer changes by comparing voting intention figures from one company's polls with another's. There are specific house effects from different companies' methodologies which render this meaningless. For example, ICM normally show the Lib Dems a couple of points higher than other companies and YouGov normally show them a point or so lower… so it would be wrong to compare a new ICM poll with a YouGov poll from the previous week and conclude that the Lib Dems had gained support.
