The pollsters’ experimental election

Anthony Wells, Head of European Political and Social Research
June 01, 2017, 12:43 PM GMT+0

Why are polls showing such a wide range in the election campaign?

The election so far has been a volatile one. If you’d drawn a line chart of polls in the 2015 election campaign you’d have been looking at virtually parallel lines (though we will never know if that was because people’s views really were static, or faulty polls missed genuine movement).

In 2017 there has been no lack of movement – all the polling companies, whatever their methods, have shown a huge Tory lead in April getting ever narrower as the campaign has progressed. The difference between companies is on how tight the race has become.

From the pollsters’ point of view this is an experimental election. We all got it wrong in 2015 and we are all trying different methods to get it right this year. As I wrote at the start of the campaign, inevitably, some of those methods will be wrong and some of those methods (we hope) will end up being correct. Obviously, we’d all like the methods we’ve adopted to be successful – but we won’t know if they are until June 9th.

There are lots of differences in how polling companies do their sums. Some poll online, some by phone. Some weight by different things, such as education, political interest or newspaper readership. Some take different approaches to people who say they don't know.

For once, however, the difference in the polls in this election is easy to understand – it is almost wholly to do with how pollsters treat turnout.

The reason the polls got the 2015 election wrong was down to sampling, particularly among young people. The sort of young people who took part in polls were too engaged and too likely to vote, meaning polls overstated turnout among the young. Polling companies have taken different approaches to solving this, but they broadly fall into two categories. Some have tried to improve their samples to reduce the number of people who are very interested in politics. Others have changed their turnout models so that they assume the same low level of turnout among young people as happened in 2015.

Generally speaking, the polls that continue to show a large Conservative lead are those who are basing their turnout models on the pattern of turnout in 2015. Those that show smaller leads are basing turnout on how likely people say they are to vote.

In the case of YouGov we have mostly concentrated on improving our sample – recruiting more people who are less interested in politics and weighting by political interest and education. However, we no longer take people’s self-reported likelihood to vote as being entirely reliable. As past voting behaviour is a useful guide to whether people will vote this time, we weight down people who didn’t vote in 2015.

Take our most recent poll. After we had weighted our sample, taken account of how likely people say they are to vote, and weighted down the answers of those people who didn’t vote last time, we were left with a sample that implies turnout of 51% among people under 25 and 75% among people aged 65+; a turnout gap of 24 points between young and old.
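As a rough illustration of how an implied-turnout figure like this falls out of the weighting, the sketch below computes weighted turnout for two age groups. All respondents, weights and vote intentions here are hypothetical, chosen only so the arithmetic echoes the 51%/75% figures above; this is not YouGov's actual model or data.

```python
# Hypothetical sketch: each respondent carries a weight that already
# reflects demographic weighting and a down-weight for 2015 non-voters.
# Implied turnout for a group is the weighted share expected to vote.
respondents = [
    # (age_group, weight, will_vote)
    ("under25", 0.52, True),
    ("under25", 0.50, True),
    ("under25", 0.98, False),
    ("65plus",  1.00, True),
    ("65plus",  1.00, True),
    ("65plus",  1.00, True),
    ("65plus",  1.00, False),
]

def implied_turnout(rows, group):
    """Weighted share of a group's respondents who are expected to vote."""
    total = sum(w for g, w, _ in rows if g == group)
    voters = sum(w for g, w, v in rows if g == group and v)
    return voters / total

young = implied_turnout(respondents, "under25")   # ~0.51
old = implied_turnout(respondents, "65plus")      # 0.75
print(round((old - young) * 100))  # turnout gap in points -> 24
```

The same calculation with different weighting choices (e.g. forcing 2015 turnout levels onto each age group) is what produces the divergent headline leads between companies.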

Looking at estimates from past elections from the House of Commons library, in 2015 the turnout gap between young and old was 35 points, in 2010 it was 23 points, in 2005 it was 36 points. In other words, we’re showing a smaller gap than in 2015, but similar to 2010 and not one that we think is totally unrealistic if Jeremy Corbyn has enthused younger people.

This is not an article where a pollster boasts about how his method is definitely right and other people are all doing it wrong. We may be correct and other companies not. Others may be right and we may be wrong. The reality is, no one truly knows.

My own expectation has been that the Conservatives will probably get a majority of around 70, so the degree of narrowing in the polls to this point has personally surprised me. That said, I also reckoned that Remain would win and Donald Trump would lose.

But the job of a pollster is not to make a guess and then make the figures match it. It is to try and come up with a method that accurately measures what the public think and say they will do, and then report the results. When the results don’t come back the way you expect, you can’t change the method to match what you think the result “should” be.

You also can’t bury the results. Back in 2015 one pollster came unstuck for saying they had sat upon one poll result because it had “looked wrong.” We are not about to do the same. Recent political history is littered with examples of the received wisdom being wrong.

As the polls currently stand we are headed for one of two election results. It's possible that, come Election Day, all that young enthusiasm for Jeremy Corbyn will translate into real votes, leading to a close election with perhaps a small Tory majority or even a hung Parliament. In that case, our figures will end up about right, and the assumption that turnout patterns in 2017 would match those of 2015 will have caused some other pollsters to miss the real story.

The alternative is that all those young Corbynistas will prove a mirage and that some polls still contain too many of the sort of young people who vote, with the end result being that the Conservatives win a large or landslide majority. In that case, it will suggest the methods we've tried to correct the problems of 2015 probably haven't worked yet, and we'll need to explore turnout models based on demographics or alternative solutions.

This article originally appeared on Research Live

