This blog breaks one of my rules. I try not to criticise YouGov’s rivals. Like us, they do their best to provide reliable data. Like us, they are prone to sampling fluctuations: sadly, none of us have managed to repeal the laws of probability. However, ICM’s latest poll for the Guardian is so striking that it deserves attention. Every poll by every company since the spring of last year has shown Labour in the lead. Suddenly ICM says that lead has disappeared. It now puts Labour and the Conservatives neck-and-neck, with 36% each. Can this be true?
It certainly contrasts with YouGov’s latest poll for the Sun, conducted at virtually the same time. We have Labour 9% ahead.
So what is going on? If we compare YouGov and ICM on a like-with-like basis, the differences are not that great. We have Labour on 40% and the Conservatives on 31%. ICM’s ‘raw’ numbers are Labour 38%, Conservatives 33%.
Had ICM reported a five-point lead, its poll would have passed virtually unnoticed. But, as usual, it made two adjustments. First, it counted only those respondents it thinks will actually vote. Its Labour respondents were less likely than its Tory respondents to say they would vote. By adjusting for this, ICM reduced Labour’s lead from five points to two: 37-35%.
ICM then made a further adjustment. It looked at those respondents who said “don’t know”. Using information about how these people say they voted last time, ICM reckoned that this group contained more people who would, in practice, vote Conservative than Labour. This adjustment brought the two parties level, at 36% each.
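The two adjustments can be sketched in a few lines of code. Every number below (the shares, the turnout likelihoods, the past-vote split of the don’t knows, and the 50% reallocation fraction) is invented for illustration; none of it is ICM’s actual data or its exact weighting scheme.

```python
# Sketch of the two-step adjustment described above, with made-up inputs.

def turnout_adjusted(raw, likelihood):
    """Down-weight each party's raw share by its supporters' stated
    likelihood of voting, then renormalise to 100%."""
    weighted = {p: raw[p] * likelihood[p] for p in raw}
    total = sum(weighted.values())
    return {p: 100 * w / total for p, w in weighted.items()}

def with_dont_knows(shares, dk_by_past_vote, fraction=0.5):
    """Add back a fraction of the 'don't knows', allocated to the party
    each says they voted for last time, then renormalise."""
    counts = dict(shares)
    for party, dk in dk_by_past_vote.items():
        counts[party] = counts.get(party, 0.0) + fraction * dk
    total = sum(counts.values())
    return {p: 100 * c / total for p, c in counts.items()}

raw = {"Lab": 38.0, "Con": 33.0, "Oth": 29.0}          # raw shares, %
likelihood = {"Lab": 0.75, "Con": 0.82, "Oth": 0.70}   # hypothetical

step1 = turnout_adjusted(raw, likelihood)   # Labour's lead narrows
# More of the hypothetical don't knows voted Conservative last time:
step2 = with_dont_knows(step1, {"Lab": 1.0, "Con": 4.0, "Oth": 1.0})
```

With these illustrative inputs, each step shaves a little more off Labour’s lead, mirroring the 5-point, then 2-point, then level progression in ICM’s published tables.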
Both adjustments – for turnout and for ‘shy Tories’ – are perfectly defensible. I don’t believe that there is a single right or wrong way to conduct polls. All of us make judgement calls about our methods. Indeed, all of us in the political number-crunching business owe ICM a debt for the way that it pioneered new techniques for measuring voting intention, following the debacle of 1992, when every pollster seriously overstated Labour’s support and understated the Tories, and so failed to foresee John Major’s victory in that year’s election. And ICM has a creditable record in its final polls in subsequent general elections.
That said, ICM has also produced some erratic figures. A week before the 1997 election, it showed Labour’s lead slumping suddenly from 14 points to just five. No other pollster detected this remarkable shift; and Tony Blair’s eventual margin of victory was 13 points. Four years later, again with one week to go, ICM alone detected an equally sudden move in the opposite direction. It put Labour 19 points ahead, double its lead on election day seven days later.
I suspect that ICM’s latest poll is another aberration. Here is why. ICM questioned 1,003 people for its latest poll. Conventional statistical theory puts the margin of error at about 3%. That is, 19 times out of 20, the ‘true’ figure for each main party should be within three points of the polling number.
However, ICM’s voting figures are not based on the full sample. Its full tables show that it elicited voting intention from just 577 respondents. This lifts the margin of error to more than 4%. When ICM filters out those it thinks would not actually vote, its sample falls to just 444. The margin of error on this is around 5%. And that applies to the figure for each party. The figure that attracts the most attention – the gap between the two main parties – is subject to an even greater margin of error, of 7%. Erratic figures are not merely possible; given ICM’s sample sizes they are, from time to time, a racing certainty. In its latest poll, I believe that ICM’s raw numbers are slightly too Conservative, and its adjustments are both slightly too large. It is the cumulative impact of these separate, individually modest, factors that has ended up generating misleading headlines.
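These margins of error follow from the standard formula for a proportion, z·√(p(1−p)/n), with p = 0.5 as the worst case, and from the corresponding variance of the gap between two shares. A minimal sketch (the party shares of 36% each are taken from the poll; the formula choice is the standard textbook one, not necessarily the author’s exact calculation):

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a single share."""
    return z * math.sqrt(p * (1 - p) / n) * 100

def moe_lead(n, p1, p2, z=1.96):
    """95% margin of error for the gap p1 - p2 between two shares
    from the same sample (the shares are negatively correlated)."""
    var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(var) * 100

print(round(moe(1003), 1))  # full sample: ~3.1 points
print(round(moe(577), 1))   # those naming a party: ~4.1 points
print(round(moe(444), 1))   # after the turnout filter: ~4.7 points
print(round(moe_lead(444, 0.36, 0.36), 1))  # the Labour-Tory gap
```

On this calculation the margin of error on the gap comes out near 8 points, in the same territory as the 7% quoted above; either way, a two-point published lead is well inside the noise.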
YouGov’s figures can also be out of line. But we have two advantages. First, because we poll larger samples and don’t filter for turnout, our voting intention figures are typically based on around 1,200 respondents naming a party. This does not completely remove the risk of a ‘rogue’ poll, but it does make sudden violent sampling shifts less likely. Secondly, because we publish voting intention figures five times a week, it quickly becomes clear if one of our samples is wonky. Unless it conducts an extra poll, ICM will not publish further voting intention figures until mid-August, and therefore won’t be able to verify or correct its latest findings.
What, then, is the true position? My judgement is that Labour’s support has declined in recent months, from 43-44% last winter to 39-40% now. (The party seems to have dipped below this a fortnight ago, before recovering slightly.) The Conservatives are on 31-32% now, roughly where they were six months ago, having slipped back after May’s local elections. UKIP peaked at 15-16% in mid-May, and are now on 12-13%, fractionally ahead of the Liberal Democrats. ICM’s adjustments tend to reduce Labour’s lead, while its sample sizes are prone to generate more erratic movements – and, as now, headlines that are more dramatic but not necessarily more accurate.
To repeat: ICM deserves its high reputation. It is a well-run company that I respect enormously. It has every right to adjust its raw data as long as it explains its methods, which it does. However, I do have this question. If its ‘don’t knows’, ‘won’t votes’ and those thought unlikely to vote add up to more than half its sample, and thereby leave it so vulnerable to improbable fluctuations, is ICM really wise to poll only 1,000 people each time?