How to measure a debate

May 16, 2011

There has been great interest not only in the election debates, but in the polling of who ‘won’ or ‘lost’. At first sight, it may seem obvious how pollsters should measure that, but on closer inspection, it isn’t obvious at all.

For voting intention, we have well-established methods, accepted by all the polling companies, covering both the nature of the sample and the actual question asked. But we’ve had very little discussion about how we should all approach the new issue of debate polling. There are three issues that I hope we’ll have resolved by the time of the next election:

The ‘debate question’

What should pollsters actually be asking? We could ask which contender the respondent found most convincing, or most impressive. Should we prompt for quality of performance or for content? (In debating contests, for example, judges score on a number of separate criteria.) Or we could simply ask, in the plainest form: “Who do you think won?”

Many viewers will have firm loyalties to one party or another, and one might want to nudge them towards open-mindedness by prefacing the question with “Leaving aside your own previous loyalties…”. Or maybe we shouldn’t do that. It’s unsettled.

The sample

For voting intention, that’s straightforward: you try to make the sample as representative as possible of the whole electorate, and we have established techniques for doing so. Pollsters differ on which known characteristics of the electorate they weight to, but the general principle is not disputed: every citizen over the age of 18 has the right to vote, so the baseline population is well understood.
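To make the “weight to known characteristics” idea concrete, here is a minimal sketch of cell weighting in Python. The age bands, target shares and responses are all invented for illustration; real voting-intention polls weight on several characteristics at once (age, gender, region, past vote), typically by raking rather than a single variable.

```python
# A minimal sketch of cell weighting, with invented figures throughout.
from collections import Counter

# Each respondent: (age_band, stated voting intention).
respondents = [
    ("18-34", "Labour"), ("18-34", "Conservative"),
    ("35-54", "Conservative"), ("35-54", "Labour"),
    ("55+", "Conservative"), ("55+", "Conservative"),
    ("55+", "Liberal Democrat"), ("55+", "Labour"),
]

# Known population shares for each age band (illustrative only).
targets = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Observed sample share per band.
counts = Counter(band for band, _ in respondents)
n = len(respondents)
sample_share = {band: c / n for band, c in counts.items()}

# Weight each respondent so the weighted sample matches the targets.
weights = [targets[band] / sample_share[band] for band, _ in respondents]

# Weighted voting-intention shares.
tally = Counter()
for (band, vote), w in zip(respondents, weights):
    tally[vote] += w
total = sum(tally.values())
for party, w in sorted(tally.items()):
    print(f"{party}: {100 * w / total:.1f}%")
```

Because over-represented groups get weights below 1 and under-represented groups get weights above 1, the weighted sample matches the known electorate even when the raw sample does not.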

But with the TV debates, it’s significantly different. Should the sample be the same? The actual audience is not like the whole electorate; it is strongly skewed towards the most politically engaged, which skews its demographics too. Should we instead sample and weight to the general population, trying to answer “How would the nation have judged it, if the entire nation had been watching?” But you can’t technically do that: you don’t know how the less-inclined-to-watch would have responded to the same material had they been engaged. You would be creating a false picture, with no correspondence to reality.

Or you could try to make your sample like a jury, using only ‘floating voters’, or a sample balanced equally between supporters of each contender. You could even run a ‘before and after’ poll to measure the effect of the debate itself. Whichever design you choose, you’ll get a different result. As I understand it (I may be wrong), ICM, like YouGov, weights to the audience, which at least has a clear logic; I believe some others use a different model.
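To see why each design yields a different result, consider a toy calculation, with every number invented: the same raw verdicts, weighted once to a hypothetical debate audience dominated by engaged viewers and once to the general population, can produce opposite winners.

```python
# Illustrative sketch (all figures invented): how the weighting scheme
# alone can change who 'won' the same debate with the same responses.

# Two groups of respondents with different verdicts. 'aud' is the group's
# share of the actual debate audience; 'pop' its share of the population.
groups = {
    "highly engaged": dict(aud=0.70, pop=0.40, cameron=0.35, clegg=0.45),
    "less engaged":   dict(aud=0.30, pop=0.60, cameron=0.45, clegg=0.35),
}

def weighted_verdict(weight_key):
    """Weighted share saying each contender won, under one scheme."""
    cameron = sum(g[weight_key] * g["cameron"] for g in groups.values())
    clegg = sum(g[weight_key] * g["clegg"] for g in groups.values())
    return cameron, clegg

for scheme, key in [("audience-weighted", "aud"),
                    ("population-weighted", "pop")]:
    cam, cle = weighted_verdict(key)
    print(f"{scheme}: Cameron {cam:.0%}, Clegg {cle:.0%}")
```

With these invented numbers the audience-weighted poll has Clegg ahead and the population-weighted poll has Cameron ahead, which is exactly why the choice of sample model matters.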

Timing

Should you survey your sample as quickly as possible after the debate? Or wait a little? You might want to let the effect ‘sink in’, let it be deliberated a little. But one of the values of debate polling is to remove the effect of post-debate spin, with all the party professionals, the activists and pundits jumping up to convince us that their man ‘won’.

Pollsters pretty much agree we should be quick, but different methodologies determine just how quick. At YouGov, we have a method that greatly reduces outside interference, hardly allowing even the news channel’s presenter to sum up. We assemble our audience panel in advance, and tell them to go to their computers the moment the debate finishes, to find their email, click through to the survey and place their vote. No tea-making first, no discussion with friends on the phone, not even a moment of post-debate analysis on the TV screen.

Trouble is, a handful may do it moments before the last word is spoken. We don’t want to make the survey live before the end, but nor do we want the quickest respondents sitting there waiting, clicking on dead links; we try to time it just right. For the second debate, a few respondents clicked through before the end of the summation, which upset some Nick Clegg supporters, because he spoke last, and spoke well. As it happens, those early respondents turned out to be very slightly more pro-Clegg than the later ones, so it made no difference to the result. This time we’ll cut it finer, but for all those counting (as it were) the angels on the head of a pin, be assured: it will make no difference.
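For those who want the arithmetic behind “it made no difference”, here is a sketch, with invented counts, of how one might check whether the early click-throughs answered differently from later respondents, using a simple two-proportion z-test.

```python
# Back-of-envelope check (invented counts): did respondents who clicked
# through before the final words differ from those who answered after
# the end? Two-proportion z-test, standard library only.
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts of 'Clegg won' verdicts in each group.
early_clegg, early_n = 42, 100     # clicked through before the end
late_clegg, late_n = 390, 1000     # answered after the debate finished

z = two_prop_z(early_clegg, early_n, late_clegg, late_n)
p_value = math.erfc(abs(z) / math.sqrt(2))     # two-sided, normal approx.
print(f"z = {z:.2f}, p = {p_value:.2f}")       # small |z| -> no evidence of bias
```

A small z statistic and a large p-value, as with these invented counts, is the pattern consistent with the early respondents making no difference to the headline result.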

Of course, it’s not polls that decide who won or lost. They provide one kind of evidence, but any sample contains many viewers whose minds were made up beforehand, and they will colour any result. The only person who really decides the winner of the debate is you.