Last week MPs started their summer recess, their last chance for a break before the run-in to next year’s general election. Many of them are heading abroad; so, for this week, does this blog. Yesterday the New York Times published the findings of one of the largest media polls ever conducted in the United States. The CBS/New York Times/YouGov Battleground Tracker surveyed more than 66,000 people living in the 34 states that have Senate races in this November’s mid-term elections. Our findings suggest that Barack Obama risks losing control of the Senate, in addition to the House of Representatives, which the Democrats lost in 2010.
The bald facts are these. The Democrats currently have 53 Senators. Two Independent Senators caucus with the Democrats, bringing the party’s effective tally to 55. The Republicans have 45. Our survey suggests that the Republicans are on course to gain at least four seats, and possibly as many as eight. But five races are neck-and-neck, with candidates from the two parties within two points of each other, so there is all to play for. This autumn’s campaign could widen the Republicans’ current narrow advantage – or enable the Democrats to retain their majority.
For those who wish to explore the results in more detail, the New York Times report is here.
Having expanded on the headline above, I shall now take this blog in a different direction. Some readers may be interested in the New York Times’s decision, in collaboration with the CBS television network, to commission YouGov to survey this year’s mid-terms. In common with other (though not all) US media organisations, it has until now steered clear of online research. Its rethink is the latest sign that in survey research, as in so much else, technology is prompting big changes in the ways that we, and the media, find things out.
In America, as some years ago in Britain, resistance to online research was rooted in faith in probability samples. The idea was that the best polls were ones in which every person had an equal chance of being surveyed. When face-to-face polls became prohibitively expensive, telephone polls took over. By the late 1980s, well over 90% of American homes had phones. Polls could obtain a good spread of respondents; and by adjusting the raw samples to take account of such demographic factors as age, gender and ethnicity, they were able to provide an accurate guide to the mood of the nation. Automated ‘robopolls’ by companies such as Rasmussen brought costs down further, as voters were called by computers and used their phone keypads to enter their responses to questions posed by recorded voices.
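For readers who want to see the mechanics, here is a minimal sketch, in Python, of the kind of demographic adjustment described above. The figures are invented and weighting on a single variable is far cruder than any real pollster’s procedure; it simply shows how over-represented groups are counted for less, and under-represented groups for more, before an estimate is produced.

```python
# A toy illustration of demographic weighting (post-stratification).
# All figures are invented; this is not YouGov's or any pollster's actual method.

# Assumed population shares for one demographic variable (age group).
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Raw respondents: (age group, supports candidate A) -- young people under-sampled.
respondents = [
    ("18-34", True), ("18-34", False),
    ("35-54", True), ("35-54", False), ("35-54", False),
    ("55+", True), ("55+", True), ("55+", False), ("55+", False), ("55+", False),
]

n = len(respondents)
sample_share = {g: sum(1 for grp, _ in respondents if grp == g) / n for g in population}

# Each respondent's weight is their group's population share divided by its sample share,
# so over-represented groups are weighted down and under-represented groups weighted up.
weight = {g: population[g] / sample_share[g] for g in population}

unweighted = sum(1 for _, supports in respondents if supports) / n
weighted = (sum(weight[g] for g, supports in respondents if supports)
            / sum(weight[g] for g, _ in respondents))

print(f"unweighted support for A: {unweighted:.1%}")   # 40.0%
print(f"weighted support for A:   {weighted:.1%}")     # about 40.7%
```

Real polls adjust for several variables at once, typically by raking the sample to multiple population margins, but the principle is the same.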
Three factors have dented the appeal of probability samples.
The first has been the decline of landline phones. More than one in three Americans now has no landline phone. True, only 2% have no phone of any kind; but telephone polls in their heyday were part of a culture of people sitting at home having extended phone conversations. And robopolls can call only landlines: federal legislation bars them from calling mobile phones. In contrast, access to the internet is high and rising: 81% of Americans are online. It has penetrated every demographic group.
The second factor has been the collapse in response rates for telephone polls. The theory of probability samples depends on decent response rates. After all, what matters is not the quality of the sample design but the quality of the achieved sample.
This is something telephone polling companies seldom talk about. Two years ago, Pew Research broke ranks. Pew is a not-for-profit research organisation, with no shareholders to worry about; in the run-up to the 2012 election it published a remarkable report.
This showed that response rates had fallen by three-quarters in just 15 years. Only 9% of sampled households were responding to Pew’s telephone polls in 2012, down from 21% in 2006 and 36% in 1997. In some respects this did not matter too much: Pew still obtained the right proportion of registered voters, homeowners, and households with children. But its respondents were far more likely than the national average to engage in voluntary activity, contact a public official and talk regularly to their neighbours.
Of course, with smart weighting, it is possible to correct for all these things. The point is more fundamental. The case for probability samples is that they are intrinsically high-quality. Thanks to Pew, we know that this is no longer true. In which case, the basic objection to online panels – that they are, by their nature, not probability samples – loses its power.
Today it is clear that there is NO sampling system that can be relied on to provide perfect raw samples. Survey researchers are in the business of obtaining the broadest, fairest range of respondents that they can, and then extrapolating from the people they can reach to those they can’t. This task requires care, skill and judgement. And in this less-than-perfect world, online research has the advantage that, for any given budget, it can reach more people and ask them more questions – and so go further in the quest for accuracy and understanding – than traditional surveys.
The third factor is evidence from YouGov’s own panel. The very thing that probability enthusiasts dislike most about us – that we go back to the same people from time to time – has enabled us to demonstrate one of the defects of traditional telephone polls and, by so doing, explode one of the myths of modern American elections.
For decades, poll-watchers have been regaled with tales of convention and debate bounces. Presidential candidates almost invariably see their poll ratings rise directly after their own convention; and those who follow US elections closely may recall how badly Obama stumbled in the first television debate in 2012 – and saw his lead over Mitt Romney evaporate.
YouGov has polled the past three presidential elections and found no sign of these bounces. Telephone polls implied that millions of Americans switched support, at least for a while; but these switchers have seldom shown up on YouGov’s panel.
In 2012, we joined forces with Microsoft to conduct a special piece of research which showed what was happening. X-Box users were invited to respond as often as they wanted to election surveys. These were wholly unlike our normal surveys: opt-in polls that anyone could do, demographically skewed towards younger men, making no pretence at being representative. But we harvested vast amounts of data from more than 300,000 Americans, of whom more than 80,000 completed a number of surveys – enough to explore the impact of specific events in detail.
We looked at candidate support before and after that first Obama car-crash debate. When we examined the two samples as if they were separate groups of people, and weighted them demographically, in much the same way as telephone polls do, we got the same result: a marked shift from Obama to Romney.
But when we also weighted the data by how people had voted in 2008, and whether they described themselves as ‘liberal’, ‘moderate’ or ‘conservative’, it became clear that Obama had lost very little support. Digging down into the responses of the people who did repeated surveys, we found that few people shifted their vote. Instead, for a few days, Romney supporters were slightly more willing, and Obama supporters slightly less willing, to respond to election polls. As YouGov’s regular panel surveys had indicated, underlying sentiment barely shifted at all. Telephone polls were hit by a short-lived variation in response rates; they wrongly reported this technical phenomenon as a substantial shift in opinion.
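To make that mechanism concrete, here is a small simulation in Python. It is a sketch only: the numbers are invented, not the X-Box data or YouGov’s actual weighting scheme. The simulated electorate’s preferences never change; the only thing that changes after the ‘debate’ is how willing each side’s supporters are to answer a poll. The raw post-debate figures show an apparent swing to Romney, while weighting the same respondents by their simulated 2008 vote removes most of it.

```python
import random

# A toy simulation of differential response, with invented parameters.
# Preferences are fixed throughout; only willingness to respond changes.
random.seed(1)

# Each voter: (voted Obama in 2008?, prefers Obama now?) -- shares are illustrative.
electorate = []
for _ in range(100_000):
    obama_2008 = random.random() < 0.53
    prefers_obama = random.random() < (0.90 if obama_2008 else 0.06)
    electorate.append((obama_2008, prefers_obama))

def take_poll(rate_obama_supporters, rate_romney_supporters):
    """Draw respondents with response rates that depend on current preference."""
    return [(v08, pref) for v08, pref in electorate
            if random.random() < (rate_obama_supporters if pref else rate_romney_supporters)]

def raw_obama_share(sample):
    return sum(pref for _, pref in sample) / len(sample)

def obama_share_weighted_by_2008_vote(sample):
    """Re-weight the sample so its 2008 vote matches the full electorate's."""
    target = sum(v08 for v08, _ in electorate) / len(electorate)
    in_sample = sum(v08 for v08, _ in sample) / len(sample)
    w = {True: target / in_sample, False: (1 - target) / (1 - in_sample)}
    return (sum(w[v08] for v08, pref in sample if pref)
            / sum(w[v08] for v08, _ in sample))

before = take_poll(0.05, 0.05)    # pre-debate: both sides equally willing to respond
after = take_poll(0.045, 0.055)   # post-debate: Romney supporters briefly keener

print(f"pre-debate, raw:               {raw_obama_share(before):.1%}")   # roughly 50%
print(f"post-debate, raw:              {raw_obama_share(after):.1%}")    # looks several points lower
print(f"post-debate, past-vote weight: {obama_share_weighted_by_2008_vote(after):.1%}")  # most of the 'swing' disappears
```

The real analysis described above went further, weighting by self-described ideology as well as 2008 vote and comparing the same individuals before and after the debate; this sketch captures only the basic point that who answers can shift even when opinion does not.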
At one level these are arcane, technical issues that will excite only political obsessives. But at another level, they matter to more than pollsters like me. We seek to find out what people think. We have a duty to do this to the best of our ability. If polling methods lead to misleading conclusions, our understanding of the public mood is likely to be distorted. (I discussed a different version of this issue in a blog four weeks ago on the Scottish referendum).
The trouble is, there are few absolutes. In an age of changing technology and declining telephone response rates, the need for transparency and good judgement is paramount. Alongside its report of our results, the New York Times has published a candid account of its decision to commission YouGov, and the continuing controversies over online research.
The debate is far from over. I doubt it ever will be. What is clear is that the simple certainties of probability sampling no longer apply. Perhaps the most important thing is that we, and clients such as the New York Times, are aware of the challenges we face.