A summary of YouGov’s review into our 2024 general election performance

Anthony Wells, Head of European Political and Social Research
December 18, 2024, 1:34 PM GMT+0

YouGov’s model was the most accurate in terms of seats and the second most accurate for vote share – but there are ways we could have been more accurate still

Since the general election, we have paused our voting intention figures while we looked at how our polling performed and reviewed our methodology. We have now completed the review and will restart our regular voting intention polling in January.

At the start of the general election campaign in June this year, we made substantial changes to our methods, basing our regular voting intention on an MRP model similar to the one we use for our seat predictions. The main outcome of our review is that this method performed substantially better than our old approach, and we will be adopting it as a permanent change.
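For readers unfamiliar with the technique, the sketch below illustrates the basic idea behind MRP (multilevel regression and post-stratification): fit a model of vote choice on demographics using survey responses, then weight the model's prediction for each demographic cell by census counts of that cell within a constituency. Everything in it is a toy assumption (the data, the categories, and the use of a plain logistic regression in place of a full multilevel model), so it shows the concept rather than the model we actually run.

```python
# Toy illustration of the MRP idea: model vote choice from demographics,
# then post-stratify using census counts for one hypothetical constituency.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# 1) Survey responses: one row per respondent (all values are made up).
survey = pd.DataFrame({
    "age_band":  ["18-24", "25-49", "50-64", "65+", "25-49", "65+"],
    "education": ["degree", "degree", "no_degree", "no_degree", "no_degree", "degree"],
    "votes_lab": [1, 1, 0, 0, 1, 0],   # 1 = would vote Labour (toy outcome)
})

features = ["age_band", "education"]
X = pd.get_dummies(survey[features], dtype=int)
model = LogisticRegression().fit(X, survey["votes_lab"])   # stand-in for a multilevel model

# 2) Post-stratification frame: census counts of each demographic cell
#    within the toy constituency (again, made-up figures).
frame = pd.DataFrame({
    "age_band":  ["18-24", "25-49", "50-64", "65+"],
    "education": ["degree", "no_degree", "no_degree", "degree"],
    "count":     [8_000, 25_000, 20_000, 15_000],
})

frame_X = pd.get_dummies(frame[features], dtype=int).reindex(columns=X.columns, fill_value=0)
frame["p_lab"] = model.predict_proba(frame_X)[:, 1]

# 3) Constituency estimate = cell-level predictions weighted by cell population counts.
lab_share = (frame["p_lab"] * frame["count"]).sum() / frame["count"].sum()
print(f"Estimated Labour share in the toy constituency: {lab_share:.1%}")
```

The real model is considerably richer, but the post-stratification step works in the same spirit: predictions for small demographic cells, weighted up to each constituency's actual population.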

Our larger MRP model performed strongly at the election. It was the most accurate published MRP model in terms of seats, calling 92% of constituencies correctly, better than all the other MRP models and marginally better than the famously accurate BBC/ITV/Sky exit poll. In terms of national vote shares, our MRP model was the second most accurate of all the final polls and MRPs, with an average error on party shares of 1.4%. However, like almost all other polls, we overestimated the level of Labour support. We are always keen to review our methods and improve how we do things, so we looked at what we could have done to be even more accurate.
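For clarity, the "average error on party shares" is simply the mean absolute difference between a poll's final vote shares and the actual result, averaged across the main parties. A minimal illustration of the calculation, using placeholder figures rather than our published predictions or the real result:

```python
# Mean absolute error between final predicted vote shares and the actual result,
# averaged across the main parties. All numbers are placeholders for illustration
# only; they are not YouGov's predictions or the real election figures.
predicted = {"Lab": 38.0, "Con": 22.0, "Reform": 15.0, "Lib Dem": 12.0, "Green": 6.0}
actual    = {"Lab": 35.0, "Con": 24.0, "Reform": 14.0, "Lib Dem": 12.0, "Green": 7.0}

avg_error = sum(abs(predicted[p] - actual[p]) for p in predicted) / len(predicted)
print(f"Average error on party shares: {avg_error:.1f} points")  # prints 1.4 with these illustrative numbers
```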

The review found:

  • There was evidence of a genuine late swing away from Labour and towards the Conservatives. Breaking our MRP fieldwork down day by day showed a clear fall in Labour support and a rise in Conservative support over the fieldwork period, and recontacting respondents after the election showed that Labour support continued to fall after our polls had been published.
  • When it came to turnout, our samples contained too many people who went out to vote and not enough people who stayed at home. Our MRP models (both our large MRP and our regular voting intention MRP) based their turnout projections mostly on demographics, so they were more accurate than our traditional approach would have been, but we still factored in respondents' self-reported likelihood to vote. If we had based turnout purely on demographic modelling, we would have been more accurate (a simple illustration of the two approaches follows this list).
  • The balance of Conservative loyalists and defectors in our sample wasn't right. This may have been because the Brexit Party's decision to stand down in many seats in 2019 meant that past vote sampling and weighting was less effective at ensuring samples had the correct balance between Conservative and Reform UK supporters.
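To make the turnout point above concrete, the sketch below contrasts a turnout projection based purely on demographics with one that also factors in respondents' self-reported likelihood to vote. The turnout probabilities, the simple 50/50 blend and the toy vote choices are all illustrative assumptions rather than the weights our model actually uses.

```python
import pandas as pd

# Toy respondents: demographics, self-reported likelihood to vote (0-1) and a
# binary toy vote choice. All values are illustrative.
respondents = pd.DataFrame({
    "age_band":       ["18-24", "25-49", "50-64", "65+"],
    "says_will_vote": [0.6, 0.9, 1.0, 1.0],
    "votes_lab":      [1, 1, 0, 0],
})

# Turnout probability taken purely from demographics (e.g. estimated from past
# validated-turnout data): hypothetical figures.
demographic_turnout = {"18-24": 0.45, "25-49": 0.60, "50-64": 0.70, "65+": 0.80}
respondents["p_demo"] = respondents["age_band"].map(demographic_turnout)

# Blended approach: demographics plus some weight on self-reported likelihood.
# A simple 50/50 average stands in for whatever blend a real model might use.
respondents["p_blend"] = 0.5 * respondents["p_demo"] + 0.5 * respondents["says_will_vote"]

for col in ["p_demo", "p_blend"]:
    share = (respondents["votes_lab"] * respondents[col]).sum() / respondents[col].sum()
    print(f"{col}: projected Labour share among voters = {share:.1%}")
```

With these toy numbers, the blended projection gives a higher Labour share than the purely demographic one, which is the direction of the error the review identified.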

We also think that we could have improved how we reflected ethnicity in our large MRP model and our voting intention MRPs. Ethnicity was already part of our model, but it only broke respondents down into White, Asian, Black, Mixed or Other categories. Our review found that our MRP would likely have done better in some key seats if we had used more detailed categories and looked separately at voters of Indian ethnicity and of Pakistani or Bangladeshi ethnicity.
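As a rough illustration of this change, the snippet below recodes a detailed ethnicity variable two ways: into the broad five-category scheme, and into a more detailed scheme that separates Indian from Pakistani or Bangladeshi respondents. The labels and mappings are hypothetical and are only meant to show the kind of recoding involved, not the actual coding in our model.

```python
import pandas as pd

# Toy respondent-level ethnicity data; labels and mappings are illustrative only.
respondents = pd.DataFrame({
    "ethnicity_detailed": ["White British", "Indian", "Pakistani",
                           "Bangladeshi", "Black Caribbean", "Mixed"],
})

# Old, broad five-way coding.
broad_map = {
    "White British": "White", "Indian": "Asian", "Pakistani": "Asian",
    "Bangladeshi": "Asian", "Black Caribbean": "Black", "Mixed": "Mixed",
}

# More detailed coding: Indian kept separate from Pakistani/Bangladeshi,
# so the model can pick up differences between those groups.
detailed_map = {
    "White British": "White", "Indian": "Indian",
    "Pakistani": "Pakistani/Bangladeshi", "Bangladeshi": "Pakistani/Bangladeshi",
    "Black Caribbean": "Black", "Mixed": "Mixed",
}

respondents["ethnicity_broad"] = respondents["ethnicity_detailed"].map(broad_map)
respondents["ethnicity_model"] = respondents["ethnicity_detailed"].map(detailed_map)
print(respondents)
```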

We will be restarting our regular national voting intention figures using the approach we used at the election, with two changes. Firstly, we will base our turnout modelling purely on demographics, rather than on respondents' self-reported likelihood to vote. Secondly, we will include a more detailed ethnicity breakdown in our model.

While we don't think they would have had an impact on our 2024 polling, our review also found other areas where we think we can improve in the future, including changing the way we ask about education to more closely reflect the 2022 census and reviewing the way we measure and weight by social grade. We will continue to look into these areas in 2025 and, as ever, will keep our methodology under constant review.

You can read our full post-election methodology report here.