Simon Atkinson provides commentary on a seminar held at the Royal Statistical Society on 8 December 2016, which was jointly hosted by the British Polling Council, the Market Research Society and the National Centre for Research Methods.

The seminar was held to identify the main lessons learned from polling conducted in the UK during the referendum on EU membership. Speakers explored the approaches adopted by the pollsters and provided independent evaluations of the methodologies used.

The event was chaired by Sharon Witherspoon (Academy of Social Sciences); the speakers were John Curtice (BPC President and University of Strathclyde), Ben Page (Ipsos MORI), Adrian Drummond (Opinium), Patrick Sturgis (NCRM, University of Southampton), Stephen Fisher (University of Oxford) and Will Jennings (University of Southampton).

We are very grateful to Simon Atkinson, Chief Knowledge Officer, Ipsos, for providing us with the following summary of the event:

Simon Atkinson writes:

“You could have done better”. This was one of the milder charges faced by the UK’s opinion pollsters in the wake of the 2015 General Election. That campaign, of course, saw the polling numbers so far wide of the outcome that an Independent Inquiry had to be convened to look at the polls’ shortcomings and make recommendations for future practice. (1)

“You could have done better, too”. A sentiment expressed at a London event held in December looking at the opinion polls’ performance in the UK’s referendum on EU membership. Only this time, the charge was directed more at the media, as well as at “The Establishment”, who simply could not conceive that a Leave victory was actually possible.

There was certainly no air of celebration in the proceedings, but what did emerge was broad agreement that the polls did a better job than in the 2015 election campaign, and in rather more difficult circumstances.

Such a conclusion does not (at the time of writing anyway) seem to be particularly accepted out there in the world at large. The received wisdom, referred to on a number of occasions during the seminar, is that it was (yet another) disaster for the polling industry. What to do?

As Jane Frost of the MRS pointed out when welcoming delegates, this is a big deal for the sector. Not because of the revenue opinion polling brings to a UK market research industry now valued at £4.8 billion: opinion polling accounts for just 3% of revenues. Rather, it is the PR challenges – and the searching questions from clients – which emerge every time there is a “polling miss” that should not be underestimated.

As British Polling Council President John Curtice said, it is not the case that the opinion polls were pointing to a Remain win all the way along. The opinion polls told the story of a very close campaign. Of 35 surveys published during the campaign, 15 had a Remain lead, 3 were a dead heat, while 17 had Leave in front.

Nor is it the case, as Will Jennings pointed out, that this is a feature of a broader polling industry in disarray. His survey of elections around the world finds no evidence that the average error in polls is getting worse over time.

Which brings us on to the question of how best to communicate what the polls are saying. The post-2015 election inquiry into the performance of the polls, commissioned by the BPC/MRS, had already recommended that further work be done to communicate the limitations of opinion polls – for example by highlighting likely confidence intervals. Meanwhile, the MRS has issued new guidance on best practice in publishing survey findings. In all this, the challenge is how to communicate complex issues in a simple and powerful way. UK Government guidance recommends assuming the numeracy skills of a 9-year-old when getting messages across involving numbers and percentages.
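
To make the confidence-interval point concrete, here is a minimal sketch (my illustration, not anything shown at the seminar) of the textbook margin-of-error calculation for a single poll, assuming a simple random sample and ignoring the design effects that real polls cannot ignore:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, Remain on 52%
p, n = 0.52, 1000
moe = margin_of_error(p, n)
print(f"Remain: {p:.0%}, margin of error: +/-{moe:.1%}")
print(f"Plausible range: {p - moe:.1%} to {p + moe:.1%}")
```

On these assumptions the margin of error is roughly three points either way, so a 52–48 poll is consistent with either side being ahead – exactly the kind of nuance the guidance asks to be communicated.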

The audience was reminded that it’s unlikely that newspaper clients will welcome a scenario where their pollsters insist on shaded areas of their charts to highlight the range of possibilities as to where the “true numbers” might actually lie. Perhaps individual pollsters should be bolder and simply say “we don’t know” when asked to opine on the outcome. Would the public thank the pollsters for adopting such a cautious approach? “You don’t know what you’re doing” being just one potential reaction.

***

The 2016 experience may not have been a disaster, but there was still plenty of opportunity for sober reflection. On the eve of voting, more polls pointed to a Remain victory than a Leave win, and of course the shadow of the 2015 polling experience still looms large.

“With the benefit of hindsight”. A phrase used on more than one occasion during the course of the afternoon. Pollsters are like French vignerons, tending their plot of land. They are experienced in working on their own terroir, using the grape blends they are familiar with and the skills they have developed over time. Some years are of course better than others, and sometimes they have to rethink their methods. But the broad principles of what they are working with generally remain consistent over time.

This time it was different. The EU Referendum saw the pollsters operating on different territory – terrain which was uncharted and, at the same time, will never be seen again.

On the in-tray: How to deal with likely turnout? What about the don’t knows? Do we need a “squeeze” question? How to understand an electorate which is not being offered a “left-right” choice, but something which cuts across party lines?

Take the example of turnout. Even if we achieve an amazingly representative national sample, we still need to adjust our findings so they represent those who will actually make it to the polling stations. The 2015 General Election experience had highlighted this as one of the critical things to get right – many pollsters saw their numbers hit by (even) lower than expected turnout among likely Labour voters.
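
To give a flavour of what such an adjustment can look like, here is a minimal sketch – invented respondents and a deliberately crude rule, not any pollster’s actual model – of weighting answers by a stated 0–10 likelihood to vote:

```python
# Hypothetical respondents: vote intention plus a 0-10 likelihood-to-vote score.
respondents = [
    {"vote": "Remain", "likelihood": 10},
    {"vote": "Remain", "likelihood": 5},
    {"vote": "Leave",  "likelihood": 9},
    {"vote": "Leave",  "likelihood": 8},
    {"vote": "Remain", "likelihood": 3},
]

def turnout_weighted_share(rows, side):
    """Share for `side` when each respondent counts likelihood/10 of a vote -
    one simple (and contested) way of filtering down to probable voters."""
    total = sum(r["likelihood"] / 10 for r in rows)
    side_total = sum(r["likelihood"] / 10 for r in rows if r["vote"] == side)
    return side_total / total

print(f"Unweighted Remain share:       {3 / 5:.0%}")
print(f"Turnout-weighted Remain share: {turnout_weighted_share(respondents, 'Remain'):.0%}")
```

Even in this toy example the headline figure moves by several points once the less likely voters are discounted, which is why the choice of turnout model matters so much.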

But what of this referendum? An April 2016 survey of pollsters and assorted experts resulted in a “predicted turnout” of 60%. In the end it was 72%. Many of the polls – which of course always have a higher proportion of voters in their samples – pointed to a high-ish turnout. But there was still the challenge of coming up with a model that would work in these unique circumstances. For example, we heard that new voters (who gave the 2015 general election a miss, but turned out at the EU Referendum) split 60/40 in favour of Leave.
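
To see why that split matters, here is a small worked example with made-up totals (only the 60/40 figure comes from the seminar): suppose the people who also voted in 2015 divided evenly, while a block of new voters broke 60/40 for Leave.

```python
# Made-up illustration of how previously abstaining voters can move the headline figure.
voters_2015 = 30_000_000   # hypothetical: voted in 2015 and in the referendum
new_voters  =  3_500_000   # hypothetical: skipped 2015 but voted in the referendum

leave_2015 = 0.50          # assume the 2015 voters split evenly
leave_new  = 0.60          # new voters split 60/40 for Leave, as reported at the seminar

total = voters_2015 + new_voters
leave_share = (voters_2015 * leave_2015 + new_voters * leave_new) / total
print(f"Overall Leave share: {leave_share:.1%}")   # about 51% - enough to tip a dead heat
```

A turnout model that assumed those extra voters would stay at home would, on these numbers, have pointed the wrong way.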

Alongside this were reflections on the sampling issues and the benefits of different modes. In contrast to the 2015 election, online polls did better than phone polls this time, while the experiences of those moving to a more “purist” random sampling method, as advocated by many after 2015, raised more questions than they answered.

As you may have gathered by now, those looking for a detailed exposition of how newer techniques like neuroscience or social intelligence could be applied to polling would have come away disappointed. The focus was very much on the horse-race and the art and science of trying to get the final polls (and associated projections) right. And it is still the survey that is very much the vehicle under scrutiny for doing this.

In this context, the need to learn from 2015 – and to adapt to these unique circumstances – meant that methodological plurality was a real feature of this campaign. By the time of the final referendum polls, 5 of the 10 organisations were weighting by education, 3 of the 10 by attitudes and 8 out of 10 by past vote. The audience heard some fascinating expositions of how questions relating to (for example) social conservatism, education and media consumption now need to be factored into the practice of opinion polls.
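
As a flavour of what “weighting by education and past vote” involves in practice, here is a minimal raking (iterative proportional fitting) sketch with invented sample data and targets – not any polling company’s actual scheme:

```python
# Adjust respondent weights so the weighted sample matches population targets
# on education and 2015 past vote. All figures are invented for illustration.
respondents = [
    {"education": "degree",    "past_vote": "voted",     "weight": 1.0},
    {"education": "degree",    "past_vote": "voted",     "weight": 1.0},
    {"education": "no_degree", "past_vote": "voted",     "weight": 1.0},
    {"education": "no_degree", "past_vote": "abstained", "weight": 1.0},
]

targets = {
    "education": {"degree": 0.30, "no_degree": 0.70},
    "past_vote": {"voted": 0.66, "abstained": 0.34},
}

def rake(rows, targets, iterations=20):
    """Repeatedly rescale weights so each variable's weighted distribution
    matches its target shares."""
    for _ in range(iterations):
        for variable, shares in targets.items():
            total = sum(r["weight"] for r in rows)
            for category, share in shares.items():
                observed = sum(r["weight"] for r in rows if r[variable] == category)
                if observed > 0:
                    factor = share * total / observed
                    for r in rows:
                        if r[variable] == category:
                            r["weight"] *= factor
    return rows

for r in rake(respondents, targets):
    print(r)
```

The point of the toy example is simply that graduates – over-represented in this sample relative to the target – end up counting for less, and that adding more weighting variables makes the final numbers increasingly dependent on the targets chosen.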

And what of the role here of human agency – in the form of the pollsters themselves? Well, they were the first to acknowledge that this was an occasion when their adjustments and models tended to make things worse rather than better. Which brings us to a fundamental question which faces many market researchers: are these numbers good enough? And what can I do to make things better?

One note of encouragement I took away was the fact that the industry – represented here by the opinion pollsters – is still able to do a pretty good job at measuring what a population at large is thinking. We are all grappling with issues to do with respondent engagement and participation. But the battle is not lost. The raw numbers were not bad, the polls concluded that it was basically a toss-up, and they weren’t far off the mark - a pattern which appears to have some parallels across the Atlantic in the pollsters’ fairly accurate assessments of the US national vote.

Simon Atkinson, Ipsos

NB. IJMR has recently issued a Call for Papers on the topic ‘Challenges in accurately measuring public opinion’. You can find the Editor’s blog launching this, with a link to the Call for Papers here.

There will also be an IJMR hosted panel on the same topic at MRS Annual Conference - Impact 2017 - in March.

