The following is a summary of a more detailed analysis, covering wider issues, which will feature in my Editorial in issue 57/5 (September).

As many of you will know, the British Polling Council (BPC) and MRS have launched an inquiry into the performance of the opinion polls in the UK preceding the May general election. A distinguished panel of experts has been appointed, chaired by Patrick Sturgis (U. Southampton and Director of the National Centre for Research Methods). There are two key differences between this inquiry and the one set up by MRS in 1992. Firstly, the 2015 panel is totally independent of the polling sector, comprising mainly academics (see the BPC website for details), whereas in 1992 leading pollsters predominated. Secondly, the 1992 inquiry's final report was not published until July 1994 (although an initial view appeared by June 1992), whereas the latest panel hopes to publish its report in early March 2016.

Initial open meeting

The BPC/MRS hosted an initial open meeting, run by the National Centre for Research Methods, on the afternoon of June 19th, held appropriately at the Royal Statistical Society in London, and on the day that the possibility of a Bill in Parliament to limit polling in the run-up to future elections was mooted. The agenda mainly comprised representatives of each of the main polling companies (ICM, Opinium, ComRes, Survation, Ipsos MORI, YouGov, Populus) presenting their interpretation of the situation and outlining their plans for internal enquiries. All started with a mea culpa statement, and agreed that being within 'sampling error' (whatever that means, or however it is measured, given the way samples are drawn today) was not a good enough excuse for mispredicting the outcome. It was a very sackcloth-and-ashes affair. John Curtice (U. Strathclyde), the BPC President, in his opening address stressed the impact the polls had on how the campaign was fought, but with the caveat that any detailed analysis of this impact has not yet emerged, and is in any case outside the remit of the inquiry, which is focussing on methodological issues.

Is Britain ‘a nation of liars?’

So are we 'a nation of liars?', as Ivor Crewe asked in his 1993 JMRS paper analysing the 1992 situation (JMRS Vol. 35, No. 4), my April IJMR Landmark Paper selection? There was little evidence in the post-election polls to support a late swing of any significance, but do these recall polls suffer from the same methodological problems as the pre-election polls?

If true, this would also tend to suggest there were no 'spiral of silence' or 'shy Tory'/'lazy Labour' effects, but as the results show (see the BPC website), understating the Conservative vote and overstating support for Labour, as happened in 1992 and in 1970, is still at the heart of the problem. As Martin Boon (ICM) demonstrated, there had been no Tory lead in the polls since 2012.

But it is early days: neither the panel nor the polling companies have yet got very far in their investigations. However, whilst all the polls misinterpreted the mood of the electorate, that does not mean there was a common view on the reasons.

Data collection methods

In today's world the two main options for data collection by the pollsters are 'phone or online. But with the mobile revolution in full swing, landline-based methods no longer guarantee adequate coverage, so mobile top-up quotas are used, although the proportions varied between companies. As one delegate commented, this is a very dynamic statistic.

The pollsters' views

The issues identified in the presentations by the pollsters were:

  • Sample bias (online and ‘phone methods)
  • Identifying the voting public and estimating likely turnout
  • Weighting methods/modelling
  • Differential turnout
  • Tactical voting
  • Factors possibly unique to this election, such as a favoured prime minister
  • Overstating Labour versus understating Conservative support

In the discussion following the presentations, some further interesting points emerged:

  • Should seat projection modelling be considered in the future?
  • The constituency-level polls commissioned by Lord Ashcroft in Scotland appear to have produced a higher degree of accuracy. Will the panel have access to these polls to identify what contributed to this accuracy?
  • Is it ‘modelling’ rather than ‘sampling’ error that lies at the heart of the problem?
  • How important is the image of the individual party leaders, and how could the impact of this be measured?
  • Should the voting intentions question replicate the voting slip content for each constituency in which opinion polling takes place?

The challenges

As I've already stated, the focus of the inquiry is on methodology, but interpretation and media coverage surely cannot be ignored. The evidence to date points towards three key areas for investigation:

  • sampling
  • question design
  • weighting/adjustment/modelling methods.

But in essence this is a review of the overall research design used in opinion polling, especially the challenge of representing the voting public rather than the general public. For example, the post-election data from ICM indicated a level of turnout (86%) well above the actual level (66%).
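To illustrate the scale of that gap, here is a minimal sketch in Python (my own illustration, not ICM's actual method) of the simplest possible turnout correction: down-weighting respondents who claim to have voted, and up-weighting those who admit they did not, so the weighted sample matches the official 66% turnout.

    # Simplest possible turnout adjustment (illustrative only).
    reported_turnout = 0.86  # share of ICM's post-election sample claiming to have voted
    actual_turnout = 0.66    # official turnout at the May 2015 general election

    voter_weight = actual_turnout / reported_turnout                  # ~0.77
    non_voter_weight = (1 - actual_turnout) / (1 - reported_turnout)  # ~2.43

    print(f"claimed voters weighted by {voter_weight:.2f}, "
          f"admitted non-voters by {non_voter_weight:.2f}")

The point of the sketch is how crude even this correction is: it assumes the over-claimers are missing at random, which is exactly what the inquiry cannot assume.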

In summing up the presentations, Sturgis emphasised the panel's need for detailed data and called for maximum transparency from the pollsters. This needs to include unweighted data, as in the post-1992 inquiry. As one delegate commented, it should also include access to the parties' private polls, which were not covered in the presentations. Sturgis wondered whether being within 'margin of error' was sufficiently accurate, and pondered the future of polling, citing the Guardian's decision to continue polling but not publish the results until after the panel has reported.
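On the 'margin of error' point, a back-of-the-envelope calculation (my own, not from the meeting) shows why being 'within sampling error' is cold comfort. For a simple random sample of 1,000, the 95% margin on a single party's share is about ±3 points, and errors of that size on two parties' shares can turn a dead heat into a six-point lead:

    import math

    n = 1000   # typical published poll sample size (illustrative assumption)
    p = 0.34   # one party's estimated vote share

    moe = 1.96 * math.sqrt(p * (1 - p) / n)  # 95% margin of error under simple random sampling
    print(f"margin of error: +/- {moe:.1%}")  # roughly +/- 2.9 points

And, of course, quota and panel samples are not simple random samples, so even this figure is generous.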

There was also the concern that solving the 2015 problem may not provide a generic, long-term solution. A big challenge arising from the evidence to date is predicting turnout. The pollsters also implied that there were distinct differences in the turnout behaviour of Conservative and Labour supporters, which need to be investigated. Changing data collection methods has implications for cost and immediacy, which may be unacceptable; but as we concluded in the IJMR debate at the MRS Conference, this raises the issue of 'fit-for-purpose' research methodology: do the current methods used for polling pass the test for sampling recommended by Reg Baker in that debate (see IJMR 57/3)?

In the IJMR debate, Corrine Moy (GfK) applied the metaphor of 'fishing' and 'cooking' to methodology: 'fishing' is using prior knowledge to best effect in research design; 'cooking' is any post-survey adjustment applied to the data. Both require a good deal of knowledge, but in the case of measuring voting intentions is this available, for example to predict turnout, or the likely behaviour of those who have not voted before, more accurately? The presenters on June 19th raised the challenge of factoring in likely turnout levels and the difference between representing the views of the general and voting populations. The consistent factor over the years is the understating of Conservative support, a seemingly systematic problem.
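For readers unfamiliar with the 'cooking' side, the classic post-survey adjustment is raking (iterative proportional fitting): scaling the sample alternately to known population margins until both match. A toy sketch, entirely my own construction rather than anything GfK presented:

    import numpy as np

    # Achieved sample counts; rows = two age bands, cols = past vote (toy numbers).
    cells = np.array([[120.,  80.],
                      [ 60., 140.]])

    row_targets = np.array([0.45, 0.55]) * cells.sum()  # assumed population margins
    col_targets = np.array([0.50, 0.50]) * cells.sum()

    for _ in range(50):  # alternate row and column scaling until the margins match
        cells *= (row_targets / cells.sum(axis=1))[:, None]
        cells *= col_targets / cells.sum(axis=0)

    print(cells.round(1))  # adjusted counts; cell/raw ratios give the respondent weights

Moy's point stands regardless of the algorithm: raking can only fix imbalances on variables whose population margins are known, and likely turnout is precisely the margin nobody knows in advance.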

New world; new methods?

Compared to 1992, we live in a much more complex world. So much has changed since then. Telephone polling was still a relatively new methodology, fixed-line 'phones were the only devices available, and the level of ex-directory subscribers was then relatively low. There was no internet, no mobiles and no social media. Face-to-face data collection was still used by pollsters. Pollsters measuring voting intentions for the Indian general election appear to have got it right for the first time, but apparently by using, and constantly refreshing, complex models.

However, having listened to the presentations on the 19th, the issues raised look uncomfortably similar to those discussed in 1992, as recorded in Crewe's paper and the MRS report. The context may have changed dramatically, but the methods for measuring voting intentions remain broadly as in 1992, albeit with the move by some pollsters to online data collection. There is no doubt that the political landscape changed dramatically in 2010, and continued to do so in the run-up to this year's election. The polls also failed to accurately predict the outcome of the Scottish referendum, and a further referendum on membership of the EU is on the cards.

It will be interesting to see what the panel, with six of its nine members being academics, concludes on the whole issue of data collection methods and the commercial pressures that have increasingly led to face-to-face methods being ditched in favour of faster, cheaper options. If the NCRM proposal for a random probability-based access panel ever sees the light of day, would it produce a different result for measuring voting intentions than other methods? An interesting experiment in itself.

I will provide further updates from the inquiry either in my Editorial or blogs in the coming months.

 

How to access the International Journal of Market Research (IJMR)

Published by SAGE, MRS Certified Members can access the journal on the SAGE website via this link.

Go to the IJMR website
3 comments

Malcolm Rigg, 16 Jul 2015

Corrine's metaphor of fishing and cooking is an insightful way of looking at the issues, the difference being that researchers put the findings on the table but the dish is only eaten every five years. By that time a lot has changed. Election polling was hard enough when there was a strong ideological element and probably more consistent voting.

Peter Mouncey, 17 Jul 2015

The increased challenge by 2015 was at the core of the discussions at the RSS meeting. However, I think there will be important lessons for all researchers in the findings about where we are today and 'fit for purpose' research design. I completely agree with your view on Corrine's metaphor.

Malcolm Rigg, 17 Jul 2015

One option that should be seriously considered is banning polls for a short period. They often disagree and rarely get it right, one consequence being the effect on the credibility of polling (very useful generally) and the risk of undermining the credibility of market research as a whole. I assume there's evidence from other countries that have banned polling near election day that could be reviewed.
