I’ll let you into a secret – maths was NOT my strong point at grammar school. I never really got the hang of algebra, and the maths teacher I had for several years did little to stimulate interest or understanding in his students. However, going to college changed all that. We had a young, inspirational lecturer for statistics, and suddenly the fog lifted and I got the point of it all. Suffice it to say that I achieved a high mark in a maths-related subject for the first time in my life. I was weaned on texts such as Moroney’s ‘Facts from Figures’ and, for research methods, Moser and Kalton’s ‘Survey Methods in Social Investigation’.

So, when I first worked in market research, as part of Fred Johnson’s team at the Gas Council, I never thought of myself as a statistician, but I felt I knew the principles and theory of sampling – high-quality research design being crucial to the major studies we used to help our internal clients forecast demand, such as the Peak Load Survey for energy consumption and the then AGB Home Audit for measuring appliance ownership.

Later in my career, when I ran market research at the AA, quality survey design was crucial for measuring demand for roadside services and new products, and for developing the Members Satisfaction Index that transformed the quality of service delivery in the mid-1990s. Quality research design was also crucial to measuring trends in motoring expenditure, published each quarter in ‘Drive’ magazine, and to supporting the AA’s public policy activities – including our 1980s replication of Colin Buchanan’s seminal study ‘Traffic in Towns’ (1963). Representativeness was key.

Not that the Gas Council or the AA were, or are, in any way unique in demanding high quality in research design. It’s simply that these experiences led me to appreciate the importance of quality in sample design, and of representing a given population if the findings were to have any credibility or validity. These studies were based on personal interviews or postal surveys.

At one time, interviewers often knocked on my door and recognisable agencies ‘phoned me for my opinions. I can’t recall the last time I saw a face-to-face interviewer going about their business, and the only time in recent years I’ve experienced genuine ‘phone surveys is in follow-up studies after my car has been serviced! As for receiving a mail survey...

A webinar on Margin of Error (MOE)

I guess you are wondering where on earth all this is leading? It’s the experience of joining the Peanut Labs webinar on the evening of January 28th. The topic was ‘Using Margin of Error with non-probability panels: An #MRX Debate’, chaired by Annie Pettit (Chief Research Officer, Peanut Labs), the panellists being John Bremer (Toluna), Nancy Brigham (Ipsos Interactive Services), Steve Mossop (Insights West) and Trent Buskirk (Marketing Systems Group). It was like entering a parallel universe!

Did I really hear that MOE stats are produced for studies based on non-probability samples because the client demands them as a basis for judging quality – and that, if they are not provided, the business might go elsewhere? Is it true that when MOE was provided in the past by one leading agency for non-probability ‘phone studies, the caveat effectively stated that the calculation was rubbish – but the client expected to see it? Journalists also apparently derive comfort from MOE when ‘assessing’ the quality of an opinion poll result – the media want a confidence measure, so give them MOE to keep them quiet? And this even though, as one panel member claimed, journalists don’t understand how panels and polls work.
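
For readers who have not met it for a while, the MOE being debated is the textbook calculation for a simple random sample:

$$\mathrm{MOE} = z\sqrt{\frac{p(1-p)}{n}}$$

where $z \approx 1.96$ at the 95% confidence level. For a proportion of $p = 0.5$ and $n = 1{,}000$ respondents, this gives roughly ±3.1 percentage points. The calculation rests on every member of the population having had a known, non-zero chance of selection, which is precisely the assumption a non-probability panel cannot make.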

But why is this, or any other, quality measure needed if no one really understands what it might mean? Does it mean we are not trusted? Probably not, although we know from other evidence that trust in market research is none too high (see the results of the latest GRBN survey on trust). As one panel member stated, users want one figure by which to judge quality – which sounds just like the justification used for the Net Promoter Score (billed by Reichheld as ‘The One Number You Need to Grow’, Harvard Business Review, December 2003): keep the measurement of customer attitudes as simple as possible for boards to digest.

Welcome to the age of internet-based quantitative research, where probability samples are dead (outside the public sector and major market measurement surveys) and where, as the panel admitted (though we knew this already), after more than ten years of access-panel-based studies there is still no agreed quality measure in the field. The panel expressed the view that the trade and professional bodies in research were remiss for not addressing this crucial issue (at that point, I mourned the demise of the Research Development Fund in the UK, which existed to research exactly that sort of issue), but they also agreed that trying to agree an industry standard might be like herding cats! Surely it is in every panel owner’s interest to find a solution (or solutions) – or is that a naïve thought?

As one panel member mused, ‘are we selecting respondents, or are they selecting us?’ In the case of access panels, and how membership is marketed to the public, I think it is the latter. One panel member felt that sector bodies needed to be more effective in identifying and publicising bad practice – but who is to judge the good from the bad, especially when agencies can be very coy about how they really operate (claiming ‘commercial confidentiality’)?

And, to be fair to the panel, there was also serious discussion of possible solutions, and of the challenge of not overcomplicating the situation and confusing the client. There was a call for educating clients and the media to think differently – that, in my view, is an unattainable dream. However, we cannot behave like a rabbit caught in the headlights. We are, hopefully, the experts, and if a quality measure that users can readily understand is what is needed to win their trust, it is our job to find one. After all, we managed to justify quota sampling from a quality perspective, and we addressed the sampling challenges posed by the shift to land-line ‘phones in the 1980s.

The solutions discussed included the credibility interval and a Methodology Statement (which reminded me of the long, complex privacy policy statements we are required to agree to when buying products and services). Also suggested was developing a measure based on effective sample size to adjust the simple MOE calculation, so that users become more aware of the importance of good quality, fit-for-purpose sampling methods. The importance of context in defining quality was also mentioned, but this could be difficult – does an access panel provide samples representative of any given target population, or does it represent only those who decide to join a panel? And the world is getting more complicated with multi-sourced data and blending.
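
To make the effective sample size idea concrete, here is a minimal sketch in Python (my own illustration, not something presented in the webinar), using Kish’s well-known approximation for the effective size of a weighted sample:

```python
import math

def effective_sample_size(weights):
    """Kish's approximation: n_eff = (sum of weights)^2 / (sum of squared weights).
    Equals the raw n when all weights are equal, and shrinks as the
    weighting becomes more uneven."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

def adjusted_moe(weights, p=0.5, z=1.96):
    """The textbook 95% MOE, computed on the effective sample size
    rather than the raw respondent count."""
    n_eff = effective_sample_size(weights)
    return z * math.sqrt(p * (1 - p) / n_eff)

# Hypothetical panel of 1,000 respondents in which one demographic group
# has been up-weighted threefold to match population targets.
weights = [1.0] * 800 + [3.0] * 200

print(round(effective_sample_size(weights)))  # ~754, not 1,000
print(round(adjusted_moe(weights), 3))        # ~0.036, i.e. ±3.6% rather than the naive ±3.1%
```

Note the limits of such a sketch: it widens the interval to reflect the variance added by weighting, but it says nothing about selection bias, which is the deeper question with access panels.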

Yes, the world of research is very different today from the one I described at the start of this piece. Maybe we need a new paradigm for quality; maybe, as one panellist suggested, we need a bridge from the old world to the new. There seems to be a widening gap between what counts as acceptable quality in commercial market research and in research conducted for government and the public sector (see my Editorial, ‘Web surveys of the general population: a robust alternative to current methods?’, IJMR 56/4, 2014). What is not in doubt is that we have a long way to go and a lot to do. But it seems to me the need for a solution is urgent. So, who is prepared to grasp the nettle?

Click here to hear the whole webinar

MRS Conference: IJMR debate

For those of you attending the MRS annual conference in London on March 17/18th, there’s another opportunity to join in a discussion on sampling. IJMR is hosting a debate on the afternoon of the 18th: ‘Fit-for-Purpose Sampling in the Internet Age’.

Reg Baker (Marketing Research Institute International, USA), leader of the AAPOR task force, Co-Chair of the group that is finalising the ESOMAR guidelines on internet sampling, and Chair of a new AAPOR task force on survey methods, will speak in the session, alongside Corrine Moy (GfK) and Doug Rivers (YouGov). The session will be chaired by Adam Phillips (Real Research), Chair of ESOMAR’s Professional Standards Committee. Full details can be found here.

We can’t promise to give you a solution, but you will leave the session much better informed on current practice and the options available for delivering credible findings in an era when budgets and deadlines are tight, and technology promises low-cost, quick solutions.

How to access the International Journal of Market Research (IJMR)

Published by SAGE, MRS Certified Members can access the journal on the SAGE website via this link.

Go to the IJMR website
