I’m sure you are well aware that IJMR is hosting a debate on online sampling methods on March 18th at this year’s MRS Conference, especially as I referred to it in my last blog.

Adam Phillips, a member of the IJMR Executive Editorial Board, is chairing the session and will pose five key questions for our internationally renowned panel members, Reg Baker, Doug Rivers and Corrine Moy, to address during the discussion:

  • How serious is the sampling quality problem for online research? 
  • Are we accepting a culture of limited transparency and “caveat emptor” which risks undermining trust in the whole research industry?
  • Is the pressure for low cost and quick results leading to a serious compromise in research standards?
  • How can I judge whether findings from an online survey are reliable? 
  • How can I commission online research which is ‘fit-for-purpose’ in meeting my needs?

Here are my thoughts on why these questions need tough answers.

How serious is the problem?

Look no further than my Editorial in Vol. 56 Issue 4, reporting on the RSS symposium, ‘Web surveys of the general population: a robust alternative to current methods?’ The evidence presented there was quite definitive – if you want to capture the views of a representative cross-section of the population, then most online samples don’t deliver to the standards necessary for public sector research. We can use the internet once we have a relationship with participants, but not for that crucial initial contact. Also, the studies on replicability across internet panels demonstrate that such replicability is illusory – in theory, you could pick the source that gives you the best result. So, if the public sector has unease about online sampling, shouldn’t other sectors have some doubts too? In our March issue (57/2), published later this month, you will find a further public sector example as a Conference Note from last December’s Social Research Association conference.

There’s also a common-sense perspective: if samples are primarily based on self-selected participants, asked time and time again for their views on a disparate range of topics using only the internet for data collection, doesn’t that send out some danger signals? I think it does. In IJMR submissions we worry about papers based on convenience sampling methods, but that’s what many clients are buying with online research – AAPOR and ESOMAR tell us so. We also know from ESOMAR’s annual survey that in many countries online research accounts for a high proportion of quantitative work.

Limited transparency?

How much information are clients given about the methods used by the different providers? AAPOR has pointed out the difficulties of deriving effective factors to correct for bias, but how do sample providers reassure clients? We also saw from the recent webinar that there are currently no accepted quality measures for online quantitative research, and when measures are provided to assuage client demands, they may be statistically meaningless. Then there’s the role played by the black-box routers used to select the samples: how exactly do these work, and what impact do they have on sample provision in access panels? Compared with the open debate that accompanied the development of fixed-line telephone research, the methods employed in internet research have had relatively little public scrutiny.
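To make the weighting point concrete, here is a minimal sketch of post-stratification, the usual way such ‘correction factors’ are derived. This is my own toy illustration, not drawn from AAPOR or any panel provider, and the population shares and sample counts are invented. It shows why weights of this kind can only correct imbalance on variables we observe; self-selection bias on anything unmeasured passes straight through.

```python
# Toy post-stratification example (illustrative figures only).
# Hypothetical population shares by age group.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical online-panel sample counts, skewed towards younger people.
sample_counts = {"18-34": 500, "35-54": 300, "55+": 200}
n = sum(sample_counts.values())

# Weight for each cell = population share / sample share.
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# Younger respondents are down-weighted (w < 1), older ones up-weighted
# (w > 1). But if panel volunteers within each age group differ in their
# attitudes from non-volunteers, these weights do nothing to fix that
# hidden bias -- which is precisely the difficulty AAPOR points to.
```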

Cost and time pressures?

The GreenBook annual survey has in the past demonstrated the divergent views of clients and agencies on this one. Clients argue they are looking for the best ‘fit-for-purpose’ research solution, but agencies argue that cost and time are the real drivers behind client need. We live in a world where the internet provides instant answers to our questions – just ‘Google it’. So why wouldn’t that expectation apply to the questions we want answered via research? And cost as an issue never goes away in a world where pressure on the bottom line is so intense. But, I would argue, it’s no longer just a quest for a decent level of profit; it’s all about maximising profit, whatever the cost – evident, for example, in zero-hours employment contracts.

Yes, cost should be an important issue, but are the principles of value-for-money and fit-for-purpose no longer the driving forces when commissioning commercial research? Sometimes we need a degree of patience: clients should really consider whether a bit more time spent doing the job better might be time well spent in the longer run. Is the scale of the decision that underpins the desire for some research matched by an appropriate investment in that research? So, is it that clients would like to be viewed as thoughtful, knowledgeable buyers of research, but show their true colours when the proposals come in? Or are agencies seeing the emperor’s new clothes – knowing that cost and time are in fact the key criteria, no matter what the client would like them to believe – and therefore simply cutting to the chase in their proposals? In ‘Consumer.ology’, Philip Graves, proposing his AFECT framework, berated the market research sector for ignoring context and environment when conducting research. Does high-speed, low-cost internet-based research ignore these important considerations?

Reliability?

This is surely the key factor: what is the point of commissioning research unless it delivers findings that can be trusted? As described above, replicability is a big issue. However, it is often very difficult to assess the ultimate reliability of findings – for example, is what people say they will do borne out in their subsequent behaviour? We know this to be a big issue, so why exacerbate it through sub-standard sampling methods with questionable statistical credentials?

Commissioning ‘fit-for-purpose’ internet research?

The objective of this debate is to provide answers to this final question. Maybe the internet is not the right solution anyway, as many researchers in the public sector believe. Maybe, if enough clients demand a better standard, we will see increased demand for internet surveys based on random probability selection methods, with recruitment undertaken through a channel other than the survey itself. Agencies argue that demand outside the public sector is unproven, with clients apparently happy with the current research product, despite the evidence suggesting a need for caution.

In conclusion, I’m not saying that all internet quantitative research is based on flawed sampling methods, but the evidence suggests there are serious concerns about quality that need to be addressed. Addressing them will take more debate about the issues, increased transparency, some research into research methods and, sometimes, a willingness to stand up against the ‘need it now, need it cheap’ mantra that may not deliver a ‘fit-for-purpose’ research solution. We hope the IJMR debate informs delegates and helps shape future actions to address these issues.

So, if you are attending the MRS Conference on March 18th, do join us in the afternoon for this important debate. However, if you’re unable to attend the conference, we’ll be providing a detailed write-up in IJMR.

How to access the International Journal of Market Research (IJMR)

The journal is published by SAGE; MRS Certified Members can access it on the SAGE website via this link.

Go to the IJMR website