In ‘We can do better’, the Viewpoint in our latest issue (Vol. 56 Issue 1), Reg Baker addresses a question posed to a panel he served on at last September’s ESOMAR Congress: 

“Do we need to get over ourselves and stop worrying too much about representativeness, as opposed to delivering new insights?” 

To do so, Baker turned to the findings of the AAPOR task force that examined the reliability of the non-probability sampling methods commonly used in online quantitative research (see www.aapor.org for the full report, and my Editorial in IJMR Vol. 55 Issue 4 for key findings). 

Baker argues that covariates can provide what he calls the ‘secret sauce’, increasing reliability without compromising speed and cost. 
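
To make the covariate idea concrete for readers who have not seen it applied: below is a minimal sketch of one of the simplest covariate-based adjustments, post-stratification weighting, in which an online sample is re-weighted so that its profile on a known covariate matches population figures. This is an illustration only, not Baker’s own method, and the age bands and proportions are invented for the example.

```python
# A minimal sketch of post-stratification weighting: re-weight an online
# sample so its distribution on a known covariate matches the population.
# All figures below are hypothetical.
from collections import Counter

# Hypothetical respondents, each tagged with an age band (the covariate)
sample = ["18-34", "18-34", "18-34", "35-54", "35-54", "55+"]

# Hypothetical population proportions for the same age bands
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

counts = Counter(sample)
n = len(sample)

# Each band's weight = population share / sample share, so that weighted
# counts reproduce the population profile on this covariate
weights = {band: population[band] / (counts[band] / n) for band in counts}

for band in sorted(weights):
    print(f"{band}: weight {weights[band]:.2f}")
# Over-represented groups get weights below 1; under-represented, above 1.
```

The obvious caveat, and the heart of Baker’s warning, is that weighting on observed covariates only removes bias to the extent that those covariates actually explain who ends up in the sample.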

However, he concludes that: ‘Arguably the most troubling aspect of online research as currently practiced is that we often have no real idea of the biases or how good our numbers are. Where once we damned survey results with the faint praise of being “directional” we now accept them as accurate representations of the marketplace. 

‘Worse yet, too many in our industry do not understand the difference. We can do better. And we should.’

This is a very troubling and damning conclusion for a sector supposedly based on applying scientific and statistical principles. How have we got here? I admit that I don’t have the answer, but I can suggest some clues.

In 2008, Kim Dedeker, then at P&G, said in Advertising Age (16-09-08): ‘Without transforming our capabilities into approaches that are more in touch with the lifestyles of the consumers we seek to understand, the consumer-research industry as we know it will be on life support by 2012’. 

Well, that deadline passed two years ago, and the roof has certainly not fallen in as yet (and Dedeker later moved to Kantar). But transformations have taken place, and are still taking place, especially in using social media to get closer to consumers. 

As Mike Cooke of GfK said at around the same time: ‘…we are becoming a listening economy, and while the future of market research is bright, it will be different’ (Conference Notes, IJMR Vol. 51 Issue 4).

Has ‘listening’ become the new focus in market research? 

But what are we listening to? A lot of this listening seems to be algorithm-based. James Ball argued (‘From guessing game to tyranny’, The Guardian) that: ‘In the world of big data, this kind of relationship has a huge amount of power: once we know two things are associated, we don’t need to know why’ (the emphasis is mine). 

Well, any market researcher worth their salt should know that is simply not true! 

Paraphrasing what David Brooks argued last year in ‘What data can’t do’ (New York Times), algorithms: cannot identify quality; don’t understand social relationships and interactions (analogue communications); can create spurious relationships; struggle with context, narrative and emergent thinking (humans do this on autopilot); obscure values due to mediation; and promote ‘memes over masterpieces’ (missing creativity and originality) – and I would add, can’t address the vitally important ‘why?’ in human behaviour (insight?).

But the question I’m raising is whether we are in danger of being seduced by numbers, and tending to ignore the reliability of the source – whether or not it is ‘fit for purpose’. 

And that’s the second time that ‘insight’ has popped up in this blog - but is the thirst for insight also part of the problem? 

Jeremy Bullmore’s insightful (!) comment – ‘Why is a good research report like a refrigerator? Because every time you open it, a light comes on’ – seems dated in a world where reports, which provided context for each finding and traditionally contained a detailed technical appendix, have been replaced by sound bites and visualisations under the broad heading of ‘insight’. 

Essential details of the ‘how’ may not be apparent. 

I’m also confused about what an ‘insighter’ does, compared to a ‘market researcher’, so I looked at a few jobs advertised under the general heading of ‘insight’ to get some clues. They contain phrases such as ‘delivering actionable insights’; ‘value added insights’; ‘deliver impactful customer insights’; ‘designing exciting research projects that deliver true insight for clients’; and ‘able to deliver fresh and compelling insights with business critical implications’. These seemingly meaningless, jargon-ridden phrases leave me none the wiser (and I could be accused of cherry-picking to make a point!). But whilst MR has been criticised in the past for delivering findings that either don’t really address the business issues or fail to stimulate action, is the pressure to produce nuggets of golden insight perhaps at times overriding the need to deliver reliability? 

Can the balance between quality and cost/timeliness tip in favour of the latter when the pressure is on? 

Sure, cost and timeliness are vital ingredients when commissioning research, but the context is critical – what is the destination of the outputs? Is the decision the research is supporting of low value to the organisation, or are big-bucks investments riding on the outcome? 

‘Fit for purpose’ should be the mantra, whatever the source of the data – not the pursuit of that golden nugget of insight wherever it comes from. But, as Baker warns, we need to be sure we know when data is ‘fit for purpose’ and when it isn’t. 

How do you define ‘fit for purpose’? Add your comments below.

