In a recent interview in The Guardian, technology entrepreneur Vivienne Ming commented that big tech firms are entrusting some of the most profound problems in history, such as who should be recruited, how mental illness should be treated, and who should go to prison and for how long, ‘to a bunch of very young men who have never solved a problem in their lives’.

Ming goes on to say that whilst AI is a very powerful tool for solving problems, ‘if you can’t work out the solution to a problem yourself, an AI will not work it out for you’. She also contends that those working in AI need a thorough grounding in ethics, but this needs to be more than learning from a book: ‘Ethics is like resilience, you get good at it by failing’.

Lack of experience in problem solving is just one key issue facing the development of AI; a further major factor influencing the design of AI tools is unconscious bias – the impact of which was also covered in a series of features in The Guardian during the first week of December. Also, read ‘The Perils of Perception’ by Bobby Duffy (Atlantic Books, 2018) for an in-depth analysis of all the factors that lead to our perceptions often being far removed from reality. As Duffy demonstrated at the launch of his book back in early September, researchers are not immune from misconception!

ASC conference

Against this backdrop, the Association of Survey Computing (ASC) conference ‘Alexa, what’s the future of market research?’, held on November 15th, provided a very interesting, and relevant, review of the impact of AI on market and social research. In the opening session, Ray Poynter (NewMR) and Rosie Ayoub (Norstat) presented the findings from an international survey of researchers and clients, demonstrating that there is still a relatively low level of real awareness of what constitutes AI and machine learning (ML) in the wider world, and of their applications within research. Over half of participants mentioned chatbots, automated facial coding and sentiment analysis, whilst over 40% mentioned text analytics and the automated transcription of video.

The authors identified three key applications in research: capturing and analysing behaviour via camera devices – producing prodigious amounts of data; analysing unstructured data from open-ended questions and qualitative research; and taking over the role of interviewer, including the interrogation of social media data. They described the main AI/ML tools: expert systems, supervised and unsupervised ML, deep learning and, in the distant future, artificial general intelligence.

They concluded by advising those commissioning projects to ask three key questions: what AI/ML tools are being applied; has AI been used to build new tools (creating faster/bigger/cheaper, and perhaps, better solutions) or are new AI/ML tools built for each application (slower and more expensive); are the tools commonly used, bespoke or pilot developments?

The challenges of automated coding of data were covered from different perspectives in four sessions: by Fabrizio Sebastiani (ISTI-CNR) – no stranger to this topic within IJMR, with a new paper due to be published next year; Tim Brandwood and Pat Molloy (Digital Taxonomy) – discussing the lessons from five projects; Dale Chant (Red Centre Software Pty Ltd) – describing the application of neural networks; and Mark Rogers (GetSentiment Ltd) and Victor Pekar (University of Birmingham) – focussing on analysing customer feedback.

These four presentations demonstrated the pitfalls in text analysis, covering ambiguity, miscoding, the importance of context, data cleansing, false positives and negatives, and different forms of quantification. All four described the structured process necessary in making sense of text data, including the development and use of training sets. Further presentations by Ryan Taylor (Netquest) and Daniel Bailey (Data Liberation) discussed applications of AI to visual and audio data, citing a skills shortage and the limitations of current tools as weaknesses, versus the speed of analysis leading to lower costs.
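As a toy illustration of why context matters in automated coding – and how false positives arise – consider a naive keyword-based sentiment coder. This sketch is purely illustrative: the lexicon and verbatims are invented, not drawn from any of the projects above, and real tools use trained models rather than keyword counting.

```python
# Naive keyword-based sentiment coding of open-ended survey comments.
# Illustrates pitfalls named above: ambiguity, missing context (negation),
# and false positives. The lexicon below is a made-up example.

POSITIVE = {"great", "helpful", "fast"}
NEGATIVE = {"slow", "rude", "broken"}

def naive_code(comment: str) -> str:
    """Code a verbatim as positive/negative/uncoded by counting keywords."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "uncoded"

# A negated phrase produces a false positive: "not helpful" still matches
# the keyword "helpful", so the comment is miscoded as positive.
print(naive_code("The staff were not helpful at all"))          # positive (wrong)
print(naive_code("Delivery was slow and the box was broken"))   # negative
print(naive_code("It arrived on Tuesday"))                      # uncoded
```

The first example is exactly the kind of miscoding that a human-curated training set, and human review of borderline cases, is meant to catch.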

Bias in AI

One of the sessions covered the points raised in the introductory paragraphs to this blog. Bethan Blakeley (Honeycomb) discussed the challenges posed in developing AI tools and solutions by human-induced biases, citing the gender and prejudice biases that have appeared in recruitment and motor insurance applications of AI, and the impact of false positives, for example in deciding who was fit to be released from prison. Blakeley advocated more transparency, accountability and responsibility in eliminating bias, and discussed whether AI tools could be built to detect bias and evaluate data and models.

The issue of unconscious bias also surfaced in the MRS ‘Methodology in Context’ conference, held in London on November 22nd, which featured a panel session on this topic based on the latest MRS Delphi report, ‘Deconstructing bias’, covering three main areas where such bias can appear in research: Cultural, Managerial and Data.

The debate was chaired by Tim Phillips, with a panel of Jan Gooding (MRS President), Jay Owens (Pulsar), Jake Steadman (Twitter) and Colin Strong (Ipsos). They defined unconscious bias as learned stereotypes that impact everyday decision making.

As Blakeley opined, prejudice is a learned trait – it is not an innate human characteristic. However, the panel pointed out the difficulties of training people to have unbiased views. Unconscious bias can be embedded within the culture of an organisation, surfacing in recruitment and marketing strategies and in attitudes towards customers. It can lead to distrust, for example in police stop-and-search policy, which can damage society.

Obviously, as researchers we can help debias decision making, provided we are cognisant of our own biases and are not influenced by any inherent bias in the culture of client organisations that could lead to bias in survey design, reporting and recommendations – perhaps even in the filtering of who works on a particular project.

As the Delphi report underlines, bias in data can lead to very bad decisions – and this applies to how AI/ML tools are developed and used within research. We must ensure that the builders and testers have not just the technical knowledge, but also the appropriate experience and understanding of the application context, together with a strong ethical and moral perspective. This suggests a team approach, not a ‘black-box’ solution, and the use of ethical panels to help ensure new technologies are as bias free as possible. The report recommends a hypothesis approach to building algorithms, taking into account relevant legislation, such as GDPR and equal opportunities law. The report concludes with a case study summarising the salutary lessons learned from the inquiry into the failure of the polls to predict the outcome of the 2015 UK general election, citing unrepresentative samples and confirmation bias at the heart of the issues. I covered this inquiry in detail in my IJMR Editorials at the time.

Unconscious bias is a pernicious trait. As the final quotation in the report warns: ‘We tend not to notice bias until we get things wrong’.

A role for IAT?

In The Guardian’s series of features mentioned earlier is an article discussing whether Implicit Association Tests (IAT) can be used to detect unconscious bias.

Replication has proved difficult, and meta-analyses have found IAT to be a weak predictor of behaviour. Quoting Brian Nosek (University of Virginia) – one of the team that developed IAT, who questions its ability to measure a meaningful trait – Hannah Devlin (The Guardian Science Correspondent) concludes her article with a warning: ‘If you implement an intervention that doesn’t work, it can reinforce people’s beliefs that nothing will work. It’s incumbent on us to use the best evidence available’. We will be covering the strengths and weaknesses of IAT in a future Viewpoint.

Currently, IJMR has a Call for Papers on this topic: ‘What role can machine learning and artificial intelligence play in market research?’

Presentations from some of the ASC conference sessions can be found here.

How to access the International Journal of Market Research (IJMR)

Published by SAGE, MRS Certified Members can access the journal on the SAGE website via this link.

Go to the IJMR website
