'The Science of Using Science' - This is the title of a new discussion paper recently published by The Alliance for Useful Evidence. The principle underlying the paper is that simply providing access to evidence does not mean that it will be used. Think of the hundreds of scientific journals regularly publishing peer-reviewed papers, plus other sources of quality research, all clamouring for attention.

How do authors create the optimum conditions for their piece of evidence to be included on a shortlist being compiled by other researchers, policy makers, or anyone seeking to produce a cogently argued, evidence-based input that improves the quality of a decision?

If you’ve never heard of it, the Alliance is a UK-based partnership comprising the Big Lottery Fund, the Economic & Social Research Council (ESRC) and Nesta (an innovation charity with a mission: ‘To help people and organisations bring great ideas to life’) that was formed to champion the need for useful evidence and ‘provides a focal point for improving and extending the use of social research and evidence in the UK’. For ‘social research’, read ‘research’ in general, as this new discussion paper contains valuable and actionable advice for anyone seeking to ensure their work is effectively applied. I commend the paper to you all.

As you will see, the paper is produced jointly with the Wellcome Trust, the What Works Centre for Wellbeing, and the EPPI-Centre at University College London (UCL), and is essentially a meta-review of what others consider to be a sound foundation for effectively promoting a piece of research.

The paper is based on two phases: a systematic review of systematic reviews, and a scoping review of other social science-based interventions. The main aim was to discover whether research was being used, not whether it had made a difference, nor whether any changes identified could be attributed to greater use of evidence.

The paper applies a framework comprising six categories of evidence-use mechanisms which, as the authors note, are often used in combination:

  • Awareness – building awareness and positive attitudes towards evidence use
  • Agree – building mutual understanding and agreement on policy-relevant questions and the kind of evidence needed to answer them
  • Access and Communication – providing communication of, and access to, evidence
  • Interact – facilitating interactions between decision makers and researchers
  • Skills – supporting decision-makers to develop skills accessing and making sense of evidence
  • Structure and Process – influencing decision-making structures and processes.

Each section is supported by illustrative case studies showing best practice gleaned from around the world.

In conclusion, the authors state how surprised they were by the diversity of approaches they found being used to encourage the use of research by decision makers, across the more than 150 interventions they studied – including Delphi panels, red-teaming and dogfooding (you’ll need to read the report for an explanation of the latter two terms!).

However, their main conclusion is that more impact evaluations, and greater clarity, are needed to increase evidence use, along with more research into the researcher–user interface. And, of course, they recommend using control groups to provide a more robust assessment of what does, and doesn’t, work, plus cost–benefit analysis to assess added value.

They also stress the need for more research into what constitutes reliable and relevant evidence. Finally, they point out that many of the cases and methods studied came from the health sector. Obviously, they would like a wider range of studies from the broader social policy field, but I would encourage readers to think of examples from the commercial research sector.

I would love to publish in IJMR examples from the field of market research, perhaps as Forum articles.

This is a ground-breaking report, as the authors state, and I believe that researchers in market research will find value in its findings and its examples of what works best. We need to learn from, and benefit from, the experiences of other sectors.

P.S. You might also be interested in another report published earlier this month, Missing evidence: an inquiry into the delayed publication of government-commissioned research, which includes the following principles within its recommendations:

1. Prompt and full publication of government research is a matter not of contract but of public duty. While research contracts will necessarily vary in their provisions, all contracts should spell out this obligation of principle, which reflects the departmental rules governing external research.

2. No redactions should be made in published research except for verifiable legal or security reasons such as data protection or national security.

3. Save in wholly exceptional circumstances, publication of external research should precede or, at latest, accompany promulgation of any policy initiative which is presented as dependent on it, or which clearly is intended to be.

4. Conflict with current or impending policy initiatives is not an acceptable reason for delaying or withholding publication of external research. In such situations government should be prepared to publish its reasons for disagreement and let any debate be aired in public.

5. Government and researchers should adopt more of the practices described in the preceding paragraphs in commissioning, conducting and communicating problematical research.


How to access the International Journal of Market Research (IJMR)

The journal is published by SAGE; MRS Certified Members can access it on the SAGE website via this link.

Go to the IJMR website
