Papers based on attempts to replicate studies that have already been conducted and published are often unwelcome at journals. We published a Viewpoint warning of the dangers of such a policy back in 2011: ‘Publishing replications in marketing’ by Mark Uncles (IJMR, Vol. 53, Issue 5). Uncles stated that ‘Replication is a pillar of normal scientific investigation…’, and asked ‘why should a manager put much (any) reliance on a result that hasn’t been replicated and that may never occur again?’

In effect, Uncles was responding to comments I had made on the importance of replication in my Editorial in the previous issue, having recently reviewed ‘Persuasive Advertising’ by J. Scott Armstrong. Armstrong pointed out that the findings in 60% of replication studies conflicted with the findings of the original studies, and that only 1% of findings published in marketing journals had ever been successfully replicated.

I also drew on the views of Ben Goldacre, citing a case where a journal in the field of psychology had refused to consider publishing a study that conflicted with the findings of an earlier paper it had published, on the grounds that it never publishes replications.

So, a recent headline, ‘60% of psychology research “cannot be replicated”’ (The Guardian, 28th August 2015), was very likely to catch my eye. In this case, the finding came from a study in which 270 scientists, working across five continents, repeated 100 previously published experiments.

The roots of the study lay in increasing concerns about the replicability of studies in the field of psychology (Nosek et al., ‘Estimating the Reproducibility of Psychological Science’, Science, Vol. 349, No. 6251, August 2015). All the original experiments had been published in top-ranking journals in the field.

The Reproducibility Project was an example of community-based crowd-sourcing, with scientists selecting projects from a pool that corresponded with their interests and expertise. They found that surprising results were the hardest to replicate, and that the experience and expertise of those conducting the original experiments had little to do with replicability. 

However, the findings did provide comfort for those who believe that the p value is a useful measure of statistical significance: the analysis showed that a low p value in the original study was fairly predictive of whether it could be replicated. The question was also raised as to whether psychology poses difficulties that might not be present in other sciences, and a similar study is now underway on cancer-related biological research.
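To illustrate the kind of relationship the analysis reported, here is a minimal sketch in Python of how one might tabulate replication rates for studies with very low versus merely marginal original p values. The figures are entirely invented for illustration (they are not the project’s data), and the 0.02 cut-off is an arbitrary assumption.

```python
# Hypothetical sketch: do studies with very low original p values
# replicate more often than those with marginal ones?
# All numbers below are invented for illustration only.

# (original study's p value, did the replication succeed?)
studies = [
    (0.001, True), (0.003, True), (0.008, True), (0.012, True),
    (0.020, False), (0.030, True), (0.041, False), (0.045, False),
    (0.048, False), (0.049, False),
]

THRESHOLD = 0.02  # arbitrary cut-off separating "low" from "marginal"

low = [ok for p, ok in studies if p < THRESHOLD]
marginal = [ok for p, ok in studies if p >= THRESHOLD]

print(f"Replication rate when p < {THRESHOLD}:  {sum(low) / len(low):.0%}")
print(f"Replication rate when p >= {THRESHOLD}: {sum(marginal) / len(marginal):.0%}")
```

On these made-up numbers the low-p group replicates far more often (100% versus 17%), which is the pattern the Reproducibility Project reported: the strength of the original significance level was fairly predictive of replication success.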

However, despite the undoubted value of this study in providing a warning to scientists in any field (including, I would guess, marketing), a co-author of the research, Cody Christopherson, is quoted in a blog on the study by Brian Handwerk as remarking: ‘To get hired and promoted in academia, you must publish original research, so direct replications are rarer.

I hope that, going forward, the universities and funding agencies responsible for incentivizing this research – and the media outlets covering them – will realize that they’ve been part of the problem, and that devaluing replications in this way has created a less stable literature than we’d like’.

So, nothing has changed! Much of what is published as revelatory will never be retested to establish its reliability. In the real world, funding models for academia, and career progression for academics, act against the interest of confirming whether or not experiments produce valid, reliable results. The hope is that conducting and publishing the results of studies such as this will encourage scientists to up their game.

As Uncles concluded back in 2011: ‘replication is about weighing up the body of evidence to reach an informed and considered judgement’. The AAPOR study on non-random sampling methods included a concern about the lack of hypothesis testing in market research, and increasing competitiveness and commercial confidentiality do not encourage a culture of openness and transparency in methodological development.

However, I believe that clients deserve to be able to check whether a proffered methodology is fit for purpose, and replication studies help provide confidence and build trust. We welcome replication studies in IJMR, as Uncles believed all good journals should, but they remain as rare as hen’s teeth!
