'Landmark papers' is a new feature of this website which highlights a particularly noteworthy paper from the extensive IJMR archive. These papers will be selected on a quarterly basis.
The Data Reduction Approach to Survey Analysis[Download PDF] Martin Collins This paper illustrates the application of the principles of 'Data reduction', expounded by Ehrenberg, to the analysis of survey data. A single table with a simple structure is used as an initial demonstration. There then follows a detailed description of the analysis of a major survey with a more complex data structure.
Qualitative market research: a conceptual analysis and review of practitioner criteria[Download PDF] John Colwell Despite the commercial success of qualitative market research, doubts have been expressed from time to time about the competence of some practitioners. The dilemma for a client is that no obvious external criteria exist for evaluating the quality of research, referred to in this paper as the 'yardstick problem', and so incompetent practitioners cannot easily be identified. An attempt is made to establish practitioner criteria, first through an examination of the nature of qualitative research, and secondly by assembling information from a search of the relevant literature. Implications for practitioner selection and for a raising of standards are discussed in the context of this analysis.
Can we at last say goodbye to social class?[Download PDF] Sarah Brien and Rosemary Ford This paper looks at some of the current ways of classifying people and arose out of a piece of work which Granada Television commissioned from BMRB at the beginning of 1987. Granada were concerned about the increasing media coverage given to the so-called North/South divide and wanted to explore all aspects of people's lifestyles and income both in the North West and in the country as a whole. The paper first considers the discriminatory powers of the different classification variables. For the second stage the original respondents were re-contacted to explore the replicability and stability of these variables.
The utility to market research of the classification of residential neighbourhoods[Download PDF] Ken Baker, Colin McDonald and John Bermingham There is a potential conflict between pure research ideals and the cost of carrying them out. There would be considerable gains in using an unclustered simple random sample of face-to-face interviews, especially when measuring highly geographically clustered variables. However, for a sample of 1,000 this would mean having 1,000 interviewers carrying out just one interview in each of 1,000 places (sampling points). Traditionally in market research we use two-stage sampling, with say 100 interviewers in 100 separate sampling points carrying out 10 interviews each. The cost savings are obvious, but how does this affect the precision of the survey results? Although the theory is outlined fully in splendid texts on sampling by Kish and Cochran, it seemed to the author of this paper that the implications were not fully understood by practising market researchers. In 1973, BMRB inherited the contract for the National Readership Survey (the contract occasionally changed hands in those days). At about the same time a van load of hard copy volumes of data arrived in Ealing. It was BMRB's copy of the MRS-commissioned analysis of the 1971 census, containing vast arrays of census counts at ward and parish level. Various members of BMRB undertook the laborious task of transferring what we thought were key measures (percentage of social class I and II, population density, tenure type, car ownership, etc.) onto punch cards. After this task was complete we had a sampling frame of 10,000+ cards representing each of the wards and parishes in Great Britain, which could of course be organised by computer in any way deemed suitable for the job in hand.
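The cost-versus-precision trade-off described above is conventionally quantified by the design effect. A minimal sketch, assuming the standard Kish approximation deff = 1 + (m - 1)ρ (this formula and the illustrative value of ρ are not taken from the paper itself):

```python
# Sketch of the clustered-sample precision trade-off discussed above.
# The approximation deff = 1 + (m - 1) * rho is the standard one from
# Kish; the intraclass correlation rho below is an assumed value,
# chosen purely for illustration.

def design_effect(m, rho):
    """Approximate variance inflation of a two-stage clustered sample
    relative to a simple random sample of the same size, where m is
    the number of interviews per sampling point and rho is the
    intraclass correlation of the variable being measured."""
    return 1 + (m - 1) * rho

def effective_sample_size(n, m, rho):
    """Nominal sample size n deflated by the design effect."""
    return n / design_effect(m, rho)

# 1,000 interviews spread over 100 sampling points of 10 interviews
# each, for a variable with an assumed rho of 0.05:
deff = design_effect(m=10, rho=0.05)
n_eff = effective_sample_size(1000, m=10, rho=0.05)
```

For a geographically clustered variable (larger ρ) the effective sample size shrinks further, which is exactly why the choice of sampling points mattered so much to the authors.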
This was soon applied to the design of the National Readership Survey, BMRB's omnibus surveys and a series of ad hoc surveys, and it produced more stable data than was produced by any of BMRB's previous master samples. It is likely that this was the first nationwide computer automated sampling frame produced in the industry and, hardly surprisingly, BMRB were hooked on computer-based census systems by the mid-1970s. By early 1978 we had heard of the work of Richard Webber at the Centre for Environmental Studies. In those days computing power beyond the scope of a market research company was necessary to produce a cluster analysis on 40 census variables for each ward and parish in Great Britain, but that was what Richard, using Government computing resources, had produced. Within a few minutes of listening to a lecture given by Richard it became obvious that something remarkable had been produced. The 36 cluster solution, entitled 'A classification of residential neighbourhoods', was much more full-bodied than the BMRB system. In particular, relatively prosperous council estates could be differentiated from estates of high stress, and fashionable, innovative inner city areas could be differentiated from more conservative suburbia - and so on. For the first time, the sampler could visualise the areas sampled. Within a few days BMRB had committed themselves to investing the princely sum of £160 for a classification. As soon as practicable a computerised sampling frame based on the classification was produced, and this provided the most stable series of samples - and over the years such systems have become the standard in market research. Meanwhile, whilst the sampling frames were being produced, the task of backcoding the 1978 TGI - adding the 36 cluster solution - was undertaken. By autumn 1978, for the first time, various members of BMRB were looking at vast quantities of data in a way never previously possible.
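As a rough illustration of the kind of analysis involved, grouping areas described by census variables into neighbourhood types can be sketched with k-means clustering. This is an assumed stand-in: the text does not specify which algorithm Webber used, and the data below are invented.

```python
# Hypothetical sketch of clustering areas on census variables.
# The algorithm (k-means) and the toy data are assumptions for
# illustration; Webber's actual method is not described in the text.
import random

def kmeans(points, k, iterations=20, seed=0):
    """Cluster points (lists of floats, e.g. census percentages per
    ward) into k groups; returns (centroids, groups)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iterations):
        # assign each point to its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            groups[nearest].append(p)
        # move each centroid to the mean of its group
        for i, g in enumerate(groups):
            if g:
                centroids[i] = [sum(col) / len(g) for col in zip(*g)]
    return centroids, groups

# Toy wards described by two made-up variables, falling into two
# obvious neighbourhood types:
wards = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
         [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
centroids, groups = kmeans(wards, k=2)
```

The real exercise ran over roughly 40 variables and 10,000+ wards and parishes, which is why it needed Government-scale computing in 1978.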
Some of the juicier findings are contained within the paper, but suffice it to say that it became apparent that here we had a system that could revolutionise marketing, in particular whenever the question 'where should I?' was asked. Shortly after the 1979 Conference paper, BMRB learned of the pioneering work of CACI, which was beginning to market computer analysis of census information for any shape or size of area requested by the client. It was clear that CACI had the final piece of the jigsaw - adding the classification (by then known as ACORN) to their list of census variables rendered such systems totally actionable, and by late 1979 the geodemographic industry had been well and truly born. In the late 1990s geodemographic systems proliferated, and it is likely that they will still play a major part as a building block in targeting systems - even in the days of giant databases and integrated targeting methods. As a means of generating highly sophisticated sampling frames, geodemographics are also likely to provide a very stable underpinning to the market research industry for some years to come. Looking back at the 1979 paper, the pleasing thing from the authors' point of view is how little of the paper would have been altered with the benefit of hindsight. Geodemographics, after all, are highly practical examples of sophisticated simplicity, and we believe that this underlying simplicity has contributed greatly to their success.
Improving the interface between the profession and the university[Download PDF] Miriam Catterall and William Clarke Market research is all too often portrayed in universities as a largely technical activity that services the information needs of marketing clients. This portrayal does not adequately reflect its applications beyond the marketing department or its impact on wider society. Practitioners and academics, working co-operatively, need to identify how the current curriculum might be broadened to incorporate more innovative and holistic perspectives and to regenerate academic research in market research.
Testing Nine Hypotheses About Quota Sampling[Download PDF] Elinor Scarbrough and Catherine Marsh Hypotheses about bias in quota samples, drawn from previous literature, are tested with data from a survey in which half the interviews were secured by random sampling and half by quota. A number of large differences were found between the two samples, indicating the need to refine received wisdom about quota samples locating the accessible and to reconsider the social composition of quota samples. Recommendations are made about methodological research and the conduct of fieldwork in social surveys.
The paradox of memory in market research[Download PDF] Martin Simmons and Henry Durant The results of modern psychology on various aspects of memory - recognition, retention and recall - are all of interest but of varied practical use to market researchers. The problem of respondent memory in market research is illustrated from behaviour studies on readership and purchasing habits. Three alternatives are available to the researcher in handling memory. He can devise techniques which eliminate any reliance on memory. He can attempt to ensure its accuracy by reducing reliance on memory. Alternatively, by the use of aids and questionnaire techniques, he can stimulate memory to the limit which still returns accurate results. The clear advantage is that as much information as possible is extracted at the interview. This paper illustrates the use of each approach - elimination, reduction and stimulation. The advantages and drawbacks of each are examined. Some new ideas and developments are raised for discussion. Conclusions are drawn for the future about the correct handling of memory.