Geodemographics - blogs and resources
Visit the Geodemographics Knowledge Base (GKB) for expert blogs and links to useful sources of geodemographic data and knowledge.
In the race to harness the power of generative AI, the MRS CGG's latest report, Inaccuracies of Generative AI Based Search Tools for Extracting Data, authored by Jaan Nellis and Peter Furness, offers a timely and sobering reminder: these tools are only as good as the prompts we give them, and the scrutiny we apply to their answers.
Here we discuss the key takeaways from the report:
Large Language Models (LLMs) like ChatGPT are designed to sound confident. That’s part of their appeal. They generate fluent, well-structured responses that feel authoritative. But as the MRS CGG report demonstrates, this confidence can be dangerously misleading. When asked a simple UK statistics question, multiple LLMs gave plausible-sounding answers, all within expected bounds, but few were correct, and none were consistently accurate.
The problem? These tools are trained to predict language, not truth. They don’t “know” facts in the way a human expert does. They generate responses based on patterns in data, not verified evidence. And when users assume that a confident tone equals correctness, misinformation can spread unchecked.
The report’s core message is clear: the quality of AI output depends heavily on the quality of the input. Vague or poorly framed prompts lead to vague or misleading answers. Specific, detailed prompts (especially those that include context, constraints, or desired formats) are far more likely to yield useful results.
This is not just a technical issue. It’s a human one. Users must learn to “think like a prompt engineer”: to ask better questions, anticipate ambiguity, and guide the model toward clarity.
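As a rough illustration of what the report means by specific, detailed prompts, the sketch below contrasts a vague question with one that adds context, constraints and a desired answer format. The example prompts, the ONS reference and the `send_to_model()` helper are our own illustrative assumptions, not content taken from the report; substitute whichever LLM client you actually use.

```python
# Minimal sketch (illustrative only): the same question asked two ways.
# The prompts and the send_to_model() helper are hypothetical placeholders.

vague_prompt = "What's the population of Manchester?"

specific_prompt = (
    "Using the ONS mid-2021 population estimates, what was the population of "
    "the City of Manchester local authority district (not Greater Manchester)? "
    "Reply with the figure, the exact source table, and a link so the answer "
    "can be verified."
)

def send_to_model(prompt: str) -> str:
    """Placeholder for a call to your chosen LLM API."""
    raise NotImplementedError("Wire this up to the LLM client you use.")

# Print both prompts side by side to show how much context the second adds.
for label, prompt in [("Vague", vague_prompt), ("Specific", specific_prompt)]:
    print(f"--- {label} prompt ---\n{prompt}\n")
```

Note that the specific version asks the model to return its source alongside the figure, which also makes the verification step discussed next much easier.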
Even with a perfect prompt, the report warns against blind trust. AI-generated answers should be treated as starting points, not final conclusions. Always check the sources. If the model cites data, verify it. If it makes a claim, trace it back. The risk of using false or outdated information (especially in fields like market research, policy, or public communication) is too high to ignore.
Another critical insight from the report is that many generative AI tools are trained on supranational datasets: broad, global information that may not reflect the nuances of national or local contexts. For UK-specific data, this can be a major limitation. The report calls for more rigorous testing of these tools on internal, national datasets before they’re used in decision-making.
The MRS CGG report is not anti-AI. On the contrary, it recognises the enormous potential of generative tools to support research, summarise information, and enhance productivity. But it also insists on a more mature, critical approach to their use.
If we want better answers, we need better questions. And we need to stay curious, sceptical, and evidence-led... even when the machine sounds sure of itself.