Is Your Research Lying?
Consumer research is dangerous.
We all know the upside — how research can unearth insights to transform a campaign. But let’s pause for a moment to consider the risks: how well-meaning research can also shortchange new ideas, reinforce bias, and mislead us with meaningless artifacts and fake precision. Here are five ways to filter out the fake.
- Dismiss false precision. Should we care if five members of an eight-person focus group favor one idea over another? The sample is too small to be representative. Yet I hear researchers declare “winners” or translate 5/8ths into a 62.5% majority. In qualitative research, your focus should not be on the numbers; it should be on spotting patterns in how people make a decision. What’s more, reporting a percentage from a sample smaller than 100 people is flat-out lying. It’s fake precision (the quick sketch after this list shows just how wide the uncertainty really is). Claiming that insignificant differences matter is malpractice.
- Demand real precision. Research toplines typically boil down many small findings into a few broad takeaways to focus on the decision at hand. Always unpack these generalizations to gauge what is truly known. Did a new idea really “fail,” or did we hear about an issue that led participants to dismiss it? Maybe what’s needed is a fix, not a rejection. I am always wary of the phrase “the research says …” Research doesn’t “say” anything; researchers do. Dig deeper: What did participants actually say, what was the precise wording of the question, and why do the subjects appear to have reacted the way they did?
- Consider the context. Consumer research is always artificial: It’s a set-up, not real life. Consider the impact of the contrived setting — what will people admit to publicly, and what gets triggered by the wording of a question or the makeup of a group? We can get faked out by failing to recognize, for example, that people will never cite their subconscious motivations. How could they? They are subconscious.
- Don’t believe everything you hear. Research participants will always offer a logical explanation for why they do what they do. That doesn’t mean it’s true. They may be fibbing under social pressure, or they may be lying to themselves. What they say does matter, but not at face value. That’s why good researchers rely on guidance like the Behavioral Determinants Framework to interpret the emotions or unconscious factors that can hide beneath the surface. It’s also why asking people to explain their own behavior is so often misleading and leads to bad campaign decisions. Better questions, guided by theory or a framework, will produce more accurate and conclusive explanations of what’s really going on.
- Don’t just go with what testing subjects think is persuasive. This is how the creative testing gauntlet kills breakthrough ideas: asked what’s persuasive, participants pick what looks like persuasion. That punishes ads that don’t seem like ads at all (often a good strategy for getting people to pay attention); consumers may dismiss them as unpersuasive precisely because they are not what they were expecting. If your testing is well planned, your questions should track with your logic model (each step of how you expect a concept to influence behavior). If you expect a change in social norms to influence people, for example, then a measured change in perceived descriptive or injunctive norms matters much more than whether participants predict the ad will influence the behavior.
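To make the false-precision point concrete, here is a minimal sketch of the arithmetic behind the “5 of 8” example above. It assumes nothing beyond those two numbers; the choice of a Wilson score interval is simply one standard way to show how much uncertainty a sample that small carries.

```python
# Illustrative sketch: why "5 of 8 = 62.5%" is false precision.
# The counts (5 favorable out of 8 participants) come from the example above;
# the Wilson score interval is my own choice of method for illustration.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

low, high = wilson_interval(5, 8)
print(f"5 of 8 = {5/8:.1%}, but the 95% interval runs from {low:.0%} to {high:.0%}")
# -> roughly 31% to 86%: far too wide to declare a "winner"
```

With only eight participants, the plausible range runs from roughly three in ten to nearly nine in ten — which is why a “62.5% majority” tells you almost nothing.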
The bottom line here is simple: Plan your research carefully and dig deeper into the results to make sure you know precisely what you’re learning. It’s easy to assume you’re seeing something that’s not really there. And that may be worse than doing no research at all.
Peter is the founder & chief insights officer of Marketing for Change.