A question I often hear is ‘Are Insight Communities biased?’ The short answer is yes, but so is everything else!
However, the longer answer is more interesting and more useful. All market research is biased, for a variety of reasons (outlined below), but market researchers have found ways of recognizing, controlling, and working with these challenges to produce useful, predictive research. The same is true for insight communities, as described below.
The Biases in ALL Market Research
The key issues are:
1. Most people don’t speak to market researchers. A recent Pew study into telephone research shows that over 90% of people reject the invitation to be surveyed when contacted. Online access panels represent less than 0.5% of the population. Face-to-face research is impossible in some areas and suffers from very low response rates where it is still possible. River sampling, for example Google Consumer Surveys, can attract response rates of less than 0.1% (i.e. 99.9% of those asked do not take part).
2. Respondents do MANY surveys. A major study by ARF (Advertising Research Foundation) into online access panels showed that many respondents do hundreds of surveys a year (more than two a week) and are members of multiple panels. When one company surveys somebody about breakfast cereals, some of the participants could have done another cereal study that week, via another panel.
3. Research participants understand marketing and research. Partly through TV programs like Mad Men, partly because so many graduates have taken marketing modules at university, and partly because of the number of surveys and focus groups they have participated in, most research participants are aware of what research is and what it is trying to measure. This creates another source of bias, since many of the research models in use assume that participants do not know what is being probed, or how.
4. Research changes the way people think. Many of the recent findings from Behavioural Economics and neuroscience show that most decisions are made at the automatic or non-conscious level, but market research typically makes people pause and think about the issues involved. People like Jonah Lehrer have shown that this changes the responses.
So, Why Does Market Research Work?
Whilst a few people challenge whether market research does work, its good track record is pretty clear. Election polling is the most public of the tests we put our tools to, and the results of election polling are right a lot more often than they are wrong (which is why the errors are such big news). Package testing, volumetrics, and most ad testing have shown their predictive worth, as has much of the strategic research that has been conducted over the last 30 years. Famous exceptions, such as New Coke, are famous because they are rare - if they were common they would not be famous.
Given that samples are not representative, responses are biased, and people are bad at forecasting their own actions, why does market research work so much of the time? I believe there are four interconnected reasons:
1. People are much more homogeneous than we tend to think they are. Techniques such as Latent Class Analysis applied to choice data show that in most markets there are only a few ‘types’ of consumer. If people are fairly similar (in terms of needs, motivations, etc.), the rank order of results tends to be right, even when the sample does not match the whole population, as long as the sample represents a typical part of the population.
2. People tend to forecast what other people do. Although the wording of questions typically asks people to predict their own behaviour, the data they produce tends to be more predictive of the group they belong to than of their own personal outcomes. This effect is made more explicit in approaches such as prediction markets.
3. Researchers design experiments that are likely to work. Over time, market researchers have worked out the best way to recruit samples, ask questions, and interpret answers to make them predictive of the real world. Election polling and volumetric testing are excellent examples of this. For example, one of the key tools is to carefully balance samples from project to project, to hold as many variables as possible constant.
4. Research is benchmarked. Most research does not directly measure a real world effect; it measures something which correlates with the real world. Good examples are customer satisfaction and ad testing. In both of these cases, the loyalty index and ad score are not real phenomena, but increases and decreases in the scores tend to correlate with increases and decreases in real-world factors such as sales and churn.
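The benchmarking idea in point 4 can be sketched in code: a relative measure such as an ad score is useful only if its movements track a real-world outcome, and a simple correlation check is one way to verify that. The scores and sales figures below are invented purely for illustration; they are not from any study mentioned in this article.

```python
# Sketch: check whether a benchmarked research score tracks a real-world outcome.
# All data here is hypothetical, invented for illustration only.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ad-test scores and the % sales change seen after each campaign.
ad_scores = [62, 48, 71, 55, 66, 43]
sales_change = [3.1, -0.5, 4.2, 1.0, 2.8, -1.2]

# A strong positive correlation suggests the score, while not a "real"
# phenomenon itself, is a usable proxy for the real-world effect.
r = pearson(ad_scores, sales_change)
```

With these made-up numbers the correlation comes out close to 1, which is the situation the text describes: the absolute score means little, but its rises and falls mirror the rises and falls in sales.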
OK, So What About Insight Communities And Bias?
Insight communities define a spectrum that runs from the smaller, qualitative communities through to the larger quant/qual communities (e.g. from handfuls of members through to hundreds of thousands).
In a qualitative community, the two key extra biases (in addition to the ones listed earlier) are:
1. The members are usually customers or users of the brand/organization that owns the community.
2. The members become very focused on the brand and community, increasing their knowledge and awareness of issues surrounding the products/services being researched.
Over time, a long-term qualitative community comes to resemble a board of lay assessors, or a customer advisory board: much more knowledgeable than the general public, but less knowledgeable than the staff. These communities can be a great resource for tasks like ideation and screening out bad ideas.
The larger quant/qual communities are reported by GRIT to be the fastest-growing new research method and are recognized as suitable for a wide range of research purposes (for example, ESOMAR’s Answers to Contemporary Questions highlights that there are relatively few things a community can’t be used for). The members of these communities are bound together by their interest in the topic, which usually means the brand, product, or service. As with the smaller qual communities, the members are normally customers or users of the brand. However, because there are more members, and because more of the projects are surveys (over 80% of quant & qual community research tends to be quantitative), it is possible to reduce sensitization and keep the panel more balanced, especially over time.
Compensating for Bias
Market research can’t remove bias (see Five Common Myths About Bias and Market Research), but what it can do is compensate for bias, and even utilize the biases.
One example of the way that insight communities and bias are used proactively is with the testing of new concepts. Many community owners have found that communities tend to be more positive than the general population. So, any concept that is tested on the panel and which does badly can be disregarded, i.e. the community is used as a quantitative screener, or as a source of ideas for improvement. Benchmarking such tests allows the community owner to have some confidence in which ideas to proceed with. However, if volumetric estimates are needed, then a more representative sample is used to re-test the winning ideas.
Another way that research compensates for bias is to focus on relative measures, as opposed to absolute measures. Research published by the community platform Alida shows that results from an insight community tend to be correlated with those from an independent sample, but can be higher or lower in absolute terms.
Another key point about compensating for bias relates to research trade-offs. For example, insight communities facilitate iterative, agile research. In an agile project, the questions tend to be simpler, and the findings that are needed are often directional; the project achieves its ends by working iteratively. By contrast, a traditional research project puts all of the research eggs in one basket, where errors or biases cannot be remedied.
In order to compensate for biases when using an insight community, the two key steps are:
1. Keep the biases constant (see the notes below on working with an insight community).
2. Be aware of the biases, for example by contrasting results with outcomes (real world, in-market outcomes) and comparing results with other studies (from other sources).
The need to keep biases constant is another relative advantage of an insight community compared with an access panel. With an insight community, you control who is in the community, where the samples are drawn from, and the sorts of projects that are run. When you use a third-party access panel you do not know who the people are, how they were recruited, or what sorts of other projects they have been subjected to.
Key Tips For Working With An Insight Community
In order to recognize the biases, to hold them constant, and to factor them into your research, try to do the following:
1. Keep adding new people to the community on a regular basis, from as wide a range of sources as possible.
2. When conducting the more sensitizing types of research, such as focus groups, mystery shopping, mass anthropology, or smartphone ethnography, try not to use the same willing members of the community each time; spread the work around.
3. Have a variety of initiatives to reward and engage members, aiming to provide more than a newsletter and a cash scheme. Examples of engagement include better surveys, fun polls, feeding back the results of some studies to members, showcasing great members on the portal, and thinking laterally (for example, announcing special offers or promoting videos). The more types of engagement you offer, the broader your base of active members will be.
4. Measure the impact of things like tenure (how long people have been on the panel) on responses and seek to keep the total picture of the panel constant. Measuring panel health, and acting on the results, is key to having balanced samples from your community.
5. Benchmark the community against the wider world. Run some of the community studies via online access panels or via CATI and build up an understanding of the differences. Is the community more cynical or more supportive? Is the community more open to new ideas or less open, and by how much?
6. Benchmark results in the community. At the end of the year you might have tested twenty ideas, ten ads, and six new store layouts. All new tests can be evaluated against these results. Build an internal reference for good, bad, average, and excellent scores.
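Tip 6 can be turned into a tiny norm-building sketch: collect the scores from past tests, derive quartile cut-offs, and classify each new test against them. The scores and category labels below are hypothetical, chosen only to illustrate the idea of an internal reference for bad, average, good, and excellent results.

```python
# Sketch: build internal norms from past community test scores (hypothetical data).
from statistics import quantiles

# Hypothetical top-box scores from 20 concept tests run on the community this year.
past_scores = [31, 44, 52, 38, 61, 47, 55, 40, 66, 49,
               35, 58, 43, 50, 63, 46, 39, 54, 57, 42]

# Quartile cut-offs become the internal reference: below Q1 = "bad",
# Q1 to median = "average", median to Q3 = "good", above Q3 = "excellent".
q1, q2, q3 = quantiles(past_scores, n=4)

def classify(score):
    """Place a new test score against the community's own historical norms."""
    if score < q1:
        return "bad"
    if score < q2:
        return "average"
    if score < q3:
        return "good"
    return "excellent"
```

Used this way, the community's known positivity bias stops mattering: a new concept is judged against other concepts tested on the same (consistently biased) panel, not against an absolute standard.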
The Only Danger is Ignoring Bias
Is face-to-face research biased? Yes. Is CATI biased? Yes. Are online access panels biased? Yes. Are insight communities biased? Yes.
Once we recognize bias and compensate for it, we can produce useful and insightful research.