Whether due to uneven respondent distribution or poor design (or both!), survey bias can create a headache for health care market researchers and the companies that rely on findings to create better products and services. Here, Thrivable shares strategies to help mitigate the four main types of bias. Part II offers specific tips on how to structure survey questions to help avoid bias.

Understanding survey bias and the challenges it creates

Survey bias, which includes a range of conditions that influence the responses received from participants, can lead to all sorts of issues for researchers. The causes are typically poor survey design (order bias) or uneven distribution among respondents (nonresponse, response, and sampling bias), either of which can result in the following: 

  • An increase in survey dropout. When respondents struggle through poorly designed surveys, they are less likely to complete them. More specifically, questions that don’t make sense or answer choices that fail to accurately represent the respondents’ experiences lead them to drop out mid-survey. 

  • Inaccurate data collection and analysis. Any design or distribution issues will undoubtedly affect accuracy when it comes to data collection and analysis. In addition to wasting the time and resources associated with survey execution, market researchers risk costly consequences if inaccurate data is used to make decisions about health care products and services. 

Strategies to avoid the most common types of survey bias

Of the four main types of survey bias, order bias happens during survey design and development. The remaining biases — nonresponse, response, and sampling — occur later in the process when respondents have been selected and are completing the survey. 

Order Bias

Avoiding the trap of order bias requires close attention to the order in which questions are asked, to ensure respondents aren’t influenced by one question as they answer the ones that follow. Common examples of order bias include:

  • Assimilation effect. In this instance, researchers need to think carefully about how related questions might affect the thought process of respondents. Consider the potential impact of asking respondents “How happy are you overall?” immediately followed by “How happy are you with your physical health?” It’s likely that a good portion of respondents will feel differently about their health status if they are first asked to think about their overall happiness.  

  • Contrast effect. A similar problem arises when respondents are asked “How much did you like health care app A?” followed directly by “How much did you like health care app B?” Those who really liked, or disliked, health care app A may end up rating health care app B more, or less, critically simply because of how they felt about A. 

  • Primacy/recency effect. Providing too many options to choose from when answering a question can also influence which option respondents choose. Primacy happens when individuals consistently choose one of the first options provided, while recency occurs when individuals consistently choose one of the last options provided.  

Ways to avoid order bias

Fortunately, there are a number of things market researchers can do to reduce order bias. For starters, make sure questions aren’t ordered in a way that “primes” respondents to answer based on what they’ve already been asked. Grouping survey items by topic can also help, and, if possible, leave demographic questions until the end. Also, limit the number of scale or rating questions you use in a survey, randomize your answer options, and keep response choices to a minimum. 
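For teams whose survey tooling supports it, randomizing answer order can be as simple as a per-respondent shuffle. The sketch below is a hypothetical helper, not part of any survey platform’s API; seeding by respondent ID is one way to keep the order stable if a respondent reloads the survey.

```python
import random

def randomized_options(options, respondent_id):
    # Shuffle answer options per respondent to counter primacy/recency
    # effects. Seeding with the respondent ID keeps the order stable if
    # the same respondent reloads the survey. (Illustrative helper, not
    # a real survey-platform API.)
    rng = random.Random(respondent_id)
    shuffled = list(options)  # leave the caller's list untouched
    rng.shuffle(shuffled)
    return shuffled

scale = ["Very satisfied", "Satisfied", "Neutral",
         "Dissatisfied", "Very dissatisfied"]
print(randomized_options(scale, respondent_id=42))
```

Every respondent sees the same choices, just in a different order, so no single option benefits from always appearing first or last.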

Nonresponse Bias

Nonresponse bias happens when those who completed your survey differ systematically from those who did not, even though the survey was distributed to a diverse group. For example, nonresponse bias is at work when most respondents end up being female even though no parameters were placed on gender. 

Factors behind nonresponse bias can vary significantly, from people deciding not to participate because they aren’t interested in the topic to simply forgetting to complete the survey. Or, depending on how the survey was distributed, they might never have received the invitation to participate, or may have lost it before completing the survey.

Ways to avoid nonresponse bias

There are a number of measures researchers can take to guard against nonresponse bias. In addition to careful survey design, try sending out pre-notifications and reminders to make sure all potential respondents are aware of the survey.

Reminding respondents that their responses will be kept confidential and that privacy practices are in place, especially those relevant to HIPAA, is also highly recommended. Other effective strategies include offering attractive incentives and keeping all surveys as short as you possibly can while still adequately addressing your research goals.

Response Bias

Response bias is rooted in the behaviors of individual respondents, which makes it a particular challenge, although poor questionnaire design plays a role here, too. While the bias in a respondent’s answers may well be unintentional, the outcome is the same: the answers supplied aren’t true. Among the different types of response bias:

  • Acquiescence bias. This is the bias that occurs when respondents tell you what they think you want to hear, even if it’s not really what they believe. It comes from our desire as humans to agree with one another and feel that we belong.

  • Demand characteristics. Simply being aware that they’re part of a study can create bias because respondents try to answer questions the way they think the researchers want them to answer. To illustrate, participants might avoid sharing their true opinions about a product or service because they’re afraid that they’ll face repercussions for expressing negative views. 

  • Desirability bias. Also referred to as social desirability or conformity bias, desirability bias is when respondents select answers that they wish applied to them but don’t. Doing so is a form of self-preservation and a way to avoid reporting traits the participant considers undesirable. It often leads to exaggerations, such as someone saying he works out every day for 30 minutes when once a week is more accurate.  

  • Extreme response. This refers to the bias that happens when respondents pick only the “extreme” options on a Likert scale, such as always selecting “strongly agree” or “strongly disagree.” Cultural or educational differences among respondents can drive extreme responding. (Conversely, some respondents may decide to always select “neutral.”) 

Ways to avoid response bias

Ensuring participants feel comfortable providing honest answers is the best approach to mitigating response bias. So, when this type of bias is a concern, try to rely on self-administered surveys and emphasize anonymity and confidentiality. If your survey is designed to gather insights regarding a specific health care product or service, make it a blind study, and include open-ended questions, as these can also be effective in reducing bias.
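Beyond design-time safeguards, patterns like extreme responding and straight-lining can also be screened for during analysis. The sketch below is illustrative only; the 0.9 cutoff and the data shape are assumptions, not industry standards.

```python
def flag_suspect_respondents(ratings, scale_min=1, scale_max=5,
                             extreme_cutoff=0.9):
    # ratings maps a respondent ID to that person's list of Likert answers.
    # Flags straight-liners (the same answer everywhere) and respondents
    # whose share of endpoint answers meets the cutoff. The 0.9 cutoff is
    # an illustrative assumption, not a fixed standard.
    flagged = {}
    for rid, answers in ratings.items():
        extreme_share = sum(a in (scale_min, scale_max)
                            for a in answers) / len(answers)
        if len(set(answers)) == 1 or extreme_share >= extreme_cutoff:
            flagged[rid] = round(extreme_share, 2)
    return flagged

responses = {
    "r1": [5, 5, 1, 5, 1],   # all endpoint answers -> flagged
    "r2": [3, 3, 3, 3, 3],   # straight-lining -> flagged
    "r3": [2, 4, 3, 5, 2],   # mixed answers -> kept
}
print(flag_suspect_respondents(responses))  # → {'r1': 1.0, 'r2': 0.0}
```

Flagged respondents shouldn’t be dropped automatically; a manual review of their full responses is usually the safer next step.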

Sampling Bias

Often referred to in medical research as ascertainment bias, sampling bias happens when the sampling probability is higher or lower for a certain segment of the intended survey population. This creates skewed results and, if the misrepresentation is significant, challenges when trying to generalize results to the broader population.  

Self-selection bias is a common cause of sampling bias: the people who are most passionate about the topic are the ones who respond. Undercoverage bias occurs when a segment of the intended population is left out, usually due to a lack of access to the survey. Survivorship bias refers to the practice of including only “surviving” participants, such as current or long-standing customers, instead of also including former customers, who are likely to have different opinions. 

Ways to avoid sampling bias

Researchers can reduce sampling bias by sending their survey through multiple channels (social media, email blasts, survey websites, messaging apps, QR codes, text messaging, etc.). Use the most accessible platform to collect responses and keep the survey open for an appropriate length of time. And when surveying a population to gauge customer satisfaction, include exit surveys in the analysis to ensure you’ve gathered perspectives from both current and former customers. 
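One concrete way to keep a segment like former customers from being crowded out is stratified sampling: draw a fixed-size random sample from each segment instead of sampling the panel as a whole. This sketch assumes a simple list-of-dicts panel with a hypothetical "status" field; it is not a real panel schema.

```python
import random

def stratified_sample(panel, strata_key, n_per_stratum, seed=0):
    # Draw an equal-sized random sample from each stratum of a respondent
    # panel (e.g. current vs. former customers) so that no segment is
    # left out. Field names here are illustrative assumptions.
    rng = random.Random(seed)
    strata = {}
    for person in panel:
        strata.setdefault(person[strata_key], []).append(person)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(n_per_stratum, len(group))))
    return sample

panel = (
    [{"id": i, "status": "current"} for i in range(100)]
    + [{"id": 100 + i, "status": "former"} for i in range(20)]
)
sample = stratified_sample(panel, "status", n_per_stratum=10)
```

Even though former customers make up only a sixth of this hypothetical panel, both groups contribute ten respondents to the sample, so their perspectives carry equal weight in the analysis.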

Avoiding the four most common types of survey bias is essential to accurate data collection and analysis. But it’s also just the beginning. Every single survey question must also be vetted from the perspective of bias. Check out Part II for tips to help you develop questions that will uncover the meaningful insights you’re after. 

About the author

Maria Muccioli, PhD

Research Lead

Maria brings clinical research expertise to her work overseeing health care market research programs for Thrivable customers. She earned a PhD in Molecular and Cellular Biology from Ohio University and was also a postdoctoral researcher at the Ohio State University and a fellow at the Brigham and Women’s Hospital and Harvard Medical School.