In 2008, the voters of the United States elected our first African American president, Barack Obama. It may not surprise you to learn that when President Obama was coming of age in the 1970s, one-quarter of Americans reported they would not vote for a qualified African American presidential nominee. Three decades later, when President Obama ran for the presidency, fewer than 8% of Americans still held that position, and President Obama won the election (Smith, 2009). We know about these trends in voter opinion because the General Social Survey, a nationally representative survey of American adults, included questions about race and voting over the years described here. Without survey research, we may not know how Americans’ perspectives on race and the presidency shifted over these years.
Chapter Outline
- 7.1 Survey research: What is it and when should it be used?
- 7.2 Assessing survey research
- 7.3 Types of surveys
- 7.4 Designing effective questions and questionnaires
Content Advisory
This chapter discusses or mentions the following topics: physically and psychologically abusive behaviors in dating and romantic relationships, racism, mental health, terrorism and 9/11, substance use, and sexism and ageism in the workplace.
7.1 Survey research: What is it and when should it be used?
Learning Objectives
- Define survey research
- Identify when it is appropriate to employ survey research as a data-collection strategy
Most of you have probably taken a survey at one time or another, so you probably have a pretty good idea of what a survey is. However, there is more to constructing a good survey than meets the eye. Survey design takes a great deal of thoughtful planning and often a great many rounds of revision. But it is worth the effort. As we’ll learn in this chapter, there are many benefits to choosing survey research as a method of data collection. We’ll take a look at what a survey is exactly, what some of the benefits and drawbacks of this method are, how to construct a survey, and what to do with survey data once it is collected.
Survey research is a quantitative method in which a researcher poses a set of predetermined questions to a sample of individuals. Survey research is an especially useful approach when a researcher aims to describe or explain features of a very large group or groups. This method may also be used as a way of quickly gaining some general details about a population of interest. In this case, a survey may help a researcher identify specific individuals or locations from which to collect additional data.
As is true of all methods of data collection, survey research is better suited to answering some kinds of research questions more than others. In addition, as you’ll recall from Chapter 5, operationalization works differently with different research methods. If your interest is in political activism, for example, you might operationalize that concept differently in a survey than you would for an experimental study of the same topic.
Spotlight on UTA School of Social Work
Diana Padilla-Medina conducts survey research
Dr. Diana Padilla-Medina, an assistant professor at the University of Texas at Arlington’s School of Social Work, and a team of researchers (Padilla-Medina, Rodríguez, & Vega, 2019; Padilla-Medina, Rodríguez, Vega & Williams, 2019) are conducting a cross-sectional survey with a sample of urban Puerto Rican adolescents living in Puerto Rico to study the behavioral factors that influence adolescents’ intention to use physically and psychologically abusive behaviors in dating and romantic relationships. The study also explores how gender, developmental stage, and exposure to family violence influence both behavioral factors and intentions. A sample of 2,000 adolescents between the ages of 13 and 17 is being recruited from communities across five towns in Puerto Rico using area sampling techniques. Area sampling is a technique often used in survey research conducted in large geographical settings, such as towns and communities. When using this technique, the target population is divided into clusters, or geographic areas, and a random sample of the clusters is selected. In this study, communities served as the geographic areas from which adolescents were recruited. As with any research study involving human subjects, consent and assent were obtained from participants and their parents or caregivers.
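As a rough illustration of the cluster-selection step described above, the sketch below randomly selects a handful of communities from a larger list and then recruits participants only within those selected clusters. The community names, cluster count, and recruitment step are purely hypothetical and are not drawn from Dr. Padilla-Medina’s study.

```python
import random

# Hypothetical sampling frame: a list of communities (clusters) across the towns.
communities = [f"Community {i}" for i in range(1, 41)]

random.seed(42)  # fixed seed so the example is reproducible

# Area (cluster) sampling step: randomly select a subset of the clusters.
selected_clusters = random.sample(communities, k=8)

# Recruitment then happens only within the selected clusters.
for cluster in selected_clusters:
    print(f"Recruit adolescents aged 13-17 from {cluster}")
```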
The survey is being administered in person, with an interviewer posing the questions to adolescents in their homes. For this survey research, an interview schedule (i.e., survey instrument or questionnaire) was developed. When developing a survey instrument or questionnaire, it is important to consider the audience in order to choose the appropriate format for administering the survey. In this case, Dr. Padilla-Medina and her team, based on previous pilot qualitative and quantitative studies, determined that Puerto Ricans prefer to be interviewed in person, particularly when the study topic and questions are personal and sensitive. Additionally, when developing and/or administering a survey instrument, it is important to pilot test the instrument to evaluate its adequacy, psychometric properties, administration time, cost, and any adverse events, and to improve the instrument before using it with a larger sample. For example, in their previous studies, the researchers used focus groups to learn about the adolescents’ perceptions of the readability and understandability of the instrument. In addition, the instrument was pilot tested to assess its psychometric properties and ensure it was ready for use in the larger survey research.
Finally, in any type of survey research, there is the potential for social desirability effects. This means that participants report an answer they think the interviewer will find desirable or acceptable, rather than their “true” answer. These responses can bias the study results. There are several social desirability measures that can be used to reduce the possibility of this type of bias. Considering the sensitive nature of the study topic and questions, the researchers used a social desirability measure to assess whether the adolescents were concerned with social approval.
Key Takeaways
- Survey research is often used by researchers who wish to explain trends or features of large groups. It may also be used to assist those planning some more focused, in-depth study.
Glossary
- Survey research- a quantitative method whereby a researcher poses some set of predetermined questions to a sample
7.2 Assessing survey research
Learning Objectives
- Identify and explain the strengths of survey research
- Identify and explain the weaknesses of survey research
- Define response rate, and discuss some of the current thinking about response rates
Survey research, as with all methods of data collection, comes with both strengths and weaknesses. We’ll examine both in this section.
Strengths of survey methods
Researchers employing survey methods to collect data enjoy a number of benefits. First, surveys are an excellent way to gather lots of information from many people. Some methods of administering surveys can be cost effective. In a study of older people’s experiences in the workplace, researchers were able to mail a written questionnaire to around 500 people who lived throughout the state of Maine at a cost of just over $1,000. This cost included printing copies of a seven-page survey, printing a cover letter, addressing and stuffing envelopes, mailing the survey, and buying return postage for the survey. In some contexts, $1,000 is a lot of money, but just imagine what it might have cost to visit each of those people individually to interview them in person. You would have to dedicate at least a few weeks of your life, drive around the state, and pay for meals and lodging to interview each person individually. We could double, triple, or even quadruple our costs pretty quickly by opting for an in-person method of data collection over a mailed survey.
Related to the benefit of cost-effectiveness is a survey’s potential for generalizability. Because surveys allow researchers to collect data from very large samples for a relatively low cost, survey methods lend themselves to probability sampling techniques, which we discussed in Chapter 6. Of all the data collection methods described in this textbook, survey research is probably the best method to use when one hopes to gain a representative picture of the attitudes and characteristics of a large group.
Survey research also tends to be a reliable method of inquiry. This is because surveys are standardized in that the same questions, phrased in exactly the same way, are posed to participants. Other methods, such as qualitative interviewing, which we’ll learn about in Chapter 9, do not offer the same consistency that a quantitative survey offers. This is not to say that all surveys are always reliable. A poorly phrased question can cause respondents to interpret its meaning differently, which can reduce that question’s reliability. Assuming well-constructed questions and survey design, one strength of this methodology is its potential to produce reliable results.
The versatility of survey research is also an asset. Surveys are used by all kinds of people in all kinds of professions. The versatility offered by survey research means that understanding how to construct and administer surveys is a useful skill to have for all kinds of jobs. Lawyers might use surveys in their efforts to select juries, social service and other organizations (e.g., churches, clubs, fundraising groups, activist groups) use them to evaluate the effectiveness of their efforts, businesses use them to learn how to market their products, governments use them to understand community opinions and needs, and politicians and media outlets use surveys to understand their constituencies.
In sum, the following are benefits of survey research:
- Cost-effectiveness
- Generalizability
- Reliability
- Versatility
Weaknesses of survey methods
As with all methods of data collection, survey research also comes with a few drawbacks. First, while one might argue that surveys are flexible in the sense that we can ask any number of questions on any number of topics, the survey researcher is generally stuck with a single instrument for collecting data: the questionnaire. Surveys are, in many ways, rather inflexible. Let’s say you mail a survey out to 1,000 people and then discover, as responses start coming in, that your phrasing on a particular question seems to be confusing a number of respondents. At this stage, it’s too late for a do-over or to change the question for the respondents who haven’t yet returned their surveys. When conducting in-depth interviews, on the other hand, a researcher can provide respondents further explanation if they’re confused by a question and can tweak their questions as they learn more about how respondents seem to understand them.
Depth can also be a problem with surveys. Survey questions are usually standardized; thus, it can be difficult to ask anything other than very general questions that a broad range of people will understand. Because of this, survey results may not be as valid as results obtained using methods of data collection that allow a researcher to more comprehensively examine whatever topic is being studied. Let’s say, for example, that you want to learn something about voters’ willingness to elect an African American president, as in our opening example in this chapter. General Social Survey respondents were asked, “If your party nominated an African American for president, would you vote for him if he were qualified for the job?” Respondents were then asked to respond either yes or no to the question. But what if someone’s opinion was more complex than could be answered with a simple yes or no? What if, for example, a person was willing to vote for a qualified African American but not if he chose a vice president the respondent didn’t like?
In sum, potential drawbacks to survey research include the following:
- Inflexibility
- Lack of depth
Response rates
The relative strength or weakness of an individual survey is strongly affected by its response rate, the percentage of people invited to take the survey who actually complete it. Let’s say a researcher sends a survey to 100 people. It would be wonderful if all 100 returned completed questionnaires, but the chances of that happening are about zero. If the researcher is incredibly lucky, perhaps 75 or so will return completed questionnaires. In this case, the response rate would be 75%. The response rate is calculated by dividing the number of surveys returned by the number of surveys distributed.
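To make the arithmetic concrete, here is a minimal sketch of the calculation; the numbers simply mirror the hypothetical example above.

```python
def response_rate(returned: int, distributed: int) -> float:
    """Response rate = surveys returned / surveys distributed, expressed as a percentage."""
    return returned / distributed * 100

# The hypothetical example above: 100 surveys sent, 75 completed and returned.
print(response_rate(75, 100))  # 75.0
```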
Though response rates vary, and researchers don’t always agree about what makes a good response rate, having 75% of your surveys returned would be considered good, even excellent, by most survey researchers. There has been a lot of research done on how to improve a survey’s response rate. Suggestions include personalizing questionnaires by, for example, addressing them to specific respondents rather than to some generic recipient, such as “madam” or “sir”; enhancing the questionnaire’s credibility by providing details about the study and contact information for the researcher, and perhaps by partnering with agencies likely to be respected by respondents, such as universities, hospitals, or other relevant organizations; sending out pre-questionnaire notices and post-questionnaire reminders; and including some token of appreciation with mailed questionnaires, even if small, such as a $1 bill.
The major concern with response rates is that a low rate of response may introduce nonresponse bias into a study’s findings. What if only those who have strong opinions about your study topic return their questionnaires? If that is the case, we may well find that our findings don’t at all represent how things really are or, at the very least, we are limited in the claims we can make about patterns found in our data. While high return rates are certainly ideal, a recent body of research shows that concern over response rates may be overblown (Langer, 2003). Several studies have shown that low response rates did not make much difference in findings or in sample representativeness (Curtin, Presser, & Singer, 2000; Keeter, Kennedy, Dimock, Best, & Craighill, 2006; Merkle & Edelman, 2002). For now, the jury may still be out on what makes an ideal response rate and on whether, or to what extent, researchers should be concerned about response rates. Nevertheless, certainly no harm can come from aiming for as high a response rate as possible.
Key Takeaways
- Strengths of survey research include its cost effectiveness, generalizability, reliability, and versatility.
- Weaknesses of survey research include inflexibility and issues with depth.
- While survey researchers should always aim to obtain the highest response rate possible, some recent research argues that high return rates on surveys may be less important than we once thought.
Glossary
- Nonresponse bias- bias that reflects differences between the people who respond to your survey and those who do not respond
- Response rate- the number of people who respond to your survey divided by the number of people to whom the survey was distributed
7.3 Types of surveys
Learning Objectives
- Define cross-sectional surveys, provide an example of a cross-sectional survey, and outline some of the drawbacks of cross-sectional research
- Describe the three types of longitudinal surveys
- Describe retrospective surveys and identify their strengths and weaknesses
- Discuss the benefits and drawbacks of the various methods of administering surveys
There is immense variety when it comes to surveys. This variety comes both in terms of time—when or with what frequency a survey is administered—and in terms of administration—how a survey is delivered to respondents. In this section, we’ll look at what types of surveys exist when it comes to both time and administration.
Time
In terms of time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are those that are administered at just one point in time. These surveys offer researchers a snapshot of how things are for respondents at the particular point in time that the survey is administered.
An example of a cross-sectional survey comes from Aniko Kezdy and colleagues’ study (Kezdy, Martos, Boland, & Horvath-Szabo, 2011) of the association between religious attitudes, religious beliefs, and mental health among students in Hungary. These researchers administered a single, one-time-only, cross-sectional survey to a convenience sample of 403 high school and college students. The survey focused on how religious attitudes impact various aspects of one’s life and health. The researchers found from analysis of their cross-sectional data that anxiety and depression were highest among those who had both strong religious beliefs and some doubts about religion.
Yet another example of cross-sectional survey research can be seen in Bateman and colleagues’ study (Bateman, Pike, & Butler, 2011) of how the perceived publicness of social networking sites influences users’ self-disclosures. These researchers administered an online survey to undergraduate and graduate business students. They found that even though revealing information about oneself is viewed as key to realizing many of the benefits of social networking sites, respondents were less willing to disclose information about themselves as their perceptions of a social networking site’s publicness rose. That is, there was a negative relationship between perceived publicness of a social networking site and plans to self-disclose on the site.
One problem with cross-sectional surveys is that the events, opinions, behaviors, and other phenomena that such surveys are designed to assess don’t generally remain static. They change over time. Thus, generalizing from a cross-sectional survey about the way things are can be tricky; perhaps you can say something about the way things were in the moment that you administered your survey, but it is difficult to know whether things remained that way for long after you administered your survey. For example, think about how Americans might have responded to a survey asking their opinions on terrorism on September 10, 2001. Now imagine how responses to the same set of questions might differ were they administered on September 12, 2001. The point is not that cross-sectional surveys are useless; they have many important uses. But researchers must remember what they have captured by administering a cross-sectional survey: a snapshot of life as it was at the time that the survey was administered.
One way to overcome this sometimes problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys are those that enable a researcher to make observations over some extended period of time. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We’ll discuss all three types here, along with retrospective surveys. Retrospective surveys fall somewhere in between cross-sectional and longitudinal surveys.
The first type of longitudinal survey is called a trend survey. The main focus of a trend survey is, perhaps not surprisingly, trends. Researchers conducting trend surveys are interested in how people in a specific group change over time. Each time the researchers gather data, they ask different people from the group they are describing because their concern is the group, not the individual people they survey. Let’s look at an example.
The Monitoring the Future Study is a trend study that describes the substance use of high school students in the United States. It’s conducted annually by the National Institute on Drug Abuse (NIDA). Each year, NIDA distributes surveys to students in high schools around the country to understand how substance use and abuse in that population changes over time. Recently, fewer high school students have reported using alcohol in the past month than at any point over the last 20 years. Recent data also reflect an increased use of e-cigarettes and the popularity of e-cigarettes without nicotine over those with nicotine. These data points provide insight for targeting substance abuse prevention programs toward the current issues facing the high school population.
Unlike in a trend survey, in a panel survey the same people participate in the survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine trying to administer a survey to the same 100 people every year for, say, 5 years in a row. Keeping track of where people live, when they move, and when they die takes resources that researchers often don’t have. When they do, however, the results can be quite powerful. The Youth Development Study (YDS), administered at the University of Minnesota, offers an excellent example of a panel study.
Since 1988, YDS researchers have administered an annual survey to the same 1,000 people. Study participants were in ninth grade when the study began, and they are now in their thirties. Several hundred papers, articles, and books have been written using data from the YDS. One of the major lessons learned from this panel study is that work has a largely positive impact on young people (Mortimer, 2003). Contrary to popular beliefs about the impact of work on adolescents’ performance in school and transition to adulthood, work in fact increases confidence, enhances academic success, and prepares students for success in their future careers. Without this panel study, we may not be aware of the positive impact that working can have on young people.
Another type of longitudinal survey is a cohort survey. In a cohort survey, the participants have a defining age- or time-based characteristic that the researcher is interested in studying. Common cohorts that may be of interest to researchers include people of particular generations or those who were born around the same time period, graduating classes, people who began work in a given industry at the same time, or perhaps people who have some specific historical experience in common. In a cohort study, the same people don’t necessarily participate from year to year. But each year, participants must belong to the cohort of interest.
An example of this sort of research can be seen in Christine Percheski’s work (2008) on cohort differences in women’s employment. Percheski compared women’s employment rates across seven different generational cohorts, from Progressives born between 1906 and 1915 to Generation Xers born between 1966 and 1975. She found, among other patterns, that professional women’s labor force participation had increased across all cohorts. She also found that professional women with young children from Generation X had higher labor force participation rates than similar women from previous generations, concluding that mothers do not appear to be opting out of the workforce as some journalists have speculated (Belkin, 2003).
All three types of longitudinal surveys share the strength that they permit a researcher to make observations over time. This means that if whatever behavior or other phenomenon the researcher is interested in changes, either because of some world event or because people age, the researcher will be able to capture those changes. Table 7.1 summarizes these three types of longitudinal surveys.
| Sample type | Description |
| --- | --- |
| Trend | Researcher examines changes in trends over time; the same people do not necessarily participate in the survey more than once. |
| Panel | Researcher surveys the exact same sample several times over a period of time. |
| Cohort | Researcher identifies a defining cohort based on an age- or time-related characteristic and then regularly surveys people in that cohort. |
Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time, but like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviors, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty. Imagine, for example, that you’re asked in a survey to respond to questions about where, how, and with whom you spent last Valentine’s Day. As last Valentine’s Day can’t have been more than 12 months ago, chances are good that you might be able to respond accurately to any survey questions about it. But now let’s say the researcher wants to know how last Valentine’s Day compares to previous Valentine’s Days, so she asks you to report on where, how, and with whom you spent the preceding six Valentine’s Days. How likely is it that you will remember? Will your responses be as accurate as they might have been had you been asked the question each year over the past 6 years, rather than asked to report on all years today?
In summary, when or with what frequency a survey is administered will determine whether your survey is cross-sectional or longitudinal. While longitudinal surveys are certainly preferable in terms of their ability to track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. As you may have guessed, the issues of time described here are not unique to survey research. Other methods of data collection can be cross-sectional or longitudinal; these are really matters of research design in general. But we’ve placed our discussion of these terms here because they are most commonly used by survey researchers to describe the type of survey administered. Another aspect of survey administration deals with how surveys are administered. We’ll examine that next.
Administration
Surveys vary not just in terms of when they are administered but also in terms of how they are administered.
Self-administered questionnaires
One common way to administer surveys is in the form of self-administered questionnaires. This means that a research participant is given a set of questions, in writing, to which they are asked to respond. Self-administered questionnaires can be delivered in hard copy format, typically via mail, or, increasingly commonly, online. We’ll consider both modes of delivery here.
Hard copy self-administered questionnaires may be delivered to participants in person or via snail mail. Perhaps you’ve taken a survey that was given to you in person; on many college campuses, it is not uncommon for researchers to administer surveys in large social science classes (as you might recall from the discussion in our chapter on sampling). If you are ever asked to complete a survey in a large group setting, it might be interesting to note how your perspective on the survey and its questions could be shaped by the new knowledge you’re gaining about survey research in this chapter.
Researchers may also deliver surveys in person by going door-to-door and either asking people to fill them out right away or making arrangements for the researcher to return to pick up completed surveys. Though the advent of online survey tools has made door-to-door delivery of surveys less common, it still happens on occasion. This mode of gathering data is apparently still used by political campaign workers, at least in some areas of the country.
If you are not able to visit each member of your sample personally to deliver a survey, you might consider sending your survey through the mail. While this mode of delivery may not be ideal (imagine how much less likely you’d probably be to return a survey that didn’t come with the researcher standing on your doorstep waiting to take it from you), sometimes it is the only available or the most practical option. A particular drawback of mailed surveys is that it can be difficult to convince people to take the time to complete and return them.
Survey researchers who deliver their surveys through the mail often provide some advance notice to respondents about the survey to get people thinking about and preparing to complete it. They may also follow up with their sample a few weeks after the survey has been sent out, both to remind those who have not yet completed the survey to please do so and to thank those who have already returned it. Most survey researchers agree that this sort of follow-up is essential for improving mailed surveys’ return rates (Babbie, 2010). Other helpful tools for increasing response rates are to create an attractive and professional survey, offer monetary incentives, and provide a pre-addressed, stamped return envelope.
Online surveys are becoming increasingly common, no doubt because they are easy to use, relatively cheap, and may be quicker than knocking on doors or waiting for mailed surveys to be returned. To deliver a survey online, a researcher may subscribe to a service that offers online delivery or use some delivery mechanism that is available for free. Both SurveyMonkey and Qualtrics offer free and paid online survey services. One advantage of using services like these, aside from the advantages of online delivery already mentioned, is that results can be provided to you in formats that are readable by data analysis programs such as SPSS. This saves you, the researcher, the step of having to manually enter data into your analysis program, as you would if you administered your survey in hard copy format.
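As a simple illustration of that last point, most online survey platforms can export responses as a spreadsheet-style file (for example, a CSV) that analysis software can read directly. The sketch below assumes a hypothetical export named survey_responses.csv and uses Python’s pandas library rather than SPSS; the column name is made up for illustration.

```python
import pandas as pd

# Load a hypothetical CSV export from an online survey platform.
responses = pd.read_csv("survey_responses.csv")

# Quick checks that would otherwise require manual data entry and tallying.
print(len(responses))                               # number of completed surveys
print(responses["year_in_school"].value_counts())   # distribution of one hypothetical item
```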
Many of the suggestions provided for improving the response rate on a hard copy questionnaire apply to online questionnaires as well. One difference of course is that the sort of incentives one can provide in an online format differ from those that can be given in person or sent through the mail. But this doesn’t mean that online survey researchers cannot offer completion incentives to their respondents. Sometimes they provide coupon codes for online retailers or the opportunity to provide contact information to participate in a raffle for a gift card or merchandise.
Online surveys, however, may not be accessible to individuals with limited, unreliable, or no access to the internet or less skill at using a computer. If those issues are common in your target population, online surveys may not work as well for your research study. While online surveys may be faster and cheaper than mailed surveys, mailed surveys are more likely to reach your entire sample but also more likely to be lost and not returned. The choice of which delivery mechanism is best depends on a number of factors, including your resources, the resources of your study participants, and the time you have available to distribute surveys and wait for responses. Understanding the characteristics of your study’s population is key to identifying the appropriate mechanism for delivering your survey.
Interviews
Sometimes surveys are administered by having a researcher pose questions verbally to respondents rather than having respondents read the questions on their own. Researchers using phone or in-person surveys use an interview schedule, which contains the list of questions and answer options that the researcher will read to respondents. Consistency in the way that questions and answer options are presented is very important with an interview schedule. The aim is to pose every question and answer option in the very same way to every respondent. This is done to minimize interviewer effect, or possible changes in the way an interviewee responds based on how or when questions and answer options are presented by the interviewer. Survey interviews may be recorded, but because questions tend to be closed ended, taking notes during the interview is less disruptive than it can be during a qualitative interview.
Interview schedules are used in phone or in-person surveys, which are also called quantitative interviews. In both cases, researchers pose questions verbally to participants. Phone surveys make it difficult to control the environment in which a person answers your survey. Another challenge comes from the increasing number of people who only have cell phones and do not use landlines (Pew Research, n.d.). Unlike landlines, cell phone numbers are portable across carriers, are associated with individuals rather than households, and keep the same area code when people move to a new geographical area, which makes it harder to reach respondents by location. On the other hand, computer-assisted telephone interviewing (CATI) programs have been developed to assist quantitative survey researchers. These programs allow an interviewer to enter responses directly into a computer as they are provided, thus saving hours of time that would otherwise have to be spent entering data into an analysis program by hand.
Quantitative interviews must also be administered in such a way that the researcher asks the same question the same way each time. While questions on hard copy questionnaires may create an impression based on the way they are presented, having a person administer questions introduces a slew of additional variables that might influence a respondent. Even a slight shift in emphasis on a word may bias the respondent to answer differently. Consistency is key with quantitative data collection—and human beings are not necessarily known for their consistency. Quantitative interviews can also help reduce a respondent’s confusion. If a respondent is unsure about the meaning of a question or answer option on a self-administered questionnaire, they probably won’t have the opportunity to get clarification from the researcher. An interview, on the other hand, gives the researcher an opportunity to clarify or explain any items that may be confusing. If a participant asks for clarification, the researcher must use pre-determined responses to make sure each quantitative interview is exactly the same as the others.
In-person surveys are conducted in the same way as phone surveys but must also account for non-verbal expressions and behaviors. In-person surveys have one distinct benefit: they are more difficult to say “no” to. Because the participant is already in the room and sitting across from the researcher, they are less likely to decline than if they clicked “delete” for an emailed online survey or hung up during a phone survey. On the other hand, in-person surveys are much more time consuming and expensive than mailed questionnaires. Thus, quantitative researchers may opt for self-administered questionnaires over in-person surveys on the grounds that they will be able to reach a large sample at a much lower cost than if they were to interact personally with each and every respondent.
Table 7.2 summarizes the various ways to collect survey data.
Table 7.2

| Administration type | Modes of delivery |
| --- | --- |
| Self-administered questionnaires | given in a large group setting; delivered in person; sent through the mail; online |
| Interviews | in person; telephone |
Key Takeaways
- Time is a factor in determining what type of survey a researcher administers; cross-sectional surveys are administered at one point in time, and longitudinal surveys are administered over time.
- Retrospective surveys offer some of the benefits of longitudinal research but also come with their own drawbacks.
- Self-administered questionnaires may be delivered in hard copy form to participants in person or via mail or online.
- Interview schedules are used in in-person or phone surveys.
- Each method of survey administration comes with benefits and drawbacks.
Glossary
- Cohort survey- describes how people with a defining characteristic change over time
- Cross-sectional surveys- surveys that are administered at just one point in time
- Interview schedules- the lists of questions and answer options that a researcher reads to respondents in a phone or in-person survey
- Longitudinal surveys- surveys that enable a researcher to make observations over some extended period of time
- Panel survey- describes how people in a specific group change over time, asking the same people each time the survey is administered
- Retrospective surveys- describe changes over time but are administered only once
- Self-administered questionnaires- a research participant is given a set of questions, in writing, to which they are asked to respond
- Trend survey- describes how people in a specific group change over time, asking different people each time the survey is administered
7.4 Designing effective questions and questionnaires
Learning Objectives
- Identify the steps one should take to write effective survey questions
- Describe some of the ways that survey questions might confuse respondents and how to overcome that possibility
- Apply mutual exclusivity and exhaustiveness to writing closed-ended questions
- Define fence-sitting and floating
- Describe the steps involved in constructing a well-designed questionnaire
- Discuss why piloting a questionnaire is important
Up to this point, we’ve considered several general points about surveys, including when to use them, some of their strengths and weaknesses, and how often and in what ways to administer surveys. In this section, we’ll get more specific and take a look at how to pose understandable questions that will yield useable data and how to present those questions on a questionnaire.
Asking effective questions
The first thing you need to do to write effective survey questions is identify what exactly you wish to know. Perhaps surprisingly, it is easy to forget to include important questions when designing a survey. Begin by looking at your research question. Perhaps you wish to identify the factors that contribute to students’ ability to transition from high school to college. To understand which factors shaped successful students’ transitions to college, you’ll need to include questions in your survey about all the possible factors that could contribute. How do you know what to ask? Consulting the literature on the topic will certainly help, but you should also take the time to do some brainstorming on your own and to talk with others about what they think may be important in the transition to college. Time and space limitations won’t allow you to include every single item you’ve come up with, so you’ll also need to think about ranking your questions so that you can be sure to include those that you view as most important. In your study, think back to your work on operationalization. How did you plan to measure your variables? If you planned to ask specific questions or use a scale, those should be in your survey.
We’ve discussed including questions on all topics you view as important to your overall research question, but you don’t want to take an everything-but-the-kitchen-sink approach by uncritically including every possible question that occurs to you. Doing so puts an unnecessary burden on your survey respondents. Remember that you have asked your respondents to give you their time and attention and to take care in responding to your questions; show them your respect by only asking questions that you view as important.
Once you’ve identified all the topics about which you’d like to ask questions, you’ll need to actually write those questions. Questions should be as clear and to the point as possible. This is not the time to show off your creative writing skills; a survey is a technical instrument and should be written in a way that is as direct and concise as possible. To reiterate, survey respondents have agreed to give their time and attention to your survey. The best way to show your appreciation for their time is to not waste it. Ensuring that your questions are clear and concise will go a long way toward showing your respondents the gratitude they deserve.
Related to the point about not wasting respondents’ time, make sure that every question you pose will be relevant to every person you ask to complete it. This means two things: first, that respondents have knowledge about whatever topic you are asking them about, and second, that respondents have experience with whatever events, behaviors, or feelings you are asking them to report. You probably wouldn’t want to ask a sample of 18-year-old respondents, for example, how they would have advised President Reagan to proceed when news of the United States’ sale of weapons to Iran broke in the mid-1980s. For one thing, few 18-year-olds are likely to have any clue about how to advise a president. Furthermore, the 18-year-olds of today were not even alive during Reagan’s presidency, so they have had no experience with the Iran-Contra affair about which they are being questioned. In our example of the transition to college, heeding the criterion of relevance would mean that respondents must understand what exactly you mean by “transition to college” if you are going to use that phrase in your survey and that respondents must have actually experienced the transition to college themselves.
If you decide that you do wish to pose some questions about matters with which only a portion of respondents will have had experience, it may be appropriate to introduce a filter question into your survey. A filter question is designed to identify some subset of survey respondents who are asked additional questions that are not relevant to the entire sample. Perhaps in your survey on the transition to college you want to know whether substance use plays any role in students’ transitions. You may ask students how often they drank during their first semester of college. But this assumes that all students drank. Certainly, some may have abstained from using alcohol, and it wouldn’t make any sense to ask the nondrinkers how often they drank. Nevertheless, it seems reasonable that drinking frequency may have an impact on someone’s transition to college, so it is probably worth asking this question even if doing so means it will not be relevant for some respondents. This is just the sort of instance when a filter question would be appropriate. With a filter question such as question 10 in Figure 7.1, you can filter out respondents who have not had alcohol from answering questions about their alcohol use.
There are some ways of asking questions that are bound to confuse many survey respondents. Survey researchers should take great care to avoid these kinds of questions. These include questions that pose double negatives, those that use confusing or culturally specific terms, and those that ask more than one question within a single question. Any time respondents are forced to decipher questions that use double negatives, confusion is bound to ensue. Taking the previous question about drinking as our example, what if we had instead asked, “Did you not abstain from drinking during your first semester of college?” This example is obvious, but hopefully it drives home the point to be careful about question wording so that respondents are not asked to decipher double negatives. In general, avoiding negative terms in your question wording will help to increase respondent understanding.
You should also avoid using terms or phrases that may be regionally or culturally specific (unless you are absolutely certain all your respondents come from the region or culture whose terms you are using). A similar issue arises when you use jargon, or technical language, that people do not commonly know. For example, if you asked adolescents how they experience the imaginary audience, they likely would not be able to link that term to the concepts from David Elkind’s theory. Instead, you would need to break down that term into language that is easier to understand and common to adolescents.
Asking multiple questions as though they are a single question can also confuse survey respondents. There’s a specific term for this sort of question; it is called a double-barreled question. Using our example of the transition to college, Figure 7.2 shows a double-barreled question.
Do you see what makes the question double-barreled? How would someone respond if they felt their college classes were more demanding but also less interesting than their high school classes? Or less demanding but more interesting? Because the question combines “demanding” and “interesting,” there is no way to respond yes to one criterion but no to the other.
Another thing to avoid when constructing survey questions is the problem of social desirability. We all want to look good, right? And we all probably know the politically correct response to a variety of questions whether we agree with the politically correct response or not. In survey research, social desirability refers to the idea that respondents will try to answer questions in a way that will present them in a favorable light. (You may recall we covered social desirability bias in Chapter 5.) Let’s go back to our example about transitioning to college to explore this concept further.
Perhaps we decide that to understand the transition to college, we need to know whether respondents ever cheated on an exam in high school or college. Cheating on exams is generally frowned upon. So it may be difficult to get people taking a survey to admit to cheating on an exam. But if you could guarantee respondents’ confidentiality, or even better, their anonymity, chances are much better that they will be honest about having engaged in this socially undesirable behavior. Another way to avoid problems of social desirability is to try to phrase difficult questions in the most benign way possible. Earl Babbie (2010) offers a useful suggestion for helping you do this—simply imagine how you would feel responding to your survey questions. If you would be uncomfortable, chances are others would as well.
Finally, it is important to get feedback on your survey questions from as many people as possible, especially people who are like those in your sample. Now is not the time to be shy. Ask your friends for help, ask your mentors for feedback, ask your family to take a look at your survey as well. The more feedback you can get on your survey questions, the better the chances that you will come up with a set of questions that are understandable to a wide variety of people and, most importantly, to those in your sample.
In sum, in order to pose effective survey questions, researchers should do the following:
- Identify what it is they wish to know.
- Keep questions clear and succinct.
- Make questions relevant to respondents.
- Use filter questions when necessary.
- Avoid questions that are likely to confuse respondents—including those that use double negatives, use culturally specific terms or jargon, or pose more than one question at a time.
- Imagine how respondents would feel responding to questions.
- Get feedback, especially from people who resemble those in the researcher’s sample.
Response options
While posing clear and understandable questions in your survey is certainly important, so too is providing respondents with unambiguous response options. Response options are the answers that you provide to the people taking your survey. Generally, respondents will be asked to choose a single (or best) response to each question you pose, though certainly it makes sense in some cases to instruct respondents to choose multiple response options. One caution to keep in mind when accepting multiple responses to a single question, however, is that doing so may add complexity when it comes to tallying and analyzing your survey results.
Offering response options assumes that your questions will be closed-ended questions. In a quantitative written survey, which is the type of survey we’ve been discussing here, chances are good that most, if not all, your questions will be closed-ended. This means that you, the researcher, will provide respondents with a limited set of options for their responses. To write an effective closed-ended question, there are a couple of guidelines worth following. First, be sure that your response options are mutually exclusive. Look back at Figure 7.1, which contains questions about how often and how many drinks respondents consumed. Do you notice that there are no overlapping categories in the response options for these questions? This is another one of those points about question construction that seems fairly obvious but that can be easily overlooked. Response options should also be exhaustive. In other words, every possible response should be covered in the set of response options that you provide. For example, note that in question 10a in Figure 7.1, we have covered all possibilities—those who drank, say, an average of once per month can choose the first response option (“less than one time per week”) while those who drank multiple times a day each day of the week can choose the last response option (“7+”). All the possibilities in between these two extremes are covered by the middle three response options.
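To see these two criteria in code, here is a minimal sketch of how a set of frequency categories could be kept mutually exclusive and exhaustive when responses are eventually coded. The exact labels and cut points from Figure 7.1 aren’t reproduced here, so the categories below are hypothetical and assume respondents report a whole number of drinking occasions per week.

```python
def frequency_category(times_per_week: int) -> str:
    """Assign a whole-number count to exactly one category (mutually exclusive)
    while covering every possible non-negative value (exhaustive)."""
    if times_per_week < 1:
        return "less than one time per week"
    elif times_per_week <= 2:
        return "1-2 times per week"
    elif times_per_week <= 4:
        return "3-4 times per week"
    elif times_per_week <= 6:
        return "5-6 times per week"
    else:
        return "7+ times per week"

# Every possible whole-number answer falls into one and only one category.
for count in range(0, 9):
    print(count, frequency_category(count))
```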
Surveys need not be limited to closed-ended questions. Sometimes survey researchers include open-ended questions in their survey instruments as a way to gather additional details from respondents. An open-ended question does not include response options; instead, respondents are asked to reply to the question in their own way, using their own words. These questions are generally used to find out more about a survey participant’s experiences or feelings about whatever they are being asked to report in the survey. If, for example, a survey includes closed-ended questions asking respondents to report on their involvement in extracurricular activities during college, an open-ended question could ask respondents why they participated in those activities or what they gained from their participation. While responses to such questions may also be captured using a closed-ended format, allowing participants to share some of their responses in their own words can make the experience of completing the survey more satisfying to respondents and can also reveal new motivations or explanations that had not occurred to the researcher.
Earlier in this section, we discussed double-barreled questions, but response options can also be double barreled, and this should be avoided. Figure 7.3 provides an example of a question that uses double-barreled response options.
Other things to avoid when it comes to response options include fence-sitting and floating. Fence-sitters are respondents who choose neutral response options, even if they have an opinion. This can occur if respondents are given, say, five rank-ordered response options, such as strongly agree, agree, no opinion, disagree, and strongly disagree. You’ll remember this is called a Likert scale. Some people will be drawn to respond “no opinion” even if they have an opinion, particularly if their true opinion is not a socially desirable one. Floaters, on the other hand, are those who choose a substantive answer to a question when, really, they don’t understand the question or don’t have an opinion. If a respondent is only given four rank-ordered response options, such as strongly agree, agree, disagree, and strongly disagree, those who have no opinion have no choice but to select a response that suggests they have one.
As you can see, floating is the flip side of fence-sitting. Thus, the solution to one problem is often the cause of the other. How you decide which approach to take depends on the goals of your research. Sometimes researchers specifically want to learn something about people who claim to have no opinion. In this case, allowing for fence-sitting would be necessary. Other times researchers feel confident their respondents will all be familiar with every topic in their survey. In this case, perhaps it is okay to force respondents to choose an opinion. Still other times, researchers can provide a scale with anchors at either end and ask the respondent to indicate where their answer fits between the two anchors. An example would be a question that says, “On a scale from 0 to 10, where 0 is completely disagree and 10 is completely agree, what number would indicate your level of agreement?” There is no always-correct solution to either problem.
Finally, using a matrix is a nice way of streamlining response options. A matrix is a question type that lists a set of questions for which the answer categories are all the same. If you have a set of questions for which the response options are the same, it may make sense to create a matrix rather than posing each question and its response options individually. Not only will this save you some space in your survey, but it will also help respondents progress through your survey more easily. A sample matrix can be seen in Figure 7.4.
Using standardized instruments
You may be thinking that writing good survey questions and clear responses is a complicated task with a lot of pitfalls. In many ways it is! The good news is that for many of the constructs you would like to measure, other researchers have already designed and tested survey questions. You may remember from Chapter 5 that there are scales, indices, and typologies to measure variables. Many of these instruments have already demonstrated reliability and validity. If there are validated instruments available, it is always advisable to use them rather than to write your own survey questions. Not only do you save time and effort, but you can have a fair amount of confidence that the validated instruments will avoid many of the question-writing pitfalls discussed above.
Designing questionnaires
In addition to constructing quality questions and posing clear response options, you’ll also need to think about how to present your written questions and response options to survey respondents. Questions are presented on a questionnaire, which is the document (either hard copy or online) that contains all your survey questions for respondents to read and answer. Designing questionnaires takes some thought.
One of the first things to do once you’ve come up with a set of survey questions you feel confident about is to group those questions thematically. In our example of the transition to college, perhaps we’d have a few questions asking about study habits, others focused on friendships, and still others on exercise and eating habits. Those may be the themes around which we organize our questions. Or perhaps it would make more sense to present any questions we had about pre-college life and then present a series of questions about life after beginning college. The point here is to be deliberate about how you present your questions to respondents.
Once you have grouped similar questions together, you’ll need to think about the order in which to present those question groups. Most survey researchers agree that it is best to begin a survey with questions that will make respondents want to continue (Babbie, 2010; Dillman, 2000; Neuman, 2003). In other words, don’t bore respondents, but don’t scare them away either. There’s some disagreement over where on a survey to place demographic questions, such as those about a person’s age, gender, and race. On the one hand, placing them at the beginning of the questionnaire may lead respondents to think the survey is boring, unimportant, and not something they want to bother completing. On the other hand, these are important pieces of data, and you don’t want participants to quit the survey without providing their demographic information. Another thing to consider is the placement of sensitive or difficult topics, such as child sexual abuse or other criminal activity. You don’t want to scare respondents away or shock them by beginning with your most intrusive questions.
In truth, the order in which you present questions on a survey is best determined by the unique characteristics of your research—only you, the researcher, hopefully in consultation with people who are willing to provide you with feedback, can determine how best to order your questions. To do so, think about the unique characteristics of your topic, your questions, and most importantly, your sample. Keeping in mind the characteristics and needs of the people you will ask to complete your survey should help guide you as you determine the most appropriate order in which to present your questions.
You’ll also need to consider the time it will take respondents to complete your questionnaire. Surveys vary in length, from just a page or two to a dozen or more pages, which means they also vary in the time it takes to complete them. How long to make your survey depends on several factors. First, what is it that you wish to know? Wanting to understand how grades vary by gender and year in school certainly requires fewer questions than wanting to know how people’s experiences in college are shaped by demographic characteristics, college attended, housing situation, family background, college major, friendship networks, and extracurricular activities. Keep in mind that even if your research question requires a sizable number of questions be included in your questionnaire, do your best to keep the questionnaire as brief as possible. Any hint that you’ve thrown in a bunch of useless questions just for the sake of it will turn off respondents and may make them not want to complete your survey.
Second, and perhaps more important, is the length of time respondents are likely to be willing to spend completing the questionnaire. If you are studying college students, asking them to use their precious fun time away from studying to complete your survey may mean they won’t want to spend more than a few minutes on it. But if you have the endorsement of a professor who is willing to allow you to administer your survey in class, students may be willing to give you a little more time (though perhaps the professor will not). The time that survey researchers ask respondents to spend on questionnaires varies greatly. Some researchers advise that surveys should not take longer than about 15 minutes to complete (as cited in Babbie, 2010), whereas others suggest that up to 20 minutes is acceptable (Hopper, 2012). As with question order, there is no clear-cut, always-correct answer about questionnaire length. The unique characteristics of your study and your sample should be considered to determine how long to make your questionnaire.
A good way to estimate the time it will take respondents to complete your questionnaire is through piloting the questionnaire. Piloting allows you to get feedback on your questionnaire so you can improve it before you actually administer it. Piloting can be quite expensive and time consuming if you wish to test your questionnaire on a large sample of people who very much resemble the sample to whom you will eventually administer the finalized version of your questionnaire. But you can learn a lot and make great improvements to your questionnaire simply by pretesting with a small number of people to whom you have easy access (perhaps you have a few friends who owe you a favor). By piloting your questionnaire, you can find out how understandable your questions are, get feedback on question wording and order, find out whether any of your questions are boring or offensive, and learn whether there are places where you should have included filter questions. You can also time respondents as they take your survey. This will give you a good idea about the estimate to provide when you administer your survey for your study and whether you have some wiggle room to add additional items or need to cut a few items.
Perhaps this goes without saying, but your questionnaire should also have an attractive design. A messy presentation style can confuse respondents or, at the very least, annoy them. Be brief, to the point, and as clear as possible. Avoid cramming too much into a single page. Make your font size readable (at least 12 point or larger, depending on the characteristics of your sample), leave a reasonable amount of space between items, and make sure all instructions are exceptionally clear. Think about books, documents, articles, or web pages that you have read yourself—which were relatively easy to read and easy on the eyes and why? Try to mimic those features in the presentation of your survey questions.
Key Takeaways
- Brainstorming and consulting the literature are two important early steps to take when preparing to write effective survey questions.
- Make sure your survey questions will be relevant to all respondents and that you use filter questions when necessary.
- Getting feedback on your survey questions is a crucial step in the process of designing a survey.
- When it comes to creating response options, the solution to the problem of fence-sitting might cause floating, whereas the solution to the problem of floating might cause fence sitting.
- Piloting is an important step for improving a survey before actually administering it.
Glossary
- Closed-ended questions- questions for which the researcher offers response options
- Double-barreled question- a question that asks two different questions at the same time, making it difficult to respond accurately
- Fence-sitters- respondents who choose neutral response options, even if they have an opinion
- Filter question- question that identifies some subset of survey respondents who are asked additional questions that are not relevant to the entire sample
- Floaters- respondents who choose a substantive answer to a question when they really don’t understand the question or don’t have an opinion
- Matrix question- lists a set of questions for which the answer categories are all the same
- Open-ended questions- questions for which the researcher does not include response options