
7 Survey Data and Question Design

Learning Objectives for This Chapter

  • Identify the essential components of surveys.
  • Describe the benefits of surveys and their potential drawbacks.
  • Evaluate the steps needed to write effective survey questions and answers.
  • Recognize the basic components of quantitative data analysis.

Introduction

Surveys are among the methods most frequently encountered in communication research. In this chapter, we will delve into the basic principles, uses, benefits, and drawbacks of surveys. By understanding these aspects, you will gain a clearer sense of the role surveys play in communication studies. This knowledge will empower you to analyse research studies that utilise surveys, whether you’re a media professional or someone who critically evaluates research findings.

Survey Research: What Is It and When Should It Be Used?

A survey is a methodical approach to collecting data, opinions, attitudes, or behaviours from a targeted group of individuals, typically through the administration of structured questionnaires, interviews, or online forms. This method is a powerful tool for researchers to systematically gather information that can shed light on a variety of topics across various disciplines.

The process of conducting a survey involves formulating a set of questions designed to elicit specific responses relevant to the research objectives. These questions can cover a diverse range of subjects, such as personal preferences, beliefs, experiences, behaviours, or demographic information. By employing a standardised format, surveys ensure consistency in data collection, making it easier to analyse and interpret the results.

One of the primary reasons for the widespread use of surveys is their capacity to provide quantitative data. This data can be subjected to statistical analysis, enabling researchers to identify trends, correlations, and patterns within the responses. As a result, surveys offer a valuable means of quantifying and measuring phenomena that might otherwise be challenging to assess numerically.

Moreover, surveys are particularly valuable when researchers seek to understand the attitudes and perspectives of a larger population. Through careful sampling techniques, a relatively small group of participants, known as a sample, can be selected to represent the broader target population. This allows researchers to make inferences about the entire population based on the sample’s responses.
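The idea of selecting a sample to stand in for a population can be sketched in a few lines of Python. The following is a hypothetical illustration of simple random sampling, with an invented frame of 1,000 numbered people; it is a sketch of the principle, not a recipe for a real sampling design:

```python
import random

# Hypothetical illustration of simple random sampling: drawing a small
# sample from a larger population frame. The IDs and sizes are invented.
random.seed(42)  # fixed seed so the example is reproducible

population = list(range(1, 1001))        # a frame of 1,000 people
sample = random.sample(population, 50)   # 50 people drawn without replacement

print(len(sample))                       # size of the sample
print(len(set(sample)))                  # no one is selected twice
```

Real projects rely on carefully constructed sampling frames and often more elaborate designs (stratified or cluster sampling, for example), but the core principle of random selection without replacement is the same.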

The versatility of surveys extends to their applications across numerous fields. In business and marketing, surveys help organisations gauge customer satisfaction, gather feedback on products and services, and identify areas for improvement. In social and political sciences, surveys are pivotal in measuring public opinion, tracking societal trends, and informing policy decisions. Educational researchers use surveys to assess student performance, evaluate teaching methodologies, and enhance learning environments. Additionally, health professionals employ surveys to study patient preferences, assess healthcare outcomes, and inform medical interventions. More detail will be given regarding their specific use in communication studies in the sections that follow.

What are the Different Types of Surveys that are Common?

Surveys come in many varieties in terms of both time—when or with what frequency a survey is administered—and administration—how a survey is delivered to respondents. This section will examine types of surveys that exist in terms of both time and administration.

With regard to time, there are two main types of surveys: cross-sectional and longitudinal. Cross-sectional surveys are administered at just one point in time. These surveys offer researchers a sort of snapshot in time: an idea of how things are for respondents at the particular moment the survey is administered. One problem with cross-sectional surveys is that the events, opinions, behaviours, and other phenomena such surveys are designed to assess do not generally remain stagnant. Therefore, generalising from a cross-sectional survey can be tricky; perhaps you can say something about the way things were at the moment you administered your survey, but it is difficult to know whether things remained that way for long afterwards. Cross-sectional surveys have many important uses; however, researchers must remember what they have captured by administering a cross-sectional survey: a snapshot of life at the time the survey was administered.

One way to overcome this occasional problematic aspect of cross-sectional surveys is to administer a longitudinal survey. Longitudinal surveys enable a researcher to make observations over an extended period. There are several types of longitudinal surveys, including trend, panel, and cohort surveys. We will discuss all three types here, along with another type of survey called retrospective. Retrospective surveys fall somewhere in between cross-sectional and longitudinal surveys.

The first type of longitudinal survey is called a trend survey. Researchers conducting trend surveys are interested in how people’s inclinations change over time, i.e., trends. The Gallup opinion polls are an excellent example of trend surveys. To learn about how public opinion changes over time, Gallup administers the same questions to people at different times.

The second type of longitudinal study is called a panel survey. Unlike in a trend survey, the same people participate in a panel survey each time it is administered. As you might imagine, panel studies can be difficult and costly. Imagine administering a survey to the same 100 people every year for five years in a row. Keeping track of where people live, when they move, and when they die takes resources that researchers often do not have. When those resources are available, however, the results can be quite powerful.

Another type of longitudinal survey is a cohort survey. In a cohort survey, a researcher identifies some category of people who are of interest and then regularly surveys people who fall into that category. The same people do not necessarily participate from year to year, but all participants must meet the categorical criteria that define the researcher’s primary interest. Common cohorts that may be of interest to researchers include: people of particular generations or those who were born around the same time period; graduating classes; people who began work in a given industry at the same time; or perhaps people who have some specific life experience in common.

All three types of longitudinal surveys permit a researcher to make observations over time. This means that if the behaviour or other phenomenon that interests the researcher changes, either because of some world event or because people age, the researcher will be able to capture those changes.

Finally, retrospective surveys are similar to other longitudinal studies in that they deal with changes over time but, like a cross-sectional study, they are administered only once. In a retrospective survey, participants are asked to report events from the past. By having respondents report past behaviours, beliefs, or experiences, researchers are able to gather longitudinal-like data without actually incurring the time or expense of a longitudinal survey. Of course, this benefit must be weighed against the possibility that people’s recollections of their pasts may be faulty.

When or with what frequency a survey is administered will determine whether your survey is cross-sectional or longitudinal. While longitudinal surveys are preferable in terms of their ability to track changes over time, the time and cost required to administer a longitudinal survey can be prohibitive. As you may have guessed, the issues of time described here are not necessarily unique to survey research. Other methods of data collection can be cross-sectional or longitudinal—these are really issues of research design. We have placed our discussion of these terms here because they are most commonly used by survey researchers to describe the type of survey administered. Another aspect of survey administration deals with how surveys are administered, and we will examine that next.

Administering Surveys

There are several methods for administering surveys, each with its own advantages and drawbacks. The choice of survey administration method depends on factors such as the research objectives, target population, resources available, and the desired level of participant engagement. Below are some common ways to administer surveys, along with their pros and cons.

Online Surveys

Online surveys are administered through digital platforms, such as web-based forms or survey software. They offer convenience and accessibility, allowing participants to complete surveys at their own pace and from various locations. Online surveys can reach a broad audience quickly and are cost-effective. However, they may exclude individuals with limited internet access, and response rates can vary. Additionally, participants might rush through the survey or provide inaccurate responses.

Paper-and-Pencil Surveys

Paper surveys involve distributing printed questionnaires to participants who complete and return them manually. This method can be suitable for populations with limited internet connectivity or familiarity with digital devices. Paper surveys provide a tangible format that some participants may find more comfortable. However, data entry and analysis can be time-consuming, and data quality might suffer from errors or missing information.

Telephone Surveys

Telephone surveys involve trained interviewers contacting participants by phone and conducting the survey verbally. They offer a personal touch and can clarify questions for participants in real-time. Telephone surveys are suitable for populations without internet access and may yield higher response rates compared to online methods. However, they can be labour-intensive, expensive, and participants may be less willing to engage in lengthy phone interviews.

Face-to-Face Surveys

In face-to-face surveys, interviewers administer the survey in person, often using paper questionnaires or electronic devices. This method allows for clarification of questions and can yield higher completion rates. It is suitable for gathering detailed information and reaching diverse populations. However, face-to-face surveys are time-consuming, costly, and may introduce interviewer bias, potentially influencing participant responses.

Mixed-Mode Surveys

Mixed-mode surveys combine two or more administration methods to enhance reach and data collection. For example, participants could start with an online survey and complete a follow-up interview by phone. Mixed-mode surveys capitalise on the strengths of each method while mitigating their weaknesses. However, coordinating multiple modes can be complex, and data comparability may be affected.

In conclusion, the method chosen for survey administration depends on various factors, including target population characteristics, research goals, resources, and data quality considerations. Online surveys offer accessibility but may lack inclusivity. Paper-and-pencil surveys provide a tangible option but require manual data entry. Telephone and face-to-face surveys offer personal interaction but can be resource-intensive. Mixed-mode surveys combine methods to optimise reach and data collection. Researchers should carefully weigh these pros and cons when selecting an administration approach to ensure the effectiveness and validity of their survey-based communication research.

When are Surveys Used in Communication Research?

Surveys are frequently employed in communication research across a variety of contexts to gain insights into people’s opinions, behaviours, and attitudes related to communication processes and media consumption. Here are some common scenarios where surveys are used in communication research:

  • Media Consumption and Preferences: Surveys are used to understand how individuals consume different types of media, such as television, radio, social media, and print. Researchers explore preferences, frequency of use, and the impact of various media on individuals’ lives.
  • Audience Analysis: Communication researchers use surveys to analyse audience demographics, interests, and preferences. This information helps media organisations tailor content and messages to specific target groups.
  • Media Effects: Surveys assess how exposure to different media messages influences attitudes, beliefs, and behaviours. Researchers investigate the impact of media on topics like body image, political opinions, and consumer behaviour.
  • Advertising and Marketing Research: Surveys are crucial for assessing the effectiveness of advertising campaigns, measuring brand awareness, and understanding consumer perceptions and purchasing behaviours.
  • Public Opinion and Social Issues: Communication scholars use surveys to gauge public opinion on social and political issues. This data informs debates, policy decisions, and advocacy efforts.
  • Communication Campaign Evaluation: Surveys help assess the success of communication campaigns, whether they are related to public health, social awareness, or behavioural change. Researchers measure campaign reach, message recall, and behaviour change among target audiences.
  • Educational Research: Communication scholars use surveys to study student engagement, classroom dynamics, and the effectiveness of teaching methods in communication-related courses.
  • Media Literacy and Digital Communication: Surveys are used to assess individuals’ media literacy levels, online behaviour, and attitudes toward technology and digital communication platforms.
  • Social Media Studies: Researchers utilise surveys to explore social media usage patterns, the impact of online communication on relationships, and perceptions of online privacy.
  • Organisational Communication: Surveys are employed to analyse employee communication satisfaction, organisational culture, and communication effectiveness within workplaces.
  • Entertainment Research: Surveys help researchers understand the appeal of various forms of entertainment, such as films, music, video games, and online content.

In essence, surveys are a versatile tool in communication research, providing quantitative data that support the understanding of communication dynamics, media effects, audience behaviours, and societal trends. Researchers use surveys to explore a wide range of communication-related phenomena and contribute to the advancement of communication theory and practice.

Pros and Cons of Survey Research

Surveys are a commonly employed method in communication research, offering valuable insights into individuals’ attitudes, behaviours, and perceptions within various communication contexts. However, like any research approach, surveys possess both strengths and weaknesses that researchers must consider when employing them in the study of communication phenomena.

Strengths of Surveys in Communication Research

Surveys allow researchers to gather data from a large number of participants relatively quickly. For instance, in a study examining media preferences, a survey can efficiently collect responses from hundreds or even thousands of individuals.

Moreover, surveys generate quantitative data that can be subjected to statistical analysis. For example, a survey about political attitudes can yield numerical data on the percentage of respondents supporting different political parties.

In addition, well-designed surveys with representative samples can provide insights that apply to a larger population. For instance, a survey about smartphone usage habits in a certain country can offer insights into broader trends within that population.

Surveys also allow for comparisons between different groups or across different time periods; a survey about television viewing habits can reveal differences between age groups or changes in viewing patterns over the years.

Finally, surveys minimise interviewer bias and can help ensure consistent data collection, contributing to the reliability of results. A survey asking participants about their perceptions of media bias can avoid potential interviewer influence on responses.

Weaknesses of Surveys in Communication Research

Participants might provide socially desirable answers or alter their responses based on the context, leading to inaccurate data. For example, participants may overstate their engagement with educational content to appear more diligent.

Surveys may struggle to capture the depth and nuances of communication experiences. In a survey about interpersonal communication, respondents may not be able to fully convey the emotional subtleties of a conversation.

The wording of survey questions can influence participant responses. Poorly worded questions can lead to confusion or misinterpretation. For instance, a question about “television viewing” without specifying streaming services might exclude relevant data.

If the sample does not represent the target population, findings may lack generalisability. For instance, if a survey on social media habits is conducted only among college students, the results may not accurately reflect the broader population.

Low participation rates can introduce selection bias and affect the reliability of results. In a survey about media trust, a low response rate may lead to skewed perceptions of media credibility.

In summary, surveys offer efficient data collection and quantitative analysis capabilities, enabling researchers to explore communication phenomena across various contexts. However, potential response biases, limitations in capturing qualitative nuances, question wording effects, sample bias, and low response rates necessitate careful consideration and methodological rigour when designing and interpreting survey-based communication research.

Design Considerations for Survey Research

Some notes on question design

Until now, we have explored various fundamental aspects of surveys, including their appropriate utilisation, advantages, disadvantages, and diverse methods of administration. In this section, we will delve into specifics, focusing on the art of formulating clear and comprehensible questions that yield actionable data, along with strategies for effectively presenting these questions on your questionnaire.

To construct questions that generate meaningful insights, researchers should consider the following guidelines.

  • Aim for Clarity and Conciseness: Craft questions that are succinct and unambiguous. Survey questions should be straightforward, avoiding unnecessary complexity. Lengthy or intricate phrasing can perplex respondents and compromise data accuracy. For instance, instead of asking, “In your daily routine, how frequently do you engage in the act of viewing television programs, including both cable and satellite channels, on a scale from never to always?” simplify to “How often do you watch TV?”
  • Make sure Questions are Relevant: Frame questions pertinent to your respondents’ knowledge and experiences. Ensure that your inquiries match their familiarity with the subject matter. Inquiring about Brian Mulroney’s decisions during a historical event is irrelevant when surveying today’s youth, who have no personal experience or understanding of the event. Similarly, asking respondents about their sentiments regarding Canadian gun control legislation might be outside the scope of their knowledge.
  • Avoid Double Negatives: Construct questions that are free from the use of double negatives that may hinder comprehension. For instance, instead of “Did you not find the classes in your first semester to be less demanding and interesting than your high school classes?” rephrase as “Did you find the classes in your first semester more demanding and interesting than your high school classes?”
  • Consider Cultural and Regional Sensitivity: Ensure that your survey questions are culturally and regionally inclusive, avoiding terms or references that may not be universally understood. This ensures that respondents from diverse backgrounds can accurately interpret and respond to the questions. Instead of asking about “pub hopping” in a survey targeting an international audience, opt for a more universally recognisable phrasing like “visiting multiple bars or pubs in one evening.” Abbreviations, which are shortcuts understood only within a specific context or group, should be avoided as well. For example, write Mount Royal University rather than MRU, and communication studies rather than COMM.
  • Avoid Double-Barrelled Questions: Refrain from combining multiple questions into a single sentence, as this can lead to unclear interpretations and unreliable responses. Each question should focus on a single aspect to elicit accurate and meaningful data. Rather than asking, “Did you find the classes you took in your first semester of college to be more demanding and interesting than your high school classes?”, separate this into two distinct questions: “Did you find the classes more demanding than your high school classes?” and “Did you find the classes more interesting than your high school classes?”
  • Avoid Leading Questions: A leading question is a type of survey or interview question that suggests a particular answer or influences the respondent’s opinion through its wording or phrasing. Leading questions can unintentionally bias the participants and lead them to provide responses that may not accurately reflect their true beliefs, attitudes, or experiences. Instead of framing a question to imply a specific answer, use neutral and unbiased language that does not push respondents toward a particular response. Some examples are below:

Leading Question: “Don’t you agree that our new product is the best in the market?”

Improved Question: “What are your thoughts about our new product?”

Leading Question: “Do you think our environmentally friendly practices are better than our competitors’ insufficient efforts?”

Improved Question: “How do you view our environmental practices compared to our competitors?”

  • Avoid Prestige Bias Questions: A prestige bias question is designed to elicit responses that portray the respondent in a positive or socially desirable light. These questions often tap into a desire to present oneself favourably to others or to conform to perceived societal norms. Respondents may choose options that align with what they believe is socially esteemed, rather than accurately reflecting their true behaviours or attitudes. Avoid questions that may make respondents feel pressured to give a particular response based on societal norms or expectations.

Prestige Bias Question: “Do you support our campaign to end poverty?”

Improved Question: “What are your thoughts on our campaign to address poverty?”

Prestige Bias Question: “Experts suggest we can make a difference with our everyday actions. Do you regularly engage in environmentally friendly practices?”

Improved Question: “How often do you engage in environmentally friendly practices?”

A leading question and a prestige bias question both involve influencing respondents’ answers, but they do so in slightly different ways and for different reasons. A leading question guides or steers respondents toward a specific response, whether intentionally or unintentionally, and can bias survey results by prompting answers that do not accurately reflect respondents’ true beliefs, opinions, or experiences. A prestige bias question, by contrast, invites respondents to answer in a way that portrays them favourably or conforms to perceived societal norms, rather than accurately reflecting their true behaviours or attitudes.

  • Seek Feedback: Prioritise obtaining feedback on your survey questions, particularly from individuals who resemble those in your sample. Multiple perspectives enhance the likelihood of creating questions that are clear and comprehensible to a diverse range of participants. Engage with individuals who share characteristics with your intended participants to refine question clarity and relevance. A great way to do this is a pretest before the official data collection phase of your project. The primary purpose of a pretest is to identify and address any potential issues, errors, or ambiguities in the survey instrument before launching it to the larger sample.

In terms of design:

  • Strategically Use Filter Questions: Employ filter questions judiciously to identify specific subsets of participants for targeted follow-up inquiries. This approach streamlines the survey and tailors questions to relevant respondents. As an example, begin with a filter question like “Do you own a pet?” before delving into pet-related queries. Respondents answering “yes” proceed to the next section, ensuring the relevance of subsequent questions.
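The skip logic behind a filter question can be sketched as a simple branch. This is a hypothetical Python fragment (the function name and section labels are invented for illustration) that routes respondents based on their answer to the filter:

```python
# Hypothetical sketch of filter-question (skip) logic: respondents who
# answer "yes" to the filter see the follow-up section; others skip it.
def route_after_filter(owns_pet: str) -> str:
    if owns_pet.strip().lower() == "yes":
        return "pet questions"    # follow-up section for pet owners
    return "next section"         # everyone else skips ahead

print(route_after_filter("Yes"))  # a pet owner is routed to the follow-up
print(route_after_filter("no"))   # a non-owner skips it
```

Most survey platforms implement this branching for you; the point is simply that each filter question partitions respondents so follow-up questions reach only the people to whom they apply.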

By adhering to these practical guidelines, researchers can construct survey questions that effectively elicit valuable and reliable data. These considerations ensure that respondents comprehend and respond candidly, ultimately enhancing the quality and usability of survey results.

Some notes on response options

Ensuring clarity in your survey questions is essential, but the clarity of response options is equally vital.

A Likert scale is a widely used survey tool that measures respondents’ attitudes or opinions on a given topic. It consists of statements to which participants rate their agreement using a numerical scale, typically ranging from “Strongly Disagree” to “Strongly Agree.” This structured approach provides quantifiable data that can be statistically analysed, allowing researchers to draw conclusions and identify trends. Likert scales are adaptable, standardised, and suitable for large samples, making them effective in collecting and comparing subjective data across diverse groups. They offer clear visualisation and are widely recognised.
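Because Likert responses map onto an ordered numerical scale, they are straightforward to summarise once coded. This hypothetical Python sketch (the responses are invented) shows the coding step:

```python
# Hypothetical example: coding five-point Likert responses as numbers
# so they can be summarised. The labels and responses are invented.
scale = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly Agree": 5,
}

responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]

coded = [scale[r] for r in responses]   # numeric code for each respondent
mean_score = sum(coded) / len(coded)    # average agreement on the 1-5 scale
print(coded, mean_score)
```

Note that taking a mean treats the Likert codes as interval data, which is itself a methodological choice; for strictly ordinal treatment, some analysts prefer medians or frequency counts instead.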

Most survey researchers prefer closed-ended questions with predetermined choices over open-ended questions because they provide structured response options that are easier to analyse and quantify. This format simplifies data collection, analysis, and comparison across respondents, enhancing the efficiency and reliability of survey results. Additionally, closed-ended questions help minimise respondent fatigue and maintain survey engagement, making them a practical choice for gathering large amounts of consistent and actionable data.

Below are some other key tips.

  • Aim for Single-Response Questions: Generally, respondents select a single, or occasionally multiple, response options for each question. However, allowing multiple responses to a single question can introduce complexities during result analysis. A good rule of thumb is to aim for only one selected response per question.
  • Ensure Answers are Mutually Exclusive: Mutually exclusive means that response categories should not overlap. Imagine you are conducting a survey about people’s preferred age ranges for certain activities. You have a question asking respondents to select their preferred age group for participating in outdoor sports and you offer the following as choices:
    • 18-30 years
    • 30-40 years
    • 40-50 years
    • 50-55 years
    • 50+ years

In this case, the response categories are not mutually exclusive because adjacent age ranges overlap. Respondents who are 30, 40, or 50 could pick multiple responses that are all correct. Thus, the response categories for age groups in this example are not mutually exclusive.
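One way to check that categories are mutually exclusive is to verify that every possible value maps to exactly one option. This hypothetical Python sketch uses non-overlapping age ranges, so the boundary ages that were ambiguous above each land in a single category:

```python
# Hypothetical illustration: with non-overlapping ranges, every age maps
# to exactly one category, so the options are mutually exclusive (and,
# with the under-18 and 50+ categories, exhaustive as well).
def age_group(age: int) -> str:
    if age < 18:
        return "Under 18"
    elif age <= 29:
        return "18-29 years"
    elif age <= 39:
        return "30-39 years"
    elif age <= 49:
        return "40-49 years"
    return "50+ years"

print(age_group(30))   # an age that overlapped before now has one home
print(age_group(50))
```

The same logic underlies good questionnaire design even without code: walk through the boundary values and confirm each one fits exactly one response option.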

  • Ensure Response Options are Exhaustive: This means that responses should include every potential answer. For instance, asking “How often do you exercise?” with the following response options is not exhaustive:
    • 1-2 times a week
    • 3-4 times a week
    • 5 or more times a week

The response options are not exhaustive because they do not cover all potential exercise options. Respondents who exercise frequently might not find a suitable option, leading to inaccurate data. An easy fix is adding “I do not exercise” as it provides a comprehensive choice for those who do not engage in physical activity. “Other (please specify)” is also a great choice if you want to allow some freedom of choice and improve your options for future instrument use.

  • Avoid Offering Vague or Unclear Responses: Using the same question as above, “How often do you exercise?”, offering the following response options is problematic:
    • Rarely
    • Sometimes
    • Often

Without specific frequency ranges, respondents may interpret these terms differently based on their individual perceptions, leading to subjective and potentially inconsistent responses. This is why the choices of 1-2 times a week, 3-4 times a week, 5 or more times a week, and “I do not exercise” are much better response options.

  • Avoid Response Options for Fence-Sitters and Floaters: Fence-sitters opt for neutral responses, even if they hold opinions, possibly due to socially sensitive views. Conversely, floaters select answers despite lacking understanding or opinions. Balancing these tendencies hinges on research goals. Delving into respondents with no opinion might be desirable in certain cases, while assuming respondent familiarity with all topics might warrant forcing an opinion choice.

For example, say you ask the following question: “On a scale of 1 to 5, how satisfied are you with the quality of customer service at our store?”

Response Options:

    • Very Dissatisfied
    • Somewhat Dissatisfied
    • Neutral
    • Somewhat Satisfied
    • Very Satisfied

The Fence-Sitter Response will be “Neutral” and the Floater Response “Very Satisfied.” In this scenario, the “Neutral” response (fence-sitter) doesn’t provide much insight into the respondent’s actual level of satisfaction, as it could indicate uncertainty or a lack of strong opinion. On the other hand, the “Very Satisfied” response (floater) might not accurately reflect the respondent’s true sentiment and could be chosen without genuine conviction.

A revised option for responses could be:

    • Very Dissatisfied
    • Somewhat Dissatisfied
    • Neither Satisfied nor Dissatisfied
    • Somewhat Satisfied
    • Very Satisfied

By adding the “Neither Satisfied nor Dissatisfied” option in the improved version, both the fence-sitter response (choosing “Neither Satisfied nor Dissatisfied”) and the floater response (choosing “Very Satisfied” without strong conviction) can be better addressed. This revised set of options allows respondents to express their true sentiment even if they feel their satisfaction level falls in between.

  • Aim For Balanced Response Options: A survey with balanced response options is more likely to measure what it intends to measure, improving the validity of the collected data. For example, there is a problem if you offer these responses:
    • Unhappy
    • Neutral
    • Happy
    • Very Happy

This scale is weighted toward the positive: there are two positive options and only one negative.

An improved response scale is:

    • Very Unhappy
    • Unhappy
    • Neutral
    • Happy
    • Very Happy

This scale provides a more balanced assessment of choices and does not skew the responses towards a more positive result.

In terms of design:

  • Consider a matrix question type that groups a set of questions under identical answer categories. This simplifies respondent navigation and maintains consistency throughout the survey.

A sample matrix can be seen in the figure below:

Figure 7.1

Sample of a Matrix Question

image

Other design tips

Designing effective surveys requires careful consideration to ensure accurate and meaningful data collection. Here are some other top survey design tips to help you create surveys that yield reliable and insightful results:

  • Provide Clear Instructions: Offer clear instructions at the beginning of the survey to guide participants on how to proceed, what’s expected, and how their responses will be used.
  • Consider Question Order: Organise questions logically so the survey flows naturally. Start with general and non-sensitive questions before progressing to more specific or sensitive ones.
  • Use a Mix of Question Types: Incorporate a variety of question types, including multiple-choice, Likert scale, open-ended, and demographic questions, to capture different aspects of the topic.
  • Keep it Concise: Keep the survey concise and focused to maintain participants’ interest and prevent survey fatigue. Long surveys can lead to incomplete responses or higher dropout rates.
  • Where Applicable, Include Progress Indicators: Include progress bars or indicators to show respondents how far they’ve come in the survey, helping to manage their expectations and encouraging completion.
  • Anonymity and Confidentiality: Assure respondents of the confidentiality or anonymity of their responses, especially when sensitive or personal information is being collected.
  • Test and Review: Before launching the survey, thoroughly review it for errors, typos, and inconsistencies. Test the survey on different devices and platforms to ensure a smooth experience.

Analysis of Survey Data

This text primarily focuses on designing research, collecting data, and becoming a knowledgeable and responsible research consumer. We will not spend as much time on data analysis or on what to do with our data once we have designed a study and collected it. However, we will spend some time in each of our data-collection chapters describing some important basics of data analysis that are unique to each method. Entire textbooks could be (and have been) written on data analysis. If you have ever taken a statistics class, you already know much about how to analyse quantitative survey data. Here, we will go over a few basics that can get you started as you begin to think about turning all those completed questionnaires into findings you can share.

From Completed Questionnaires to Analysable Data

It can be very exciting to receive those first few completed surveys back from respondents. Hopefully, you’ll even get more than a few back, and once you have a handful of completed questionnaires, your feelings may go from initial euphoria to dread. Data is fun and can also be overwhelming. The goal with data analysis is to be able to condense large amounts of information into usable and understandable chunks. Here we’ll describe just how that process works for survey researchers.

As mentioned, the hope is that you will receive a good portion of the questionnaires you distributed back in a completed and readable format. The number of completed questionnaires you receive divided by the number of questionnaires you distributed is your response rate. Let’s say your sample included 100 people and you sent questionnaires to each of those people. It would be wonderful if all 100 returned completed questionnaires, but the chances of that happening are about zero. If you’re lucky, perhaps 75 or so will return completed questionnaires. In this case, your response rate would be 75% (75 divided by 100). That’s pretty darn good. Though response rates vary, and researchers don’t always agree about what makes a good response rate, having three-quarters of your surveys returned would be considered good, even excellent, by most survey researchers. There has been lots of research done on how to improve a survey’s response rate.
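The response-rate arithmetic described above is straightforward and can be sketched in a few lines of Python (the function name here is just illustrative):

```python
# Response rate: completed questionnaires divided by questionnaires distributed.
def response_rate(completed, distributed):
    """Return the response rate as a percentage."""
    return completed / distributed * 100

# The example from the text: 75 of 100 questionnaires returned.
print(response_rate(75, 100))  # 75.0
```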

Suggestions include personalising questionnaires by, for example, addressing them to specific respondents rather than to some generic recipient such as “madam” or “sir”; enhancing the questionnaire’s credibility by providing details about the study, contact information for the researcher, and perhaps partnering with agencies likely to be respected by respondents, such as universities, hospitals, or other relevant organisations; sending out pre-questionnaire notices and post-questionnaire reminders; and including some token of appreciation with mailed questionnaires, even if small, such as a $1 bill.

The major concern with response rates is that a low rate of response may introduce nonresponse bias into a study’s findings. What if only those who have strong opinions about your study topic return their questionnaires? If that is the case, we may well find that our findings don’t at all represent how things really are or, at the very least, we are limited in the claims we can make about patterns found in our data. While high return rates are certainly ideal, a recent body of research shows that concern over response rates may be overblown. Several studies have even shown that low response rates did not make much difference in findings or in sample representativeness. For now, the jury may still be out on what makes an ideal response rate and on whether, or to what extent, researchers should be concerned about response rates. Nevertheless, certainly no harm can come from aiming for as high a response rate as possible.

Whatever your survey’s response rate, the major concern of survey researchers once they have their nice, big stack of completed questionnaires is condensing their data into manageable, and analysable, bits. One major advantage of quantitative methods such as survey research, as you may recall from Chapter 2, is that they enable researchers to describe large amounts of data because they can be represented by and condensed into numbers. In order to condense your completed surveys into analysable numbers, you’ll first need to create a codebook. A codebook is a document that outlines how a survey researcher has translated her or his data from words into numbers.

A sample of how a codebook might look can be found below. As you’ll see in the table, a short variable name is given to each question. This shortened name comes in handy when entering data into a computer program for analysis.

Table 7.1

Codebook Example

    Variable                Description
    Respondent ID (ID)      Unique identifier for respondent
    Age (AGE)               Age of respondent
    Gender (GENDER)         Gender of respondent
    Platform (PLAT)         Preferred social media platform

In a codebook, numerical values may be assigned to represent different categories of the “Gender” variable. Here’s an example of how numerical values might be assigned to the “Gender” variable:

Table 7.2

Numerical Values in Codebook

    Gender (GENDER)    Description
    1                  Female
    2                  Male
    3                  Other
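Applying a codebook is essentially a lookup from text to numbers. Here is a minimal Python sketch, assuming raw responses arrive as the strings shown in the codebook example above:

```python
# Numerical values for the "Gender" variable, taken from the codebook above.
GENDER_CODES = {"Female": 1, "Male": 2, "Other": 3}

def code_responses(raw_responses, codebook):
    """Translate textual answers into their numerical codes."""
    return [codebook[answer] for answer in raw_responses]

raw = ["Female", "Other", "Male", "Female"]
print(code_responses(raw, GENDER_CODES))  # [1, 3, 2, 1]
```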

If you’ve administered your questionnaire the old-fashioned way, via snail mail, the next task after creating your codebook is data entry. If you’ve utilised an online tool such as SurveyMonkey to administer your survey, here’s some good news—most online survey tools come with the capability of importing survey results directly into a data analysis program.

For those who will be conducting manual data entry, there probably isn’t much we can say about this task that will make you want to perform it other than pointing out the reward of having a database of your very own analysable data. We won’t get into too many of the details of data entry, but we will mention one program that survey researchers often use to analyse data once it has been entered: SPSS, or the Statistical Package for the Social Sciences (http://www.spss.com).

SPSS is a statistical analysis computer program designed to analyse just the sort of data quantitative survey researchers collect. It can perform everything from very basic descriptive statistical analysis to more complex inferential statistical analysis. SPSS is touted by many for being highly accessible and relatively easy to navigate (with practice).

Identifying Patterns

Data analysis is about identifying, describing, and explaining patterns. Univariate analysis is the most basic form of analysis that quantitative researchers conduct. In this form, researchers describe patterns across just one variable. Univariate analysis includes frequency distributions and measures of central tendency. A frequency distribution is a way of summarising the distribution of responses on a single survey question.

Here’s an example of a frequency distribution for the “Daily Social Media Use” question from the “Social Media Use Survey”:

Table 7.3

Daily Social Media Use Frequency Distribution

    Daily Use (DAILY)       Frequency (N)
    Less than one hour      25
    1-2 hours               45
    2-3 hours               30
    3-4 hours               15
    More than four hours    10

This data shows us that 1-2 hours is the most common response for those who were surveyed.
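A frequency distribution like this one can be computed directly from a list of raw responses. Below is a small Python sketch using the standard library’s Counter; the response list is constructed here to match the counts in the table above:

```python
from collections import Counter

# Raw responses to the "Daily Social Media Use" question, expanded from the
# frequencies shown in the table above.
responses = (["Less than one hour"] * 25 + ["1-2 hours"] * 45
             + ["2-3 hours"] * 30 + ["3-4 hours"] * 15
             + ["More than four hours"] * 10)

# Counter tallies how often each response category appears.
distribution = Counter(responses)
for category, n in distribution.most_common():
    print(f"{category}: {n}")

# The most common response for this nominal variable:
print(distribution.most_common(1)[0][0])  # 1-2 hours
```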

Another form of univariate analysis that survey researchers can conduct on single variables is measures of central tendency. Measures of central tendency tell us what the most common, or average, response is on a question.

There are three kinds of measures of central tendency: modes, medians, and means. Mode refers to the most common response given to a question. Modes are most appropriate for nominal-level variables. A median is the middle point in a distribution of responses. Median is the appropriate measure of central tendency for ordinal-level variables. Finally, the measure of central tendency used for interval- and ratio-level variables is the mean. To obtain a mean, one must add the value of all responses on a given variable and then divide that sum by the total number of responses.

Let’s consider an example of a communication research study that examines the number of hours individuals spend on social media per day:

Data: 1, 2, 3, 4, 5, 5, 6, 7, 8, 10

The mode is the value that appears most frequently in a dataset. In this case, the number 5 appears twice, which is more frequent than any other value. Therefore, the mode for this dataset is 5 hours.

The median is the middle value when data is arranged in ascending order. If there is an even number of values, the median is the average of the two middle values. Arranging the data in ascending order: 1, 2, 3, 4, 5, 5, 6, 7, 8, 10. The middle values are 5 and 6, so the median is (5 + 6) / 2 = 5.5 hours.

The mean is the average of all values in the dataset. Adding up all the values gives 1 + 2 + 3 + 4 + 5 + 5 + 6 + 7 + 8 + 10 = 51, and dividing by the total number of values gives 51 / 10 = 5.1 hours.

In this communication research example, the mode is 5 hours (as it appears most frequently), the median is 5.5 hours (the middle value of the sorted data), and the mean is 5.1 hours (the average of all values). These measures provide insights into the central tendency of the data distribution and help researchers analyse and interpret communication behaviour patterns.

The sample size, often denoted as N, would be 10 for this specific dataset.

Bivariate analysis allows us to assess covariation among two variables. This means we can find out whether changes in one variable occur together with changes in another. If two variables do not co-vary, they are said to have independence. This means simply that there is no relationship between the two variables in question. To learn whether a relationship exists between two variables, a researcher may cross-tabulate the two variables and present their relationship in a contingency table. A contingency table shows how variation on one variable may be contingent on variation on the other. Let’s take a look at a contingency table.

Table 7.4

Contingency Table Example

    Age             Less than 1 hour    1-2 hours    2-3 hours    3-4 hours
    18-24           25                  35           45           10
    25-34           15                  35           40           5
    35-44           25                  45           20           5
    45 and above    40                  30           10           5

In this example, the rows represent different age groups (18-24, 25-34, 35-44, and 45 and above), and the columns represent different ranges of daily social media use (Less than 1 hour, 1-2 hours, 2-3 hours, and 3-4 hours). The numbers in the cells indicate the count of respondents falling into each combination of age group and daily social media use category. This contingency table provides an organised way to visualise how social media use is distributed across different age groups.
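Cross-tabulation itself is simple counting: for each respondent, increment the cell at the intersection of their two variable values. A minimal Python sketch, using a handful of hypothetical (age group, daily use) records:

```python
# Hypothetical respondent records: (age group, daily social media use) pairs.
records = [("18-24", "1-2 hours"), ("25-34", "2-3 hours"),
           ("18-24", "Less than 1 hour"), ("35-44", "1-2 hours"),
           ("25-34", "2-3 hours")]

# Build a contingency table as a nested dictionary: table[age][use] -> count.
table = {}
for age, use in records:
    table.setdefault(age, {}).setdefault(use, 0)
    table[age][use] += 1

# The count of 25-34 year olds reporting 2-3 hours of daily use:
print(table["25-34"]["2-3 hours"])  # 2
```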

Researchers also sometimes collapse response categories on items such as this in order to make it easier to read results in a table. For example, to simplify this table you could use two age groups instead: 18-34 year olds and 35 and above.

Researchers interested in simultaneously analysing relationships among more than two variables conduct multivariate analysis. We won’t go into detail about how to conduct multivariate analysis of quantitative survey items here, but it is connected to the discussion of statistical significance and p-values in Chapter 3.

Below is a sample of the work SPSS might do to calculate such numbers.

Figure 7.2

SPSS Example

image

In this example, a multivariate regression model includes three independent variables: “Age,” “Education Level,” and “Social Media Use.” The regression coefficients for each independent variable and the intercept (β0) are provided. The model summary includes the R-squared value, adjusted R-squared value, and the standard error.

The p-value for “Social Media Use” is 0.032, indicating that there is a statistically significant relationship between Social Media Use and the dependent variable (e.g., Happiness), even after accounting for the effects of the other independent variables.

Please note that this example is simplified and does not represent actual data or analysis. Multivariate regression analysis would typically be performed using statistical software and actual data.
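As noted above, real multivariate regression is typically run in statistical software such as SPSS. For illustration only, here is a minimal ordinary-least-squares sketch in Python using simulated data; the variable names and coefficient values are entirely made up for this example:

```python
import numpy as np

# Simulate 100 respondents with three independent variables.
rng = np.random.default_rng(0)
n = 100
age = rng.uniform(18, 65, n)
education = rng.integers(1, 5, n).astype(float)
social_media = rng.uniform(0, 6, n)

# Simulated dependent variable ("Happiness") built from known coefficients
# plus random noise, so the regression has something real to recover.
happiness = (2.0 + 0.01 * age + 0.5 * education
             - 0.3 * social_media + rng.normal(0, 0.5, n))

# Design matrix with an intercept column; solve for the coefficients
# by ordinary least squares.
X = np.column_stack([np.ones(n), age, education, social_media])
beta, *_ = np.linalg.lstsq(X, happiness, rcond=None)
print(beta)  # estimated [intercept, age, education, social media] effects
```

With enough respondents, the estimated coefficients land close to the true values used in the simulation; a full analysis would also report standard errors, p-values, and R-squared, as in the SPSS output above.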

Reflection Question

After learning about various types of surveys, their administration methods, and the design considerations involved in crafting effective survey questions, reflect on a potential research topic or area where surveys could play a crucial role. Consider the type of survey that might best suit your research objectives, the administration method that aligns with your target population, and the specific design considerations you’d need to keep in mind to ensure the validity and reliability of your survey data. How might the use of surveys in your chosen area of study contribute to a better understanding of the subject matter? Document your thoughts in a 200–300-word post.

Key Chapter Takeaways

  • Surveys are systematic methods used to collect data, opinions, attitudes, or behaviours from a targeted group of individuals. They involve structured questionnaires, interviews, or online forms and offer a powerful tool for researchers to gather information on a wide range of topics in various disciplines.
  • Surveys can be categorised based on time (cross-sectional and longitudinal) and administration methods (online, paper-and-pencil, telephone, face-to-face, mixed-mode). Cross-sectional surveys provide a snapshot at a specific point in time, while longitudinal surveys track changes over time. Different administration methods offer unique advantages and drawbacks, influencing factors such as accessibility, engagement, and data quality.
  • Surveys are extensively used in communication research; they offer valuable insights into various communication-related phenomena and contribute to advancing communication theory and practice.
  • Researchers should craft clear and concise survey questions by avoiding complex phrasing that can confuse respondents. Questions should be relevant to the respondents’ knowledge and experiences. Double negatives, cultural insensitivity, and ambiguous terms should be avoided. Leading and prestige bias questions should also be minimised to ensure unbiased responses.
  • Response options for surveys should be clear, mutually exclusive, exhaustive, and balanced. Closed-ended questions with predetermined choices are preferred over open-ended questions as they are easier to analyse and quantify.
  • Researchers aim to condense completed surveys into analysable data to identify patterns. Univariate analysis, which includes frequency distributions and measures of central tendency, helps describe patterns across single variables. Bivariate analysis assesses covariation between two variables using contingency tables. For more complex relationships involving multiple variables, multivariate analysis, such as regression, is conducted to identify statistically significant relationships.

Key Terms

Survey Research: A quantitative data-collection method where a researcher presents predetermined questions to an entire group, sample, or individuals to gather information.

Cross-Sectional Survey: A survey conducted at a single point in time, providing a snapshot of respondents’ circumstances and insights into that specific moment.

Longitudinal Survey: A survey that spans an extended period, allowing researchers to observe changes or trends over time.

Trend Survey: A type of longitudinal survey focused on tracking shifts in people’s inclinations and behaviours over time.

Panel Survey: A longitudinal survey involving consistent participation from the same individuals across multiple administrations.

Cohort Survey: A longitudinal survey where researchers regularly collect data from a specific group of individuals of interest.

Retrospective Survey: A survey similar to longitudinal studies, examining changes over time, but administered only once. Participants report past events, behaviours, beliefs, or experiences.

Double-Barrelled Questions: Questions that combine multiple queries into a single question, potentially leading to confusion or biased responses.

Leading Question: A leading question is a type of survey question that unintentionally or intentionally guides respondents towards a particular answer by suggesting a certain perspective or bias. Leading questions can influence participants’ responses and introduce bias into survey data, potentially distorting the accuracy of the findings.

Prestige Bias: Prestige bias occurs when respondents feel compelled to provide answers that align with socially desirable or prestigious beliefs or behaviours. This bias can lead to inaccurate survey responses as individuals may be motivated to present themselves in a favourable light, rather than expressing their genuine thoughts or experiences.

Filter/Contingency Questions: Questions designed to identify a subset of survey respondents for additional, relevant questions.

Closed-Ended Questions: Questions where respondents choose from a limited set of predetermined response options.

Open-Ended Questions: Questions that allow respondents to provide free-form, open responses.

Likert Scale: A Likert scale is a commonly used survey response format that measures the strength of respondents’ attitudes or opinions towards a statement or question. The Likert scale provides a structured way to quantify subjective perceptions and gather valuable data for analysis.

Mutually Exclusive Response Categories: Response options that do not overlap, ensuring clear and distinct choices.

Exhaustive Response Categories: A set of response options that covers all possible answers, leaving no gaps.

Social Desirability: The tendency for respondents to answer questions in a way that portrays them favourably or conforms to social norms.

Fence Sitters: In survey research, fence sitters refer to respondents who consistently select neutral or middle-of-the-road response options, avoiding extreme opinions or positions.

Floaters: Floaters are survey respondents who provide answers to questions even when they lack knowledge, understanding, or a genuine opinion about the topic. Floaters may choose random or arbitrary responses without considering the question’s content, potentially introducing noise and inaccuracies into survey data. Floaters’ responses may not genuinely reflect their true perspectives, leading to unreliable or distorted findings.

Response Rate: The percentage of completed questionnaires returned, calculated by dividing the number of completed surveys by the original distribution.

Codebook: A document detailing how a survey researcher translates textual data into numerical codes for analysis.

Mode: The most frequently occurring response in a dataset, commonly used for nominal-level variables.

Median: The middle value in a distribution of responses, useful for ordinal-level variables.

Mean: The average value in a distribution of responses, a measure of central tendency for interval and ratio-level variables.

Contingency Table: A tabular representation illustrating how variations in one variable may relate to variations in another variable.

Multivariate Regression: Multivariate regression is a statistical analysis technique used to model the relationship between a dependent variable and multiple independent variables.

Further Reading and Resources

Elon University Poll. (2014, September 26). 7 tips for good survey questions [Video]. YouTube. https://www.youtube.com/watch?v=Iq_fhTuY1hw

Smith, S. (2013, January 14). Common mistakes in survey questions: Survey Questions 101: Do You Make any of These 7 Question Writing Mistakes? http://www.qualtrics.com/blog/writing-survey-questions/

Tencer, D. (2013, August 21). The impact of leading questions: Canada Wireless Survey: 8 in 10 oppose government’s rules in telecom-sponsored survey. Huffington Post. http://www.huffingtonpost.ca/2013/08/21/wireless-survey-canada-verizon_n_3790792.html