
11 Understanding Polls


Chapter Learning Objectives

  • Recognise why the 2016 U.S. election became an important case study, drawing public attention to the need for pollsters and the media to rethink how elections are polled and reported.
  • Outline the history of the poll.
  • Identify the types of polls common during an election and the goals they serve.
  • Evaluate the pros and cons of the different polling methods for collecting data.
  • Recognise the sorts of questions journalists should be asking of polls.
  • Understand the key aspects media professionals and citizens should consider when interpreting poll results.

Introduction

The 2016 U.S. presidential election profoundly impacted media professionals and pollsters. Donald Trump’s unexpected victory prompted a reassessment of polling methods and media reporting. This event highlighted the challenges of predicting elections and conveying public sentiment accurately.

Traditional polling methods struggled to capture certain demographics, like white working-class voters who strongly supported Trump. “Shy Trump voters” and social bias added to polling inaccuracies. Unpredictable voter turnout and late-deciding voters further complicated the picture.

Media professionals also faced challenges interpreting the election polls, often favouring sensationalism over substantive discussions.

This chapter explores these complexities: the factors contributing to polling inaccuracies, the media’s role, and the history and types of polling. It also aims to guide you, as a media professional, through the questions you should ask of polls and what it means to interpret results responsibly. This should equip you to provide accurate, informed reporting that enhances public understanding and discourse.

The 2016 Election: A Wake-Up Call for Media Professionals and Pollsters Alike

In the lead-up to the 2016 U.S. presidential election, the polling industry faced an unexpected and significant challenge: accurately predicting the outcome. The surprise victory of Donald Trump over Hillary Clinton left many pollsters and political pundits struggling to explain how the victory occurred, since it did not match expectations. Extensive work has since examined why the polls failed to forecast Trump’s success. A few of the key factors identified are explored below.

One factor that contributed to the polling inaccuracies concerned sampling and accurate representation within samples. It has been suggested that traditional pollsters may have struggled to capture the views of white working-class voters, a demographic that strongly supported Trump (Chalabi, 2016).

The phenomenon of “shy Trump voters” and/or social desirability bias may also have played a role. Some voters who supported Trump may have been hesitant to disclose their preference to pollsters due to concerns about being judged or facing social disapproval, or they may have been less likely to respond to surveys at all (Mercer, Deane & McGeeney, 2016).

The unpredictability of voter turnout and the influence of late-deciding voters also played a role (AAPOR, 2016).

Media professionals, like pollsters, faced challenges in interpreting and reporting on the 2016 U.S. presidential election polls. While making a blanket statement about all media professionals is difficult, some have acknowledged the limitations and potential misinterpretation of polling data. Here are a few instances of sources discussing the role of media professionals and their handling of polls during the 2016 election.

Silver (2016) argued that the scenario was too awful for reporters to imagine, so they failed to look deeply at the possibility of a Trump victory and what was driving it.

Watts and Rothschild (2017) suggest media professionals struggled to accurately capture the intricate dynamics and implications presented by the polling data. They argue that media professionals often gravitated towards sensational aspects, controversies, and personalities instead of focusing on substantive policy issues.

It is crucial to recognise that the media landscape is diverse, with media professionals and news outlets adopting varying approaches and interpretations. While some media professionals anticipated the possibility of a Trump victory, others completely underestimated it, making accurate reporting a challenge.

The examples discussed so far shed light on broader observations and discussions concerning the media’s role in comprehending and communicating the intricacies of polling data during the 2016 election. This is why some time is devoted to understanding polls in this textbook.

The History of the Poll

The first known public opinion poll is commonly attributed to the Harrisburg Pennsylvanian newspaper in 1824. The newspaper conducted a straw poll to gauge public opinion on the presidential election between John Quincy Adams and Andrew Jackson (Crotty, 2014). This poll, however, was not conducted using the scientific methodologies and sampling techniques that we associate with modern polling.

The concept of modern-day polling began to emerge in the early 20th century. The first scientific poll conducted by professional pollsters using modern techniques is often attributed to George Gallup, an American statistician (Gallup, 1972).

In 1932, George Gallup founded the American Institute of Public Opinion, which later became known as the Gallup Organization. Gallup conducted a poll during the 1936 U.S. presidential election between Franklin D. Roosevelt and Alf Landon. This poll accurately predicted Roosevelt’s victory, even though other surveys and experts predicted a Landon win. This success pushed Gallup’s polling methods into the spotlight and established his reputation as a leading figure in the field of public opinion research (Gallup, 1972).

In the early 1940s, Gallup conducted one of the earliest significant public opinion polls in Canada on behalf of the Liberal Party. This poll aimed to assess the public’s stance on conscription during World War II (Scholars Portal, n.d.).

Since the 1980s, as polling gained more prominence and reached a broader audience, several other major polling firms in Canada, including Decima Research, Environics, Angus Reid, and Ipsos Canada, have become increasingly active in conducting polls (Scholars Portal, n.d.).

Purpose and Types of Polls

Polls serve various purposes in the field of research and media studies. They are primarily conducted to measure public opinion on a wide range of topics, such as political preferences, social issues, consumer behaviour, and much more.

While “public opinion” is often used to suggest a unified perspective, it is important to recognise that individuals within the public hold a range of diverse opinions on any given issue.

Furthermore, it’s worth noting that specific issues typically capture the attention and interest of only certain segments of the population (Rand, 1993).

Polls can also be used to predict election outcomes, track changes in public sentiment over time, inform policy decisions, and provide insights for market research. According to Polyas (2023), there are three types of polls commonly used in an election, each of which is briefly outlined below.

Benchmark polls: These polls are conducted at the outset of a campaign to provide candidates with an initial gauge of their popularity among the electorate. If candidates receive consistently low levels of support, they may reconsider their decision to run for election.

Brushfire polls: Throughout the campaign, candidates rely on these polls to track any progress they are making. Such polls help identify areas where candidates may be facing challenges within specific demographics, allowing them to tailor their strategies and improve their overall performance in the election.

Tracking polls: These polls are conducted periodically, targeting the same group of individuals. Their purpose is to measure shifts in public opinion over time rather than focusing solely on a candidate’s popularity level. By capturing general trends, tracking polls provide insights into the changing sentiments of the electorate.

Data Collection and Polls

When interpreting and reporting on polls, media professionals must carefully weigh the pros and cons of each data-collection method to assess its reliability and potential biases. Understanding the target population and the specific research objectives helps determine whether the most appropriate polling method was used in a given situation. By being aware of the strengths and limitations of different methods, media professionals can navigate the complexities of interpreting poll results and provide the public with a more comprehensive understanding of public opinion.

According to the CBC Poll Tracker (Grenier, 2021), the three most common polling methods in Canada are telephone, IVR (Interactive Voice Response), and internet. The benefits and challenges of each are listed below.

Telephone:

Telephone polls are conducted by live operators interviewing randomly dialled respondents. This is one of the oldest forms of polling. Its pros and cons are listed below:

Pros:

  • Wide reach: Telephone interviews allow for a broad reach, as they can target both landline and mobile phone users. This increases the likelihood of reaching a diverse range of respondents.
  • Personal interaction: Telephone interviews provide a level of personal interaction between the interviewer and the respondent. This can help build rapport and encourage more in-depth responses.
  • Probing and clarification: Interviewers have the ability to probe and ask follow-up questions, allowing for better clarification of responses. This can lead to richer and more nuanced data.
  • Flexibility: Telephone interviews offer flexibility in terms of timing. Interviewers can schedule calls at a time that is convenient for the respondent, increasing the chances of participation.

Cons:

  • Declining response rates: Response rates for telephone surveys have been declining over the years. People are more hesitant to answer calls from unknown numbers or participate in lengthy interviews, leading to potential non-response bias.
  • Exclusion of certain populations: Not everyone has access to telephones, particularly specific demographic groups such as low-income individuals or those without landlines. This can result in the underrepresentation of these groups in the survey.
  • Potential interviewer bias: Interviewers can unintentionally introduce bias through tone, inflection, or other subtle cues. This can influence respondents’ answers and compromise the objectivity of the survey.
  • Costly and time-consuming: Telephone surveys can be costly and time-consuming to conduct, especially if a large sample size is required. Costs can include hiring and training interviewers, phone charges, and data collection expenses.

IVR (Interactive Voice Response):

IVR surveys are automated: a recorded voice asks the questions, and respondents answer using their phone’s keypad or by voice.

Pros:

  • Cost-effective: IVR polling reduces labour costs as it doesn’t require live interviewers.
  • Large-scale reach: It quickly collects data from a large number of respondents, making it suitable for broad-scale opinion tracking.
  • Anonymity and privacy: Respondents can express views without fear of judgement or repercussions.
  • Standardised delivery: Consistent question delivery minimises bias and enhances result reliability.

Cons:

  • Limited question complexity: IVR is better suited for straightforward, closed-ended questions.
  • Sample bias: Certain demographic groups may be underrepresented due to phone access limitations.
  • Lack of interviewer interaction: Misses opportunities for detailed insights and clarification.
  • Limited reach to specific populations: Certain groups may be excluded due to language or accessibility barriers.

Internet:

Polls conducted via the Internet. In most cases, respondents come from a panel of Canadians recruited in various ways, including over the telephone.

Pros:

  • Wide accessibility: Internet polls can reach a large and diverse audience since many people have access to the internet worldwide. It allows for the inclusion of individuals who may not have access to traditional polling methods.
  • Cost-effective: Internet polls can be cost-effective compared to other methods. They eliminate the need for paper-based surveys, postage, and manual data entry, reducing costs associated with data collection.
  • Quick data collection: Internet polls can rapidly collect responses due to the ease and speed of online survey distribution. Results can be obtained in real-time or within a short period, providing timely insights.

Cons:

  • Sample bias: Internet polls may suffer from sample bias, as they are conducted online. The respondents who participate may not represent the broader population accurately, as certain demographic groups may be overrepresented or underrepresented.
  • Self-selection bias: Internet polls rely on voluntary participation, which can lead to self-selection bias. Those who choose to participate may have different characteristics or opinions compared to those who opt out, impacting the representativeness of the results.
  • Limited internet access: Internet polls exclude individuals who do not have reliable internet access or those who are not comfortable using online platforms. This can result in the exclusion of certain demographics, potentially skewing the results.
  • Potential for manipulation: Internet polls are susceptible to manipulation and fraudulent responses. Multiple submissions from the same person, automated responses, or strategic voting can compromise the validity and reliability of the data.

There is also a form of hybrid polling, which combines a mixture of these methods to overcome the challenges of each. It’s important to consider these pros and cons when evaluating a poll. Other factors, such as the target population, research objectives, and available resources, should also be taken into account.

Additionally, weighting in polls is a statistical technique used to adjust the results so they better reflect the overall population. When a sample doesn’t perfectly match the demographic characteristics of the larger population (e.g., age, gender, race, or education), pollsters assign different “weights” to responses based on underrepresented or overrepresented groups. For instance, if younger people are underrepresented in the sample, their responses might be given more weight to accurately reflect their presence in the population. This process ensures that the poll results are more representative of the broader population, improving the accuracy of the findings.
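The re-weighting described above can be sketched in a few lines of Python. All figures here are hypothetical, chosen only to illustrate the mechanics of the adjustment:

```python
# Hypothetical example: a sample under-represents younger voters.
sample_share = {"18-34": 0.20, "35+": 0.80}      # shares observed in the sample
population_share = {"18-34": 0.30, "35+": 0.70}  # known shares in the population

# Weight for each group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in sample_share}
# Younger respondents get a weight of 1.5; older respondents, 0.875.

# Hypothetical support for a candidate within each group
support = {"18-34": 0.60, "35+": 0.45}

# Unweighted estimate simply mirrors the skewed sample composition
unweighted = sum(sample_share[g] * support[g] for g in sample_share)

# Weighted estimate re-balances each group to its population share
weighted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

print(f"Unweighted: {unweighted:.1%}, Weighted: {weighted:.1%}")
# Unweighted: 48.0%, Weighted: 49.5%
```

Because younger voters (who favour the candidate more in this invented example) were under-sampled, the unweighted figure understates support; weighting restores each group to its true share of the population.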

Finally, the confidence interval and margin of error are related concepts used in statistics to assess the accuracy of estimates. A confidence interval is a range of values that estimates the true value of a population parameter with a certain level of confidence, typically 95%. It reflects the precision of the estimate, considering sampling variability. The margin of error, on the other hand, is a component of the confidence interval and represents the maximum expected difference between the sample result and the true population value. For example, if a poll result is 50% with a margin of error of ±3%, the confidence interval would range from 47% to 53%, indicating where the true value is likely to fall.
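As a rough illustration, the standard formula for a 95% margin of error on a sample proportion, MOE = 1.96 × √(p(1 − p)/n), can be computed as follows (the poll figures are hypothetical):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a simple random sample proportion (z=1.96 for 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents in which 50% support a candidate
p, n = 0.50, 1000
moe = margin_of_error(p, n)
low, high = p - moe, p + moe

print(f"Result: {p:.0%} ± {moe:.1%} (95% CI: {low:.1%} to {high:.1%})")
# Result: 50% ± 3.1% (95% CI: 46.9% to 53.1%)
```

This is why national polls of roughly 1,000 people commonly report a margin of error near ±3 percentage points; shrinking it to ±1 point would require a sample several times larger.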

Key Questions to Be Asked of Polls

The American Association for the Advancement of Science’s SciLine (2020) provides some key questions which have been adapted and summarised below:

      • Is this survey truly a legitimate poll? Some campaigns and advocacy groups engage in “push polls” that are not genuine polls at all. Instead of aiming to measure people’s opinions, these polls actively seek to manipulate and change people’s opinions about certain issues or individuals. One indicator of a push poll is the lack of any demographic information being collected.
      • Who sponsored and conducted the poll? If you are not an expert, it is advisable to consult professionals who can assess the reputation of the sponsor. It is important to mention the name of the sponsor in the story to hold them accountable for the work they are supporting. In Canada, Léger, Nanos Research, Ipsos, and Janet Brown have been recognised for their accuracy in predictions (338, 2020).
      • Who was the target population? Defining the target population is crucial because it ensures the poll’s findings are applicable and relevant to the specific group of interest, and it varies with the research objective or topic. For example, the target population of a political poll might be likely voters in a particular region, while the target population of a market research survey could be consumers who have purchased a certain product. Determining the target population involves identifying the characteristics or criteria that define the group of interest: demographic factors such as age, gender, location, education level, or occupation, as well as factors such as political affiliation, consumer behaviour, or specific interests. Pollsters should carefully define and describe the target population to ensure transparency and enable readers of the poll results to understand the scope and applicability of the findings, and to determine the poll’s relevance to specific subsets of the population.
      • How many individuals were sampled and where? The location of the sample provides important context, and larger sample sizes generally contribute to more reliable results. A larger sample size helps reduce sampling errors and provides a more precise representation of the population as a whole.
      • How were the interviews conducted? The methodology used to collect the interviews can indicate the representativeness of the sample. For example, were the interviews conducted through landline and cellular telephones, or were they conducted online and via telephone?
      • When was the poll conducted? The date of the poll is significant for interpreting the results, particularly in fast-changing environments such as politics. For example, specifying that the interviews were conducted from September 15 to November 8, 2019, would provide the necessary time frame.
      • What was the margin of sampling error? Including information about the margin of sampling error is crucial as it represents the uncertainty and range of plausible results. For example, stating that the poll had a margin of sampling error of +/- 6.0 percentage points means that the true results could fall within six percentage points in either direction of the reported results. That is a wide margin; pollsters often aim for +/- 3%.
      • Was there any weighting applied? If so, what factors were weighted? For instance, if the results were weighted, it should be mentioned that the weighting aimed to ensure that responses accurately reflected the characteristics of the population in terms of factors such as age, sex, race, education, and phone use. Weighting can also account for non-response by adjusting the data for the characteristics of non-respondents. This helps to mitigate potential biases that may arise from differential response rates.
      • What was the response rate? The response rate of a poll refers to the percentage of individuals who participate in the survey out of the total number of individuals who were contacted or eligible to participate. The response rate is an important factor in evaluating the quality and reliability of a poll for the following reasons: a high response rate increases the likelihood that the sample is representative of the target population; a low response rate can introduce sample bias and undermine the validity of the poll’s findings; the response rate affects the generalisability of the poll’s results (higher response rates increase confidence in extrapolating the findings to the larger population); and higher response rates generally lead to more precise estimates and narrower confidence intervals, as the larger effective sample size reduces sampling variability.
      • What questions were asked and how might they influence the poll? Some things to consider include:

-The order in which questions are asked can prime respondents and influence their subsequent responses. Early questions can shape respondents’ attitudes or perceptions, leading to biased or skewed responses in later questions. For example, asking negative or positive questions before a specific policy question can impact how respondents perceive and respond to that policy.

-The way a question is framed or phrased can influence how respondents interpret and respond to it. Even slight changes in wording can elicit different responses. Biased or leading language can introduce a form of response bias, where respondents are subtly directed toward a particular answer or viewpoint.

-Ambiguous or confusing questions can lead to inaccurate or inconsistent responses. It is essential to use clear and concise language to ensure that respondents understand the question correctly. Complex or jargon-laden questions may cause confusion, leading to unreliable data.

To address these issues, pollsters and researchers carefully design and pre-test survey questions to ensure clarity, neutrality, and accuracy. They often employ established best practices in questionnaire design, such as using balanced response options, randomising question order, and avoiding leading or biased language.

How to Interpret Poll Results

In the world of journalism and broadcasting, interpreting poll results requires careful attention to various factors to ensure accurate and responsible reporting. Media professionals play a crucial role in presenting poll data in a manner that is informative and unbiased. Here are some key aspects that should be considered when interpreting poll results:

      • Provide Context: Contextualise poll results by comparing them to previous polls or relevant benchmarks. Analyse trends over time and consider the broader social, political, or economic context to understand the significance of the findings. Also, avoid placing too much emphasis on any one poll; comparing several similar polls makes the most sense.
      • Be Transparent: Clearly explain the methodology used in the poll, including the sampling technique, sample size, and any weighting or adjustments applied. Transparency helps readers evaluate the reliability and generalizability of the results.
      • Understand Margin of Error and Confidence Intervals: Communicate the margin of error associated with the poll results to provide readers with a realistic understanding of the potential variability in the data. Emphasise the margin of error when presenting findings to avoid overgeneralization. An example of poor reporting provided by The American Association for the Advancement of Science’s SciLine (2020) was: in mid-January 2020, certain publications reported that a poll indicated a “majority” or “more than half” of Americans supported the President’s impeachment, conviction, and removal from office. The poll results showed that 51% of surveyed U.S. adults answered affirmatively to that question. However, it is essential to consider the margin of sampling error, which was +/- 3.4 percentage points. This margin represents the range within which the true results for the population could plausibly fall, accounting for the inherent uncertainty of surveying a sample rather than the entire population. With this level of precision, we cannot definitively conclude that over half of U.S. adults shared this opinion. Taking into account the sampling error, the most plausible range of values is between 47.6% and 54.4% (51 minus 3.4 and 51 plus 3.4). Considering this range, it is plausible that only around 48% of all U.S. adults favour impeachment and removal. Therefore, we cannot conclude that there is a “majority” or “more than half” in support of this position.
      • Avoid Oversimplification: Be cautious when reporting complex poll results and avoid oversimplifying the findings. Clearly explain the nuances and limitations of the data to prevent misinterpretation.
      • Beware of Outliers: Be sceptical of poll results that deviate significantly from other reputable polls or established trends. Investigate potential methodological differences or sample anomalies that could explain the outlier results.
      • Use Appropriate Visuals: When presenting poll data visually, use clear and accurate representations such as charts or graphs. Ensure the visuals accurately reflect the data and avoid exaggerating or distorting the results. Examples include heat maps, bar chart races, and column charts (stacked or otherwise) (Spure, 2020). In addition, be careful: polls may show party leaders deadlocked in the popular vote, yet a specific party may be favoured to win more seats, which is ultimately what decides the outcome. The popular vote for a specific leader does not translate directly into the seats their party secures.
      • Seek Expert Insights: Consult experts in polling or survey research to gain additional perspectives on interpreting the results. Experts can provide valuable insights and help verify the accuracy of the interpretation.
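The “majority” pitfall in the SciLine example above reduces to a simple check: a majority claim is only safe when the entire confidence interval sits above 50%. A minimal sketch:

```python
def majority_supported(p, moe):
    """True only if the whole confidence interval (p - moe, p + moe) sits above 50%."""
    return p - moe > 0.50

# The figures from the SciLine example: 51% support, margin of error ±3.4 points.
p, moe = 0.51, 0.034
print(p - moe, p + moe)            # interval runs from 0.476 to 0.544
print(majority_supported(p, moe))  # False: "majority" is not a safe claim

# A hypothetical result where the claim would hold: 56% ± 3.4 points.
print(majority_supported(0.56, 0.034))  # True: even the low end, 52.6%, exceeds half
```

The same logic applies to any threshold claim, such as whether one candidate truly “leads” another: if the intervals overlap, the honest report is a statistical tie.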

By following these recommendations, media professionals can provide well-rounded and accurate reporting on poll results, enabling readers to better understand and interpret the findings. Furthermore, by paying attention to these factors, media professionals can ensure their interpretation of poll results is accurate, nuanced, and responsible. This contributes to a more informed public discourse and helps the audience gain a clearer understanding of public opinion.

Reflection Question

How might the challenges faced by pollsters and media professionals in accurately interpreting and reporting on the 2016 election polls influence the public’s trust in future poll results and media coverage of elections? Document your thoughts in a 200–300-word post.

Key Chapter Takeaways

      • The media’s role in comprehending and communicating the intricacies of the polling data during the 2016 election is a great example of why it is so important for journalists to know how to read polls. Mistakes were made by both pollsters and media professionals who underestimated Trump’s chances of electoral success, which may have affected voters’ motivation to turn out.
      • Candidates rely on different sorts of polls throughout the campaign to help them formulate their responses to a potential electorate.
      • There are three common forms of polling in Canada (telephone, IVR, and internet). Each brings its own advantages and disadvantages. Being aware of these will make media professionals better able to reflect on the accuracy of the poll.
      • Media professionals have some key questions they must ask of all polls before reporting on them: who conducted the poll, when and how responses were collected, what the response rate and sample composition were, how the poll was weighted, and what types of questions were asked, in what order, and how this might have influenced respondents.
      • Media professionals play a crucial role in presenting poll data in a manner that is informative and unbiased. This includes offering context and transparency, avoiding oversimplification, understanding how best to present the margin of error and confidence intervals, being aware of results that stand out as atypical, incorporating appropriate visuals, and seeking expert insights.

Key Terms

Social Desirability Bias: Social desirability bias is a phenomenon in which individuals tend to provide responses they believe are socially acceptable instead of expressing their genuine opinions. This bias frequently manifests when addressing challenging topics like abortion, race, sexual orientation, and religion.

Public Opinion Poll: A survey conducted to measure public opinion on a wide range of topics, such as political preferences, social issues, consumer behaviour and other subjects.

Benchmark Polls: Polls conducted at the outset of a campaign to provide candidates with an initial gauge of their popularity among the electorate.

Brushfire Polls: Candidates rely on these polls to track any progress they are making throughout the campaign.

Tracking Polls: These polls are conducted periodically, targeting the same group of individuals.

Telephone Polling: Polls conducted via the telephone with live operators conducting the interviews with randomly dialled respondents.

IVR (Interactive Voice Response) Polling: Automated polls in which a recorded voice asks the questions and respondents answer via keypad or voice.

Internet Polling: The use of the internet to recruit respondents and collect data.

Hybrid Polling: This includes a mixture of different polling methods to overcome the multiple challenges of each method found in conventional polling.

Push Polls: These polls actively seek to manipulate and change people’s opinions about certain issues or individuals.

Target Population: The specific group of people a poll is designed to reach and measure.

Weighting: A statistical technique that adjusts responses so they accurately reflect the characteristics of the population in terms of factors such as age, sex, race, education, and phone use. This helps to mitigate potential biases that may arise from differential response rates.

Confidence Interval: A range of values that estimates the true value of a population parameter, with a specified level of confidence (e.g., 95%). It indicates the likely range within which the true value falls, reflecting the precision and reliability of the estimate.

Response Rates: The response rate of a poll refers to the percentage of individuals who participated in the survey out of the total number of individuals who were contacted or eligible to participate. It matters because if you have a low response rate your poll will be less reliable even with a large sample size.

Margin of Error: This margin represents the range within which the true results for the population could plausibly fall, accounting for the inherent uncertainty of surveying a sample rather than the entire population.

Pre-test Survey Questions: Questions piloted before a survey to ensure clarity, neutrality, and accuracy by respondents.

Outliers: Poll results that deviate significantly from other reputable polls or established trends.

Further Reading and Resources

Chalabi, M. (2016). Why were the election polls so wrong? How Donald Trump defied predictions. The Guardian. https://www.theguardian.com/us-news/2016/nov/09/donald-trump-exit-polls-data-us-election

Kille, L. W. (2015, April 7). Statistical terms used in research studies: A primer for media. https://journalistsresource.org/tip-sheets/research/statistics-for-journalists

Mercer, A., Deane, C., & McGeeney, K. (2016). Why 2016 election polls missed their mark. Pew Research Center. https://www.pewresearch.org/short-reads/2016/11/09/why-2016-election-polls-missed-their-mark

Rand, D. (1993). Canadian Politics (critical approaches). Nelson Canada.

Silver, N. (2016). The media didn’t want to believe Trump could win so they looked the other way. FiveThirtyEight. https://fivethirtyeight.com/features/the-media-didnt-want-to-believe-trump-could-win-so-they-looked-the-other-way/

Watts, D. J., & Rothschild, D. M. (2017). Don’t blame the election on fake news. Blame it on the media. Columbia Journalism Review, 5, 67-84. https://www.cjr.org/analysis/fake-news-media-election-trump.php