Myth: Public polls are generally inaccurate or untrustworthy.

Fact: Most polls accurately reflect public opinion.



Summary:

Polls are divided into two types: preference and opinion. Preference polls are extremely accurate. Opinion polls are accurate if the pollster avoids a number of pitfalls. These include wording the question in such a way as to suggest an answer, or placing the question in an order or context that suggests an answer. Polls must question a true cross-section of the entire group, must have high response rates, and must approach the respondents randomly rather than have the respondents approach them. Studies show the public is usually honest with pollsters, except for matters of race, where respondents tend to portray themselves as less racist than they really are.



Argument

Polls are responsible for one of the more amusing phenomena in political debates. When the polls run in their favor, people say: "Just look at the polls!" But when the polls run in the other direction, they say: "I don’t believe in polls." A truer understanding of polling would eliminate this debating tactic, at least in most circumstances.

Polls come in two types: preference and opinion. And they are quite different animals.

Preference polls

Preference polls ask you which candidate, party or commercial brand you prefer, with the choices usually being stark and easy to identify. Often the answer is based just as much on your behavior as your opinion. For example, you are either going to vote for candidate Bob, or not. You either belong to the Republican Party, or don’t. You either buy Folgers coffee, or don’t. Because there is only one clear answer, a pollster cannot usually trick a different answer out of you with a biased or suggestive question. And preference answers can be checked against results in real life with a high degree of accuracy. Not surprisingly, preference polls tend to be extremely accurate.

Here is an example of how trustworthy they are:

Final poll predictions of the 1996 presidential election (1)

Pollster           Clinton   Dole   Perot
Reuters/Zogby        49%      41%     8%
Hotline              45       36      8
CNN/Gallup           52       41      7
USA Today            52       41      7
NBC/Wall Street      49       37      9
Harris               51       39      9
ABC News             51       39      7
Pew Research         49       36      8
CBS/NY Times         53       35      9
Actual Result        49       41      8
Average Poll         50       38      8
Av. Difference        1        3      0

On average, the combined major polls came within 1.3 percentage points of the actual result!
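The bottom rows of the table can be rechecked with a few lines of arithmetic. This sketch simply copies the nine poll figures from the table above and recomputes the averages and the error against the actual result:

```python
# Poll figures copied from the 1996 final-poll table above.
polls = {
    "Clinton": [49, 45, 52, 52, 49, 51, 51, 49, 53],
    "Dole":    [41, 36, 41, 41, 37, 39, 39, 36, 35],
    "Perot":   [8, 8, 7, 7, 9, 9, 7, 8, 9],
}
actual = {"Clinton": 49, "Dole": 41, "Perot": 8}

diffs = []
for candidate, results in polls.items():
    mean = sum(results) / len(results)          # "Average Poll" row
    diff = abs(mean - actual[candidate])        # "Av. Difference" row
    diffs.append(diff)
    print(f"{candidate}: average poll {mean:.1f}, actual {actual[candidate]}, off by {diff:.1f}")

print(f"Average error across candidates: {sum(diffs) / len(diffs):.1f} points")
```

Run without rounding, the candidate-by-candidate errors are 1.1, 2.7 and 0.0 points, which average out to the 1.3-point figure cited above.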

Opinion polls

Opinion polls are different: they probe your opinions on complicated issues, where a simple answer rarely tells the whole story. Do you approve of abortion? The answer could be yes or no, depending on the circumstance. Are you a liberal? Well, that would depend on the specific issue. Do you support your political party? You might support its principles, yet still oppose its leadership. Do you approve of the president’s decision to invade Iraq? You might approve of it as the best option now, but you might have disapproved of the bungled foreign policy that led to this unwanted option.

Because a question can yield several different answers (and qualifications to answers) even from the same person, how the question is worded in opinion polls often determines the answer. This not only creates room for error, but also for fraud in the hands of an unscrupulous pollster with an agenda.

An example of this was the American military build-up prior to the Gulf War. In August 1990, four polls asked Americans if they supported sending troops to the Gulf. The questions were worded differently, yet entirely reasonably. And these polls found that 63, 66, 78 and 81 percent of all Americans favored such action. (2)

Here are the most common ways that pollsters can influence the answers:

1. The question’s wording. Sometimes rewording the question with a euphemism yields a different result. In 1941, researchers found that 46 percent of the people were in favor of "forbidding" speeches against democracy, while 62 percent were in favor of "not allowing" speeches against democracy. In 1976, the same experiment found that 19 percent were in favor of "forbidding," while 42 percent were in favor of "not allowing." (3)

2. The context or order of the question. In 1974, the same respondents were asked the same question twice: if they wanted to cut, expand, or maintain defense spending. This question was first asked at the beginning of the interview. It was asked again later, after the respondents were asked the same question about education spending. The second question implied a trade-off between education and defense spending, and, curiously enough, 10 percent more people wanted to cut defense spending the second time around. (4)

3. The options provided by the pollster. In 1986, a Gallup poll found that 70 percent of the people supported the death penalty. But when the respondents were given more options to punish murderers, like life in prison without parole, support for the death penalty dropped to 53 percent.

Polls conducted by advocacy groups often resort to these techniques to produce the results they want, so treat their polls with skepticism. Opinion polls conducted by academic groups, the major news media and nationally respected polling organizations are generally much more trustworthy.

But opinion polls have grown significantly more accurate over the past several decades as pollsters have learned from their mistakes. Many of these mistakes are famous:

How polls fail

1. Biased samples. Pollsters must be careful to question a true cross-section of the population. In 1936, the Literary Digest polled magazine subscribers, telephone owners and car owners about their choice for president. Based on these answers, the Literary Digest predicted that Alf Landon would defeat Franklin Roosevelt by a landslide. When Roosevelt won by a landslide, the embarrassed magazine eventually realized that its polling audience was unrepresentative of the general population. In 1936, the Depression was well under way and those people who could afford phones, cars and magazine subscriptions tended to be wealthy, and Republican.

2. Low response rate. To make matters worse, the Literary Digest poll had a low response rate: only 26 percent of the people solicited bothered to participate. This produced a second bias: only the politically passionate, who hated Roosevelt, bothered to respond. To be accurate, a poll must enjoy a high percentage of participation among the people it approaches.

3. Volunteer error. This happens when the public has the ability to approach pollsters, instead of the other way around, thus biasing the sample. For example, in 1996, three network polls showed Pat Buchanan ahead of Bob Dole in the Arizona Primary. Instead, Dole defeated Buchanan. When the pollsters investigated why, they found that Buchanan’s supporters, in large numbers, voluntarily sought out the pollsters, who jotted down their answers.

4. Everything goes wrong. The most famous polling failure of all time was the 1948 presidential election. All the major polls predicted that Thomas Dewey would beat Harry Truman by 5 to 15 percentage points. But instead Truman won by over 4 percentage points. So unexpected was the result that a famous photograph shows a triumphant Truman holding up an errant newspaper headline: "Dewey Defeats Truman."

What happened? "We stopped polling a few weeks too soon," said George Gallup Jr. "We had been lulled into thinking that nothing much changes in the last few weeks of the campaign." Meanwhile, labor was galvanized by the threat of a Dewey victory, while Republicans grew complacent and stayed home on Election Day, confident of victory.

The whole affair showed how polls can influence public behavior. After their blunder, pollsters adopted several reforms. They continued polling right up until Election Day, improved their ability to predict voters from nonvoters, and moved from quota sampling of demographic groups to random sampling. (5)
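The Literary Digest pitfall above can be sketched numerically. The population fractions below are hypothetical, chosen only to illustrate the mechanism: a huge but unrepresentative sample can lose badly to a small random one:

```python
import random

random.seed(1936)

# Hypothetical electorate: most voters back candidate A, but the wealthy
# minority reachable through magazine, phone and car lists leans the other way.
POP_SIZE = 100_000
population = []
for _ in range(POP_SIZE):
    wealthy = random.random() < 0.25                      # 25% wealthy (assumed)
    supports_a = random.random() < (0.35 if wealthy else 0.68)
    population.append((wealthy, supports_a))

true_support = sum(s for _, s in population) / POP_SIZE   # about 0.60

# Huge biased sample: only wealthy households, like the Digest's lists.
biased = [s for w, s in population if w]
biased_estimate = sum(biased) / len(biased)               # badly wrong, near 0.35

# Small random sample: 1,000 people drawn from everyone.
sample = random.sample(population, 1000)
random_estimate = sum(s for _, s in sample) / 1000        # close to the truth

print(f"True support:            {true_support:.2f}")
print(f"Biased (n={len(biased)}):     {biased_estimate:.2f}")
print(f"Random (n=1000):         {random_estimate:.2f}")
```

The biased sample is some 25 times larger than the random one, yet it misses by roughly 25 points while the random sample lands within the normal margin of error, which is exactly what happened to the Digest.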

How polls get it right

Despite these pitfalls, today’s opinion polls are more accurate than ever. Pollsters learned from their mistakes and enacted reforms and safeguards. In fact, modern polling firmly belongs to the science of statistics.
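That claim can be made concrete. The standard textbook formula (not taken from this article) for the 95 percent margin of error of a proportion estimated from a simple random sample is a one-liner:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of n respondents (textbook formula)."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 people, with support near 50%:
moe = margin_of_error(0.5, 1000)
print(f"Margin of error: +/- {moe * 100:.1f} points")  # about +/- 3.1 points
```

This is why national polls so often report a sample of roughly 1,000 and a margin of about plus or minus 3 points; quadrupling the sample to 4,000 only cuts the margin in half, so larger samples quickly stop paying for themselves.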

Pollsters have developed a variety of techniques to overcome the problem of question order and wording. Long-running polls, such as Gallup’s, use the same wording they have used for decades, so public opinion can be measured accurately over time. Pollsters also try asking the same questions in different ways, to see what differences occur. Analysis of these differences has led to a greater understanding of the correct words and question order to use. (6)

In the last few decades, the number of polls has exploded. And this is good, because we can now compare them to each other. Although individual polls may vary occasionally, the consensus of polls does not. Together, they give an accurate and steady portrait of public opinion.
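The steadiness of the consensus follows from basic statistics: averaging independent polls shrinks the random noise in any one of them. A minimal simulation, with a hypothetical "true" support level of 49 percent:

```python
import random

random.seed(42)

TRUE_SUPPORT = 0.49   # hypothetical real level of support
SAMPLE_SIZE = 1000    # respondents per poll
NUM_POLLS = 10        # independent polls to average

def run_poll():
    """Simulate one poll: sample respondents at random, return estimated support."""
    hits = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

results = [run_poll() for _ in range(NUM_POLLS)]
consensus = sum(results) / NUM_POLLS

print(f"Individual polls range: {min(results):.3f} to {max(results):.3f}")
print(f"Consensus of {NUM_POLLS} polls: {consensus:.3f} (true value {TRUE_SUPPORT})")
```

Individual polls bounce around by a few points, but their average sits much closer to the true value: averaging k independent polls cuts the sampling error by a factor of the square root of k.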

Trusting the public’s answers

Not only have pollsters improved their methods of reading the public, but the public itself can be trusted. In 1992, Benjamin Page and Robert Shapiro published an extensive study of polling data and public opinion. They found that, on the whole, public opinion is "real, stable, differentiated, consistent, coherent; reflective of basic values and beliefs; and responsive (in predictable and reasonable ways) to new information and changing circumstances." (7) One of the surprising discoveries of their study is that American opinions change little over time, and when they do change, they change slowly.

A common criticism of polls is that people who do take the time to answer polls are different from those who don’t. But do low response rates affect the accuracy of polls? It appears not, at least under modern techniques. The Pew Research Center tackled this question in two separate surveys. The first polled 1,000 adults in the usual way, over five days. The second contacted 1,201 adults, using much more rigorous techniques over eight weeks. The extra time allowed the pollsters to interview initially reluctant people at more convenient times. On 85 issues, the answers of the second group almost perfectly matched the answers of the first group. The only exception was on matters of race.

And this is the one area where the public fails pollsters; they appear to be less than honest on racial issues. In the standard five-day poll, 56 percent of whites said blacks were to blame for their failure to get ahead, compared to 31 percent of whites who blamed it on racial discrimination. But this wasn’t the case in the more rigorous eight-week poll, where the numbers were 64 percent and 26 percent, respectively. It seems most whites don’t want to appear "politically incorrect" even to anonymous pollsters asking them questions about their racial views. (8)

Another example of this was the 1989 governor’s race in Virginia, where Douglas Wilder, a black Democrat, ran against Marshall Coleman, a white Republican. Although polls showed Wilder winning comfortably, the actual election was extremely close. "It's become unacceptable to admit that you will vote on the basis of race," says sociologist Frances Pestello, "but once voters got into the privacy of the voting booth, it played out a little differently." (9)

In sum, polls can be trusted — if you know how to view them.


Endnotes:

1. USA Today, November 7, 1996
2. On August 7-8, 1990, CBS asked "Do you approve or disapprove of George Bush’s decision to send U.S. troops to Saudi Arabia?" -- 63 percent approved. On August 9-10, 1990, the New York Times asked "Do you approve or disapprove of the United States sending troops to Saudi Arabia to protect Saudi Arabia from Iraq?" -- 66 percent approved. On August 9-12, 1990, Gallup asked "Do you approve or disapprove of the United States’ decision to send U.S. troops to Saudi Arabia as a defense against Iraq?" -- 78 percent approved. On August 8, 1990, Black asked "Do you approve or disapprove of President Bush’s decision to send troops to help defend Saudi Arabia?" -- 81 percent approved. Cited in John Mueller, Policy and Opinion in the Gulf War (Chicago: University of Chicago Press, 1994), pp. 197-198.
3. John Mueller, "Trends in Political Tolerance," Public Opinion Quarterly 52, no. 1 (Spring, 1988), p. 11.
4. John Mueller, "Public Expectations of War During the Cold War," American Journal of Political Science 23, no. 2 (May, 1979), pp. 324-325; on context effects, see Howard Schuman, "Context Effects: State of the Past/State of the Art," pp. 5-20 in Context Effects in Social and Psychological Research, ed. Norbert Schwarz and Seymour Sudman (New York: Springer-Verlag, 1992).
5. CNN Interactive, "Pollsters Recall Lessons of ‘Dewey Defeats Truman’ Fiasco," Oct. 18, 1998. Website: http://cnn.com/ALLPOLITICS/stories/1998/10/18/pollsters.mistake.ap/
6. Frank Newport, Lydia Saad, David Moore, "How Polls are Conducted," Where America Stands (Gallup Organization: John Wiley & Sons, 1997). Website: http://www.gallup.com/The_Poll/hpac.asp
7. Benjamin Page and Robert Shapiro, The Rational Public: Fifty Years of Trends in American Policy Preferences (Chicago: University of Chicago Press, 1992), p. 172.
8. Andrew Kohut, "Bias in Polls: It’s not Political, It’s Racial," American Society of Newspaper Editors, July 1998. Website: http://www.asne.org/kiosk/editor/98.july/kohut1.htm
9. Quoted in "Polling: Through a Glass Darkly," The Why Files (University of Wisconsin – Madison) Website: http://whyfiles.news.wisc.edu/009poll/poll_principles2.html