When to trust the polls: ISR researcher talks sense about surveys
ANN ARBOR—With midterm elections just around the corner and the political polling season heating up, a researcher at the University of Michigan Institute for Social Research (ISR), the world's largest academic survey and research organization, offers insider advice on how to tell good numbers from bad.
In an article titled “Sense and Nonsense About Surveys,” published in the current (Summer 2002) issue of Contexts magazine, ISR senior research scientist emeritus Howard Schuman offers guidance on how to evaluate and interpret the results of surveys and polls.
“Surveys draw on two human propensities that have served us well from ancient times,” says Schuman, a sociologist. “One is to gather information by asking questions. The other is to learn about one’s environment by examining a small part of it — which is the basis of sampling.”
The value of a sample, he explains, comes not only from its size but also from the way it was obtained. In its simplest form, the modern technique of probability sampling calls for each person in the population to have an equal chance of being selected. As a result, most Internet surveys do not even come close to representing the general population adequately, Schuman says, since not everyone has access to the Internet.
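The equal-chance selection that Schuman describes can be sketched in a few lines of Python. This is an illustrative toy, not ISR's sampling machinery: the population of integer "people" and the sample size are made up for the example.

```python
import random

def probability_sample(population, n, seed=None):
    """Draw a simple random sample: every member of the population
    has an equal chance of selection (the simplest form of
    probability sampling), with no one chosen twice."""
    rng = random.Random(seed)
    return rng.sample(population, n)

# A toy population of 10,000 "people"; draw a sample of 100.
people = list(range(10_000))
sample = probability_sample(people, 100, seed=42)
print(len(sample))  # 100
```

An Internet convenience sample fails this test precisely because the "population" it draws from excludes everyone without access, so selection chances are not equal across the general public.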
Surprisingly, the size of the sample that’s needed depends very little on the size of the population, Schuman says. “For example, almost the same size sample is needed to estimate the proportion of left-handed people in the United States as to make the same estimate for, say, Peoria, Illinois,” Schuman says. “In both cases, a reasonably accurate estimate can be obtained with a sample size of around 1,000.”
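The reason roughly 1,000 respondents suffice for the United States and for Peoria alike is visible in the standard margin-of-error formula for a proportion, which (ignoring the small finite-population correction) depends only on the sample size n, not on the population size. A minimal sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n. Using p = 0.5 gives the
    worst case. Note: population size appears nowhere in the formula."""
    return z * math.sqrt(p * (1 - p) / n)

# The same n = 1,000 gives the same precision for the U.S. or for Peoria:
print(round(margin_of_error(1000), 3))  # about 0.031, i.e. roughly +/-3 points
```

Quadrupling the sample to 4,000 only halves the margin of error, which is why most polls stop near 1,000.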
The plus-or-minus margin of error featured in many survey and poll reports reflects the size of the sample, but it does not reveal how many people refused to answer or could not be contacted. This non-response can distort the results if those who were missed differ, on the issues being studied, from those who were interviewed, Schuman says. So when deciding whether to trust a poll, it makes sense to pay attention to the response rate as well as the margin of error.
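The distortion non-response can introduce is simple mixture arithmetic: the true population figure blends respondents and the people the survey missed. The percentages below are hypothetical, chosen only to make the effect visible.

```python
def population_proportion(response_rate, p_respondents, p_nonrespondents):
    """True population proportion as a weighted mix of those who
    answered and those who did not. If the two groups differ, the
    observed p_respondents alone is a biased estimate."""
    return response_rate * p_respondents + (1 - response_rate) * p_nonrespondents

# Hypothetical: a 75% response rate, with 60% of respondents but only
# 40% of the missed non-respondents favoring some proposal.
print(population_proportion(0.75, 0.60, 0.40))  # 0.55, not the observed 0.60
```

If respondents and non-respondents hold the same views, the two numbers coincide and non-response does no harm; the danger is that a poll cannot tell you which case it is in.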
“In some federal surveys, the percentage of non-response is small, within the range of five to 10 percent,” he says. “But for even the best non-government surveys, the refusal rate can reach 25 percent or more, and it can be far larger in the case of poorly executed surveys.” With answering machines making it harder to reach people directly, and annoyance with intrusive telemarketers leading to more refusals, survey non-response rates have been rising in recent years. For example, the number of calls needed to complete a single interview in the monthly ISR Surveys of Consumers doubled between 1979 and 1996.
In the article, Schuman also advocates scrutinizing survey questions before accepting the answers. To explain why, he cites two slightly different versions of a question about freedom of speech, first asked in 1940:
· Do you think the United States should forbid public speeches against democracy?
· Do you think the United States should allow public speeches against democracy?
“Taken literally, forbidding something and not allowing something have the same effect,” Schuman writes, “but whereas 75 percent of the public would not allow such speeches, only 54 percent would forbid them. So clearly the public did not view the questions as identical.”
When the same two questions were asked again in 1975, 46 percent would not allow such speeches while 23 percent would forbid them — a finding that confirmed the initial gap between the two very similar questions, even as both showed much the same trend toward growing public support for free speech.
By repeating exactly the same survey questions over time, by asking different versions of a question within the same survey, and by using split samples to see how responses vary depending on how the question is asked, researchers — and the public — are able to gain added confidence in survey results. “Comparing responses helps to capture important trends in attitudes and behavior,” says Schuman, “and increases the odds of interpreting survey results accurately.”
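The split-sample design mentioned above can be sketched as a random assignment of respondents to question wordings, so that wording is the only systematic difference between the two halves. The respondent IDs here are placeholders, not real survey data.

```python
import random

def split_sample(respondents, seed=None):
    """Randomly split respondents into two half-samples, one per
    question wording (e.g. 'forbid' vs. 'allow'), so any difference
    in responses can be attributed to the wording itself."""
    rng = random.Random(seed)
    shuffled = respondents[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Toy example: 1,000 respondent IDs split between the two wordings
# of the free-speech question.
ids = list(range(1000))
forbid_form, allow_form = split_sample(ids, seed=1)
print(len(forbid_form), len(allow_form))  # 500 500
```

Because each respondent lands in exactly one half-sample by chance, comparing the two halves' response distributions isolates the effect of wording, which is how the 75-versus-54-percent gap in the 1940 forbid/allow questions could be detected.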
The Michigan Program in Survey Methodology, starting this fall, combines social, behavioral and statistical sciences with the goal of improving the quality of what is learned through surveys. Located at the University of Michigan Institute for Social Research (ISR), the interdepartmental program offers three degree programs in survey methods: a certificate, a master's and a Ph.D. For more information, see www.isr.umich.edu/gradprogram or e-mail [email protected].
Established in 1948, the Institute for Social Research (ISR) is among the world’s oldest survey research organizations, and a world leader in the development and application of social science methodology. ISR conducts some of the most widely cited studies in the nation, including the Survey of Consumer Attitudes, the National Election Studies, the Monitoring the Future Study, the Panel Study of Income Dynamics, the Health and Retirement Study, the Columbia County Longitudinal Study and the National Survey of Black Americans. ISR researchers also collaborate with social scientists in more than 60 nations on the World Values Surveys and other projects, and the Institute has established formal ties with universities in Poland, China, and South Africa. Visit the ISR Web site at www.isr.umich.edu for more information. ISR is also home to the Inter-University Consortium for Political and Social Research (ICPSR), the world’s largest computerized social science data archive.
RELATED LINKS:
Institute for Social Research >>
Howard Schuman >>
The Michigan Program in Survey Methodology >>
Contact: Diane Swanbrow
Phone: (734) 647-4416
Email: [email protected]
WWW: http://www.isr.umich.edu