
The Mystery of Polling
Some polling companies are stingy with the information they give you about their surveys. Sometimes they post only press releases.

Of course, not providing details of a survey may simply be a way of protecting proprietary information. Nevertheless, it makes me question, even more than I already did, the dependence of important people on polls, and especially the insouciant acceptance of poll results by journalists. So I thought I'd provide a summary and explanation of the questions you need to ask about poll results if you want to evaluate how reliable they're likely to be.

1. Who paid for it?

Some people consider this the most important piece of information you can have about a poll, although I don't think you can dismiss poll results out of hand just because they happen to be very convenient for the sponsor. Nevertheless, if the results are convenient for the sponsor the following questions become especially important.

2. What group did you survey?

Many surveys reported in the press are reported as surveys of Canadians. In fact, they're usually surveys of Canadian residents with telephones. Well, almost everyone has a telephone, right? Right, but telephoning Canadian residents is not a particularly dependable way to draw a representative sample of Canadians. First of all, you're going to be more likely to get the opinions of people who are often at home. You will be less likely to get people on afternoon shift (because of the hours during which telephone surveys are usually conducted), people with two jobs, people who spend a lot of time outdoors, and so on. While sampling plans can be drawn up to avoid under-representation of some groups, such as the young, who are less likely to be at home, it is impossible to avoid drawing a sample which over-represents the sedentary and comfortable.
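By way of illustration, here is a small simulation (mine, not any pollster's; every figure in it is invented) of what happens when the chance of reaching someone depends on how often they're at home:

    # Illustrative simulation: calling during fixed hours over-represents
    # people who are often at home. All numbers are invented.
    import random

    random.seed(1)

    population = []
    for _ in range(100_000):
        often_home = random.random() < 0.6        # 60% of the population is often at home
        p_reached = 0.8 if often_home else 0.2    # chance of answering when called
        population.append((often_home, p_reached))

    reached = [home for home, p in population if random.random() < p]

    print("Often at home, whole population: 0.60")
    print("Often at home, people actually reached:",
          round(sum(reached) / len(reached), 2))  # about 0.86 rather than 0.60

The people you reach are a perfectly real sample; they just aren't a sample of everybody.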

A related problem is overgeneralized description of the group surveyed. Those of us who have spent long years in the research trade are all too familiar with the experience of finding a description in Psychological Abstracts of an article about the job satisfaction of 18-to-24-year-olds, say, and then finding when we look up the article that it's about the number of dishes broken per shift by 18-to-24-year-olds performing community service at soup kitchens in Lapland. Press releases are usually a bit longer than the descriptions in Psychological Abstracts, but not by much, so the descriptions they provide may also be over-economical. For example, a survey of soldiers may turn out to be only a survey of soldiers at a few bases rather than a survey of soldiers throughout the army, or a survey of officers rather than of NCOs or other ranks.

3. How did you choose the people you surveyed?

Polling companies seem to be very conscientious about sampling. However, the truth is in the details, and the details of the most conscientious sampling plan can affect the truth mightily. For example, if a survey compares the responses of people from different provinces, you need to know whether the subsamples drawn from each province were the same size or different (usually when they're different it's because they have been made proportional to the provinces' populations). You need to know this because it affects the reliability of any differences found between provinces. The larger the groups being compared, the easier it will be, using a statistical test, to find differences between them. If provincial subsamples are drawn proportional to the provinces' populations, then it will be easier to find a difference between Ontario and Quebec than between Ontario and Saskatchewan, for example. Sometimes people try to get around this problem by testing weighted results (that is, pretending they surveyed more people than they actually did), but that is a Very Bad Thing and people should go to prison for it. Well, it's completely unjustified statistically and dangerously misleading, anyway.
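To see why testing weighted results is such a bad thing, consider this small sketch (all the figures are invented): the same two percentages that are nowhere near significantly different at the real subsample sizes suddenly look significant when you pretend the weighted counts were actual interviews.

    # Two-proportion z-test, computed by hand so nothing is hidden.
    # Testing weighted counts as if they were real interviews shrinks the
    # standard error and manufactures "significance". Figures are invented.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_test(p1, n1, p2, n2):
        # Returns the z statistic and the two-sided p-value.
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return z, 2 * (1 - NormalDist().cdf(abs(z)))

    # 52% vs. 46% approval in two provincial subsamples of 150 respondents each:
    z, p = two_proportion_test(0.52, 150, 0.46, 150)
    print("equal subsamples of 150:", round(z, 2), round(p, 3))   # p is about 0.30: no evidence of a difference

    # The same percentages "weighted up" to population-proportional counts:
    z, p = two_proportion_test(0.52, 1500, 0.46, 400)
    print("weighted counts treated as real:", round(z, 2), round(p, 3))  # p drops below .05, yet nobody new was interviewed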

4. What did you ask them and how did you ask it?

Okay, that's a double-barrelled question, which opinion surveyors are supposed to avoid, but I'm not surveying opinion here. Reports of poll results often provide all the questions asked, but they often do not show you how they were asked. Recent revelations about political polling in Ontario have shown the importance of knowing how questions were asked. At least one pollster has been prefacing satisfaction ratings of the provincial government with "information" about how the provincial government may have been less than effective. I can understand why you might want to do that, but your results obviously do not represent the opinions of the other members of the population from which your sample was drawn.

Similarly, you need to know the order in which questions were asked. For example, if you ask several questions about satisfaction with different aspects of a company's service and then ask for a rating of its service in general, the ratings preceding the general rating will probably affect the general rating, chiefly by clarifying the question for the respondent. On the other hand, if you ask for the general rating first, you can often learn something of a different kind. If the general rating is low and the ratings of all the particular aspects but one are high, you've found something that could be useful. So either order can work, but you need to know which order was used.

With telephone surveys it helps to know the instructions given interviewers. They usually are following sets of instructions about how to deal with such things as vague answers. These instructions may be more or less explicit, and it helps to know how explicit they were.

In general, in evaluating any survey you really need to see the questionnaire. Pencil-and-paper surveys provide both instructions and introductions to the questions, and you need to see those. With mail surveys you also need to see the covering letter.

Questions should also be properly designed. Each question or rating should deal with one issue only. Half the items should be positive and half negative. And there should be more than one or two: you can't assess the reliability of a single question, and you rarely get reliable information by asking only two.

You also need to know when people were called. I used to have an article here about a survey of attitudes towards work which was conducted over a week in the middle of which was Labour Day weekend. That could be expected to affect the results, and it may also affect the response rate, the subject of our next question.

5. What was the response rate?

You need to know what percentage of people selected for the sample actually were interviewed. The lower the percentage, the less generalizable the results are to the population as a whole. That is not always a problem; often, for example, surveyors are interested primarily in the opinions of people who have enough interest in the subject of the questionnaire to return it. If you're predicting the voting behaviour of the Canadian people, though, you need a pretty high response rate to have confidence in your results.

In mail surveys the response rate is often calculated as a percentage of the surveys that were not returned as undeliverable, but the number returned as undeliverable should also be reported; if that number is large, there may have been a problem with the mailing list.

If the survey was conducted by telephone you need to know what percentage of people answered the phone and what percentage of those completed the interview. Pollsters can provide this information. I recently received an admirable report of response rate from a polling company. It went so far as to distinguish failures to pick up a ringing phone from responses by answering machines and from busy signals.
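If you want to see what such a report boils down to, here is a minimal sketch; the outcome categories are the kind distinguished above, but every count is invented.

    # Response-rate accounting for a telephone survey. All counts are invented.
    call_outcomes = {
        "completed interview": 412,
        "refused or broke off": 188,
        "answering machine": 240,
        "busy signal": 95,
        "no answer": 315,
        "number not in service": 50,
    }

    dialled = sum(call_outcomes.values())
    answered = call_outcomes["completed interview"] + call_outcomes["refused or broke off"]

    print("Numbers dialled:", dialled)
    print("Answered by a person: {:.1%}".format(answered / dialled))
    print("Completed, among those who answered: {:.1%}".format(
        call_outcomes["completed interview"] / answered))
    print("Completed, among all numbers dialled: {:.1%}".format(
        call_outcomes["completed interview"] / dialled))

Each of those percentages answers a different question, which is exactly why the categories are worth distinguishing.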

6. How did you analyze the data?

The usual answer from pollsters would be that they counted the different answers, worked out the percentages, and made graphs. In opinion research that is just not enough. You need to determine which answers are measuring the same attitude and combine them into a scale, for example. You need to discard items which are not producing useful information.
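There are standard ways of doing this. Here is a rough sketch of the sort of checks I mean (the data are simulated, and the cut-offs are matters of judgment, not law): Cronbach's alpha for the scale as a whole, and each item's correlation with the rest of the items.

    # Checking whether items measure the same attitude: Cronbach's alpha and
    # corrected item-total correlations. The five-item data set is simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    attitude = rng.normal(size=200)                    # a latent attitude, 200 respondents
    related = np.column_stack([attitude + rng.normal(scale=0.8, size=200) for _ in range(4)])
    unrelated = rng.normal(size=200)                   # an item measuring nothing relevant
    responses = np.column_stack([related, unrelated])  # 200 x 5 matrix of item scores

    def cronbach_alpha(x):
        k = x.shape[1]
        item_vars = x.var(axis=0, ddof=1)
        total_var = x.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    print("alpha, all five items:", round(cronbach_alpha(responses), 2))
    print("alpha, unrelated item dropped:", round(cronbach_alpha(responses[:, :4]), 2))

    # Each item against the sum of the other items; the unrelated item stands out.
    for j in range(responses.shape[1]):
        rest = np.delete(responses, j, axis=1).sum(axis=1)
        r = np.corrcoef(responses[:, j], rest)[0, 1]
        print("item", j + 1, "vs. the rest: r =", round(r, 2))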

Differences between subgroups in the sample should be tested with statistical tests, but as I have noted in other articles posted here, you can't count on polling companies doing that. Often the report simply states which group had the highest score and which the lowest, as if that were necessarily a sign of a real difference. The danger of that approach is illustrated by an example in the article about fat-free research.
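For what it's worth, the test involved is not onerous. Here is a sketch, with invented counts, of one standard way to test answer counts broken down by subgroup:

    # Chi-square test of independence: are the apparent differences between
    # regions bigger than sampling error would produce on its own? Counts invented.
    from scipy.stats import chi2_contingency

    #                    approve  disapprove
    answers_by_region = [[160,     140],    # Region A: 53% approve
                         [150,     150],    # Region B: 50% approve
                         [145,     155]]    # Region C: 48% approve

    chi2, p_value, dof, expected = chi2_contingency(answers_by_region)
    print("chi-square =", round(chi2, 2), " df =", dof, " p =", round(p_value, 3))
    # p comes out around 0.46: Region A's five-point "lead" over Region C is
    # just the sort of thing sampling error produces by itself.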

On the other hand, sometimes data are overanalyzed. Thanks to modern statistical software, anyone can do multiple linear regression analysis, and they usually do. Since people trained in this technique can often make some highly questionable decisions in the construction of their regression equations,1 you can imagine what untrained people can do. Usually they just throw all the predictor variables into the equation at once, ignoring correlations among them and extreme observations (outliers); if you're not familiar with the technique, that's a very good way to get very inaccurate results. Regardless of the expertise of whoever constructed the regression equation, you need to know how they did it.
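Since I've raised the subject, here is a sketch, on invented data, of the minimum I'd want to see: a look at the correlations among the predictors, a look for extreme observations, and the equation fitted both with and without them. (The variable names echo the example in the footnote; the numbers are made up.)

    # Minimal checks before a multiple linear regression. All data are invented.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 300
    training = rng.normal(size=n)
    literacy = 0.7 * training + rng.normal(scale=0.7, size=n)   # two correlated predictors
    income = 20 + 3 * literacy + 2 * training + rng.normal(scale=5, size=n)
    income[0] = 500                                             # one wild outlier

    predictors = np.column_stack([training, literacy])

    # 1. Correlation between predictors: a high value warns that their separate
    #    coefficients will be unstable.
    print("correlation of predictors:",
          round(np.corrcoef(predictors, rowvar=False)[0, 1], 2))

    # 2. Extreme observations.
    z = (income - income.mean()) / income.std()
    print("suspect rows:", np.where(np.abs(z) > 4)[0])

    # 3. Fit by least squares (intercept, training, literacy), with and without the outlier.
    def fit(x, y):
        design = np.column_stack([np.ones(len(y)), x])
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.round(coef, 2)

    print("with the outlier:   ", fit(predictors, income))
    print("without the outlier:", fit(predictors[1:], income[1:]))

The point is not that this is the right model; it's that if you don't know whether these things were checked, you can't tell whether the coefficients mean anything.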

7. What other information confirms the findings?

Surveys of opinion are often treated as surveys of fact. For example, people's declarations of the party they intend to vote for are treated as if they are infallible predictors of the party they actually will vote for. That's why election polls rarely get within any useful distance of the results of the election (do you remember anyone predicting the collapse of the Progressive Conservatives in 1993?). That's also why extensive market research still doesn't prevent the vast majority of widely released movies from being abominable stinkers.

If, however, independent information of some other type confirms the findings, then the confidence you can have in your results increases. As I have pointed out in another article posted here (In Defence of Opinion Surveys), confirmation by other evidence can turn an opinion survey into a powerful decisionmaking tool. First, though, it has to be properly designed and employed.

1: For example, I recently read a paper in which regression analysis was used to assess the relationship between literacy and income. The author removed the effects of training and type of industry on income before assessing the effect of literacy. Since literacy can reasonably be expected to affect the likelihood of getting training or entering a high-paying industry, he should at least have justified his decision.
The Mystery of Polling © John FitzGerald, 1999