Potentially the most valuable part of any survey questionnaire is the open-ended questions – that is, the questions to which respondents can make any reply they want, rather than just ticking one of the limited list of alternatives provided by the surveyor. It is usually the only part of the questionnaire where respondents are free to address the issues that most concern them, rather than the issues that most concern the surveyor. Often these issues are the most important ones.
Too often, though, the value of the responses to open-ended questions is wasted. The reason for this is sometimes a simple refusal by the surveyor or the organization for which the surveyor is working to pay any attention to ideas that conflict with their own analysis of the issues with which the survey deals. Ideas that conflict with this analysis are dismissed out of hand as uninformed or incorrect.
Another reason that this valuable information is wasted is worship of the great god Frequency. One of the most unshakeable tenets of modern society is that an idea is important only when a lot of people hold it. The ascendancy of the opinion poll is an example of this. For example, the test of government policy is no longer whether it works, but whether most people surveyed by the polling companies think it works.
Nevertheless, an idea can be good even if only one person believes it. One of the great advantages of the open-ended question is that it can discover uncommon but intelligent opinions of which the surveyor would otherwise have remained unaware. If the surveyor, however, is content just to examine the most frequent responses, he or she will continue to be unaware of these ideas.
Of course, the most frequent responses to open-ended questions are also valuable. For example, in evaluating customer satisfaction with service, the open-ended questions often tell you what aspects of service are really of most concern to customers. You might find that respondents tend to rate four or five aspects of service low on the rating scales, but that the responses to the open-ended questions are dominated by vehement opinions about only one of these aspects. You then know where to start trying to improve service.
Too often, though, this information is wasted because of unreliable coding. Before summaries of the responses to open-ended questions can be prepared, they have to be combined into categories reflecting general trends in the answers. This process is called coding.
For example, if three people report "I hate my boss", "I detest my boss", and "I loathe my boss", I would probably combine them into a single category called something like Dislike of supervisor. From what I have seen, though, I am certain that many people would code these answers into three categories: hate of supervisor, detestation of supervisor, and loathing of supervisor.
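The grouping described above can be sketched as a small rule-based coder. This is only an illustration, not a recommendation of automated coding: the category name comes from the example in the article, while the keyword list and function names are invented for the sketch.

```python
# Hypothetical rule-based coding sketch. The category is taken from the
# article's example; the keywords and everything else are illustrative.
CODING_RULES = {
    "Dislike of supervisor": ["hate", "detest", "loathe", "dislike"],
}

def code_response(response, rules=CODING_RULES, default="Uncoded"):
    """Assign a response to the first category whose keywords it contains."""
    text = response.lower()
    for category, keywords in rules.items():
        if any(word in text for word in keywords):
            return category
    return default

responses = ["I hate my boss", "I detest my boss", "I loathe my boss"]
codes = [code_response(r) for r in responses]
# All three responses collapse into the single category
# "Dislike of supervisor" rather than three hair-splitting ones.
```

In practice the coding rules would be drafted by the person who designed the questionnaire, but the point stands either way: trivially different wordings should land in one category.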
The obsessive-compulsive insistence on distinguishing responses because of trivial differences is part of the bigger problem of unreliable coding. Unreliable coding is simply inaccurate coding. For example, identical responses by two different people may be put in different categories by the same coder, or two different coders may put the same response in different categories.
Ideally, all the open-ended responses will be coded by at least two people, and the reliability of their coding will be assessed with the Spearman-Brown prediction formula. Failing that (and usually it does fail that), clear rules for the classification of responses can be specified, and the conformity of the coding to the rules assessed statistically, or at least by someone who did not do the coding.
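One way to apply the Spearman-Brown prediction formula named above is sketched below, assuming the two coders' categories have been recorded as numeric codes on a comparable scale. The data and helper names are invented for illustration; for purely nominal categories, a chance-corrected agreement statistic such as Cohen's kappa is a common alternative.

```python
# Minimal sketch: correlate two coders' numeric codes, then apply the
# Spearman-Brown prediction formula. Sample data is invented.
def pearson_r(x, y):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(r, k=2):
    """Predicted reliability when k coders' judgements are combined."""
    return k * r / (1 + (k - 1) * r)

# Category codes assigned by two coders to the same ten responses.
coder_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
coder_b = [1, 2, 2, 3, 1, 2, 3, 2, 1, 2]

r = pearson_r(coder_a, coder_b)
combined = spearman_brown(r, k=2)  # reliability of the pooled coding
```

The predicted reliability of the pooled coding is always at least as high as the single-coder correlation, which is why combining two coders' work is worth the expense.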
Ideally, the person who designed the questionnaire will code the responses. Failing that, he or she will closely supervise the coders. Often, coding is done by staff who have had little connection to the project, and the problems of unreliability are magnified as a result.
Finally, in the original version of this article I recommended calculating the percentage of respondents making a specific response rather than the percentage of responses (since people often make more than one response). If you think non-response is a sign of satisfaction, then you might instead calculate the percentage of all respondents, regardless of whether they provided an open-ended response to the question. I prefer not to assume that non-response equals satisfaction, but this type of calculation can be helpful in some circumstances, such as when you have to compare questions.
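The difference between these denominators can be made concrete with a toy example. The figures below are invented purely to show how the three percentages diverge: three respondents, one of whom gives two responses and one of whom leaves the question blank.

```python
# Illustrative figures only: three respondents, one giving two responses,
# one leaving the open-ended question blank.
responses_by_respondent = {
    "r1": ["slow service"],
    "r2": ["slow service", "rude staff"],
    "r3": [],  # no open-ended response
}

mention = "slow service"
mentions = sum(mention in rs for rs in responses_by_respondent.values())
total_responses = sum(len(rs) for rs in responses_by_respondent.values())
answered = sum(1 for rs in responses_by_respondent.values() if rs)
all_respondents = len(responses_by_respondent)

pct_of_responses = 100 * mentions / total_responses  # share of all responses
pct_of_answering = 100 * mentions / answered         # share of those who answered
pct_of_all = 100 * mentions / all_respondents        # share of every respondent
```

Here "slow service" accounts for two of three responses, but for all of the respondents who answered; which base you report changes the apparent weight of the complaint.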
Open-ended responses should help you resolve any doubts you have about the closed-ended responses, and they may even surprise you. If they do neither of these things, the answer may be to go over them again to see if you have any of the problems outlined in this article.
The Best Questions © 1996, 2020, John FitzGerald