We are all used to reading headlines trumpeting what ‘the public thinks’ about some controversial issue, be it in science, politics or anything else. The evidence for such headlines usually comes from the results of opinion polls or surveys.
Readers of People & Science will no doubt be keenly aware that because samples, not populations, are surveyed, a simple percentage reported in a headline is only a ‘best guess’ from a range of possible estimates within some margin of error. In practice, published margins of error are all too often attached, quite mistakenly, to estimates from surveys that do not use scientific, random sampling methods – and for such surveys they are meaningless.
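To make concrete what a properly computed margin of error involves, here is a minimal sketch using the standard textbook formula for a proportion from a simple random sample. The figures are illustrative only and are not drawn from any survey cited in this article:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical headline figure of 50% from a random sample of 1,000:
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe * 100:.1f} percentage points")
```

Run on these illustrative numbers, the formula gives a margin of roughly three percentage points either way – and, crucially, it is only valid when respondents really were selected at random.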
Although this is quite clearly a problem, it is a well-understood one, and one with a fairly simple solution: spend more money!
Beyond this particular conundrum, though, lurk other dangers.
Varieties of error
Sampling error is only one variety of error to which surveys are vulnerable, and possibly not the most perilous variety at that.
Measurement error – the difference between what we want to measure and what we actually end up measuring – is a much more slippery problem. The ways in which we ask questions, and the types of opportunities we provide for respondents to answer them, matter a great deal in determining the result we obtain. The way questions are worded is important. So, as the following examples show, are the order of the questions, the social pressure to offer an opinion, and the response options on offer.
Prior questions in a survey interview can have an effect on the way that subsequent ones are answered. One experiment compared reported interest in science from people who had completed a knowledge quiz just before answering, with those who were not asked to complete the quiz. Those who were asked the quiz questions reported significantly lower interest than those who were not.
Survey interviews are social encounters. Respondents may feel under pressure to express opinions where they really hold none. One test of this asked people’s opinions about fictitious or very obscure legislation. Typically 30 percent said they were either for or against such legislation. However, asking people first whether or not they had an opinion at all (thus legitimating not having one) reduced the proportion of ‘pseudo-opinions’ offered by two thirds.
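The size of the effect reported above can be checked with the article’s own figures, assuming the 30 per cent baseline applies uniformly:

```python
# Figures from the experiment described above.
baseline = 0.30    # share offering an opinion on fictitious legislation
reduction = 2 / 3  # drop when a 'no opinion' filter question comes first

filtered = baseline * (1 - reduction)
print(f"{filtered:.0%}")  # share still offering a 'pseudo-opinion'
```

In other words, simply legitimating ‘don’t know’ cuts the pseudo-opinion rate from about 30 per cent to about 10 per cent.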
Psychologists have long been aware that people like to say yes and agree more than they like to say no and disagree. One recent survey of public attitudes towards science1 asked respondents to agree or disagree with the statement, ‘I am amazed by the achievements of science.’ Whatever the true level of amazement, more people will agree with such a statement than would actively register disagreement to say they were not amazed. Experiments have shown that offering respondents a balanced choice – asking ‘do you think science is amazing, or do you think it is not amazing?’ – produces more robust results.
What is to be done?
First, we should endeavour to use current best practice in designing our research instruments. There is an ever-expanding research literature to draw on.
Second, we must admit that a single percentage in a survey cannot speak for itself. Analysis needs to go beyond these ‘news items’ and embrace the investigation of relationships between many variables.
Third, we should consider greater use of open-ended, qualitative questions in large-scale surveys – not just in small studies as is currently the norm.
Finally, we must be clear to consumers of social research about the limitations, as well as the strengths, of our studies.
Asking survey questions certainly began as an art but shows more than a little promise as a science, if we take it seriously enough.
1 Research Councils UK (2008). Public Attitudes to Science 2008: a survey. See www.rcuk.ac.uk/sis/pas.htm