Welcome to the British Science Association

We are a registered charity that exists to advance the public understanding, accessibility and accountability of the sciences and engineering in the UK.


PAS 2014: adopting the gold standard


Tim Silman is a Research Assistant at Ipsos MORI and is part of the team responsible for designing, managing and analysing the survey element of Public Attitudes to Science (PAS) 2014.


In June, Patrick Sturgis, Principal Researcher for the Wellcome Trust Monitor (WTM), posted a blog advocating “gold-standard” sampling approaches for public opinion surveys on science issues. He’ll be pleased to hear that BIS and Ipsos MORI have taken his and other academics’ feedback on board in designing PAS 2014, moving to a random sampling approach.

Previous PAS surveys used a quota sample where interviews are conducted until quotas representative of the population (e.g. on gender and age) have been met. Interviewers move from house to house until they find an eligible respondent. PAS 2014 uses a random sample, where a fixed set of addresses are drawn at random. Interviewers then select a member of the household at random to take part.
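The two-stage procedure can be sketched roughly as follows. All addresses, household sizes and sample sizes here are invented for illustration, not PAS data:

```python
import random

def draw_addresses(addresses, n, seed=0):
    """Stage 1: draw a fixed set of n addresses at random."""
    return random.Random(seed).sample(addresses, n)

def select_respondent(household, rng):
    """Stage 2: pick one member of the household at random to take part."""
    return rng.choice(household)

rng = random.Random(42)
# Toy sampling frame: 1,000 addresses, each with 1-4 residents
households = {f"addr_{i}": [f"person_{i}_{j}" for j in range(rng.randint(1, 4))]
              for i in range(1000)}

issued = draw_addresses(list(households), n=100, seed=1)
respondents = [select_respondent(households[a], rng) for a in issued]

# Each person's selection probability is knowable:
# (100 / 1000) * (1 / size of their household)
print(len(respondents))  # one selected respondent per issued address
```

The key point is the last comment: because both stages are random draws from known frames, every individual's chance of selection can be written down, which is what licenses the statistical inference discussed below.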

Random samples are generally considered more robust than quota samples. In a random sample we can calculate the probability that any given person was selected to take part, so we can carry out statistical significance tests on the results; strictly speaking, those tests don’t apply to quota samples. In addition, people who do and don’t choose to take part may be inherently different, leading to selection bias. Because the response rate in a random sample is known, we can gauge the scope for this bias and therefore how representative the results are. In a quota survey, the number of people who refuse to take part is not necessarily recorded, so whilst a quota sample isn’t necessarily more biased, the size of any bias is unknown. Moreover, as Patrick noted in his blog post, quota response rates are estimated at 5-10 per cent, compared with around 40-50 per cent for a random sample.

This change was made in part to bring PAS in line with other respected public opinion surveys on science, such as the Wellcome Trust Monitor. Furthermore, the PAS 2014 Steering Group were keen for the survey to be as robust as possible: the previous blog posts on this website show that PAS is widely used, so the utmost confidence is needed in the findings. Tracking changes over time is also especially important for PAS, and this is another advantage of random sampling. The survey can be done in exactly the same way each time, so we can be sure that changes in results across waves are not just down to interviewers making different choices about who to interview.

However, this change in approach does not invalidate previous PAS studies. Whilst random sampling may be the “gold standard”, quota sampling is still widely used, including for large-scale, influential surveys such as the Eurobarometer. A well-constructed quota sample, such as that used in PAS 2011, still minimises the impact of interviewer choice, e.g. by restricting interviewers to certain streets. Studies have also shown that good quota samples can be comparable to random samples, so we can still make meaningful comparisons with previous waves conducted on a quota basis.

As well as allowing comparisons with previous PAS studies, the move to random sampling lets us legitimately compare results with other studies, such as the Wellcome Trust Monitor and the British Social Attitudes surveys that have included questions on science issues. This enables much longer time-series comparisons, going back to 1988. The long-term data also allows us to conduct generational analysis, which has previously shown that people born in different generations have profoundly different views on social issues.

Of course, quota still has its place. For PAS 2014, we are doing a “boost” survey of 16-24 year olds so we have enough young people in the sample to look at their views in more detail. A random sampling approach would make little sense here, as we would have to check lots of households at random before finding someone of the right age group. In this way, quota samples make research among subgroups feasible.
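A quick back-of-envelope calculation shows why screening at random would be wasteful here. Both figures below are assumptions chosen for illustration, not PAS parameters:

```python
# Expected number of doorsteps to screen to fill a random-sample youth boost.
incidence = 0.12         # assumed share of households with a 16-24 year old
target_interviews = 500  # assumed boost sample size

expected_screens = target_interviews / incidence
print(round(expected_screens))  # thousands of doorsteps for a few hundred interviews
```

Even with full cooperation, a random approach would mean knocking on several thousand doors to achieve a few hundred eligible interviews, which is why a quota design is the practical choice for the boost.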

We are currently about half-way through fieldwork for PAS 2014: it’s due to finish this winter, and you’ll be able to see the results in March 2014. I’m really looking forward to producing the most robust PAS data ever, and doing the advanced analysis that our new approach enables.
