I was fortunate enough to be asked to sit on a Science and Society Expert Group, charged with reporting to the Science Minister, Lord Drayson, on the tricky topic of Science and Trust. Our final report covers a lot of ground, and makes a host of recommendations, but its starting point is a conviction that simple talk of ‘responding to the crisis of trust in science’ masks the important issues that arise when science meets society.
Meaning of trust
My colleague Stephen John and I argued that we need to be clear on what is meant when we talk of a ‘crisis of trust in science’. Is the idea that people don’t trust scientists to do good research, or that people don’t trust scientists to make good policies? Are we claiming that individual researchers are untrustworthy, or that institutional structures are lacking? And is this alleged lack of trust directed towards scientists and scientific institutions, or towards government officials and government institutions that make use of scientific evidence?
Having teased these interpretations apart, and having reviewed empirical evidence, the Expert Group became sceptical of the existence of any generic ‘crisis of trust in science’. Rather, legitimate concerns about science tend to focus on a loosely linked set of themes: the acknowledgment of scientific uncertainty, the consequences of industry involvement in research, the robustness of government mechanisms for responding to technological risks, the incorporation of important societal values into policy production, and so forth. These are distinct problems, which require distinct solutions. Their ultimate resolution is obviously not a simple matter of better communication about the nature of scientific research and the character of scientists.
Engagement as solution
A different expert group—Science for All—was charged with examining the roles of public engagement in detail. One of our group’s more specific problems was to understand how increasing ‘engagement’ between scientists, policy-makers and various public constituencies might help solve problems of trust. One sometimes has the impression that ‘engagement’ is simply a fancy way of expressing the old idea that expert scientists need to, and ought to, explain what they are doing to a largely confused public. This undersells the constructive trust-building, and trust-earning, roles that engagement can have.
Sometimes (but not always) the ‘public’ are experts of a sort: a famous case study by Brian Wynne demonstrated the ways in which Cumbrian sheep farmers had valuable local knowledge (about climate, variation in soil types, farming practices) that was neglected when scientists came to measure radioactivity in the wake of the Chernobyl disaster. In these sorts of circumstances, engagement increases the scientific quality of risk analysis.
Sometimes, what look like purely ‘technical’ scientific claims are imbued with contentious evaluative assumptions. During the BSE crisis, Sir Richard Southwood remarked that ‘We have more reason to be concerned about being struck by lightning than catching BSE from eating beef and other products from cattle.’ Perhaps Southwood meant to say simply that the risks of contracting CJD were lower than the risks of being hit by lightning. But he also seemed to imply a far more debatable claim, namely that concern should simply be a function of the magnitude of risk, rather than how the risk is produced. Anyone who thinks that avoidable, man-made risks are worse, morally speaking, than unavoidable ‘acts of God’ might disagree with this. In these sorts of circumstances, engagement helps policy-makers to understand hidden concerns about values that make some members of the public sceptical of risk policy.
Engagement is not always the right solution to problems of trust in science, but when it works, it is far more than a simple public relations tool.