The ‘impact’ component of the UK’s upcoming Research Excellence Framework (REF) is prompting scientists to take a greater interest in public engagement activities, and in ensuring they can evidence their impacts. This interest is well justified, given the benefits science engagement can offer. For example, my research with visitors to the Cambridge Science Festival shows it is deeply valued: visitors appreciate the access to cutting-edge scientific knowledge and scientists’ openness, enthusiasm and eagerness to engage. However, my prior research also shows that science engagement can lead to negative outcomes if it is poorly designed or executed.
Training for engagement
As an experienced evaluation researcher and social scientist, I have seen the full range of effective and ineffective practices within UK science engagement. Enhancing quality, alongside increasing quantity, is achievable, but requires changes in training and evaluation.
When preparing scientists to undertake public engagement, training in the most important and relevant lessons from across the social sciences could improve the likelihood of positive interactions with publics, while limiting the risk of re-enacting long-discredited practices. Incorporating a précis of social scientific knowledge covering media literacy, audience reception, learning, communication and sociology as an essential, if necessarily brief, element of scientists’ qualifications could substantially enhance the quality of science engagement downstream, as well as enriching scientists’ education.
While understanding principles of good practice is a reasonable foundation for most scientists delivering quality public engagement, those undertaking science engagement frequently or as a full-time career should also make sure that the impacts of their engagement are rigorously evaluated.
Quality evaluation of audience outcomes not only provides evidence of impact for the REF and other institutional requirements. It can also be a crucial mechanism for avoiding the risk of unforeseen negative outcomes. Good evaluation requires upstream planning and clear objectives, and its results should inform science engagement practice. It also requires additional training in relevant social scientific research methods (for example, survey design).
Unfortunately, institutions sponsoring professionally supported or delivered science engagement activities do not consistently require high-quality evaluation of audience impacts; that is, evaluation approaching a standard suitable for publication in a peer-reviewed social scientific journal. Moreover, long-term impacts are hardly ever assessed.
In my experience, the evaluation research routinely undertaken by professionals and consultancies in UK science engagement rarely satisfies even the most basic methodological standards. For example, imbalanced ‘level of agreement’ scales that skew results towards a positive outcome (such as scales offering respondents more positive options than negative ones) are commonplace. Indeed, I have used such evaluation reports, prepared by UK science communication and museum consultancies, in my undergraduate ‘Surveys and Statistics’ module at Warwick University to exemplify poor practice. Within minutes, my students can spot fatal flaws that invalidate these expensive professional evaluations.
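To make the flaw concrete, consider a small sketch. The scale wordings and helper functions below are hypothetical, invented purely to illustrate what ‘imbalance’ means: a response scale that offers more positive than negative options biases aggregate results upward before a single respondent has answered.

```python
# Hypothetical illustration of a balanced vs an imbalanced 'level of
# agreement' scale. Both scale wordings are invented for demonstration.

def classify(option):
    """Label a response option as positive, negative or neutral."""
    text = option.lower()
    if text.startswith("neither"):
        return "neutral"
    if "disagree" in text:
        return "negative"
    return "positive"

def balance(scale):
    """Return (positive, negative) option counts for a scale."""
    labels = [classify(opt) for opt in scale]
    return labels.count("positive"), labels.count("negative")

# A balanced five-point scale is symmetric around a neutral midpoint.
balanced = ["Strongly disagree", "Disagree",
            "Neither agree nor disagree", "Agree", "Strongly agree"]

# An imbalanced scale offers more ways to agree than to disagree,
# nudging aggregate results upward regardless of respondents' views.
imbalanced = ["Disagree", "Agree", "Strongly agree", "Extremely agree"]

print(balance(balanced))    # (2, 2)
print(balance(imbalanced))  # (3, 1)
```

Whatever respondents actually think, summing ‘agreement’ over the imbalanced scale will tend to report a more positive outcome than the balanced one; this is the kind of error that a basic grounding in survey design would prevent.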
Part of the problem is that, in the relatively rare instances where science engagement projects commission full-scale independent evaluations, tenders are often assessed either by those whose work is being evaluated or by staff at the funding institution who lack relevant methodological expertise. Thus, even the professional side of science engagement is rife with shoddy methodology, or with no evaluation of actual audience impact at all (mere attendance counts and a handful of positive audience quotations, for example).
This failure to routinely ensure rigorous, falsifiable evaluations that are widely disseminated to enhance practice is hindering the field’s development and impact. Moreover, the tendency to limit evaluation to the duration of the engagement event itself (as opposed to long-term research assessing impacts over time) is unrealistically myopic and insensitive to the contextual factors that modulate impact.
At every level, effective science engagement should be better understood as the complex social process that it is, with concomitant skill, thought and care taken in planning, delivery and evaluation.