Public engagement doesn’t always go well. Here, practitioner Steve Cross, evaluator Karen Bultitude and funder Daniel Glaser confess all.
I’m (sometimes) a public engagement practitioner. I mainly learn from failure, but I only talk about success.
Projects I’ve run have failed in numerous ways. I’ve run expensive events in half-full rooms, wasted time, money and goodwill, and built things that make me unhappy. Worst of all, I’ve run projects that have succeeded in meeting every objective (numbers high, outputs good quality, deadlines met) but which failed in their aims to engage with members of the public, build researchers’ skills or bring together new networks of ‘engaged’ researchers. On paper they looked great, and final grant reports were a breeze to write, but I knew they had been a waste of time and resources.

I’m lucky. I have a full-time job that’s just senior enough to allow me to talk about failures without worrying about where my next commission is coming from. I also work with a full-time evaluation officer who provides constant evidence-based feedback on our work, and helps the team to experiment more, fail less and embed learning every time we do fail.
Still, there are people to whom I’d never report my failures. In the past I’ve heard about funders and partners who don’t seem to understand what public engagement is really about (or who ‘don’t believe in evaluation’) and seem to judge projects more on the ‘quality’ of outputs than on the effectiveness of engagement. There are managers who can’t bear to hear anything but stories of success, especially if there’s a chance that anyone outside the organisation will hear the ‘bad news’. I’d never put forward a story of failure on the PSCI-COM or BIG-CHAT mailing lists!
What I’d like is a culture of honest evaluation. It’s hard to own up to not being perfect when some of your peers will happily tell you that they are. Well-respected figures in science communication have said to me ‘we don’t evaluate’ and ‘we just make up the evaluation at the end’, and reporting honestly is discouraged when some people can succeed without doing so.
Funding panels should be given evaluation reports, helped to interrogate them properly, and encouraged to base decisions about future funding on whether applicants are honestly attempting to engage better with members of the public. You shouldn’t be a panel member if you aren’t committed to improving engagement on the basis of rigorous evidence.
Practitioners: It’s time we all allowed evaluators to get into the cracks of our work, to show us the failures and help us get better. Evaluators aren’t there to study the one aspect of your project that you know will work, to convince your funder to give you more money. They’re there to help you, and other practitioners, to do a better job in the future.
The old adage ‘don’t bite the hand that feeds you’ is no doubt a key factor in why we evaluators find it difficult to talk openly about failure. It takes a very brave – and very thick-skinned – person to tell a fee-paying client that their project hasn’t been successful. However, to be true to our work and assist in developing the field, evaluators need to focus more on identifying useful learning points.
So – taking a deep breath – here is the truth: every project that I’ve ever delivered or evaluated has failed in some way. There, I’ve admitted it in print and in public. It was hard, but not impossible. So why is it so common for evaluations to present only glowing reports?
One of the biggest distortions of evaluation findings stems from who commissions the evaluation in the first place. If it’s the internal project team, then, having gone to the trouble of commissioning an evaluation, they are generally very keen to take into account any feedback received. However, the (edited) report that they subsequently submit to the funder invariably presents a rose-tinted view.
Conversely, if it’s the funder who commissions the evaluation, then my relationship with the project team tends to be a little more wary, but does at least have the advantage of a direct reporting path. ‘Parachuting in’ external auditors can additionally be an excellent way to shift the blame for difficult judgements away from local staff.
It can also be problematic to obtain direct evidence of failure in the first place. If you’re relying on audience feedback, respondents often feel a degree of sympathy for the event organiser, or think they know what the ‘right’ answer should be to the questionnaire, leading to biased results. It’s only through the use of more time-consuming (for example, qualitative) measures that you can achieve a degree of objectivity in the findings.
Even admitting that failure occurs is tough. Perhaps there needs to be a dedicated space within any evaluation report that assumes some failures occurred, for example asking ‘What occurred within this project that should be avoided by other practitioners in future?’ We could even have an annual award – the Best Shared Admission of Failure To All, or BSAFTAs – to recognise efforts in this area.
Another evaluator once advised me that any client should come out of an evaluation meeting feeling ‘more energized’ about their project. This doesn’t mean that you only tell them about successes – but that they should feel that they are in a position to do something about the negatives that you identify. It’s only by making concerted efforts to identify, admit, and overcome failures that we will truly improve.
Anonymous peer review forms the basis of most funders’ evaluation of proposals and this approach has justly been described as the worst possible approach to assessment, apart from all the others. Confidentiality is a key element of it, based on the assumption that reviewers would find it harder to be honest if their identity were revealed to the applicant or if the review were to be made public.
Evaluation of the funded work once it has been completed takes a variety of forms, ranging from full-scale independent evaluation to a brief self-report. Many funders make their own internal assessment of a grant by visiting or sampling the content themselves. Finally, the outcomes and success of an applicant’s previous projects will formally or informally feed into the consideration of a proposal.
Clearly, the extent to which applicants or grant-holders are honest about how current or previous projects have gone varies widely, and the success of projects is very often inflated. This is a bad idea for a number of reasons.
Firstly, given the range of experience that funders have of the field in general (as well as their engagement with the project itself), we can almost always tell when people are being less than honest. This means that exaggerating success is pointless at best and risks damaging credibility. We would much rather fund people who acknowledge and learn from projects that don’t go as planned than those who pretend everything is perfect.
It is frequently argued in science that publishing negative results is important in part so that others will not waste time trying things that don’t work, although in practice the research literature has a very strong positive bias. That notwithstanding, there are some domains, for example international development, where discussion of failure is an established practice.
Advance the field
The engagement sector, it would seem, also has much to gain from sharing the ways in which things go wrong. Done in a constructive and optimistic manner, such exploration will advance the field and enhance the credibility of practitioners in the eyes of the community in general and funders in particular.
Finally, it is worth noting that funders themselves are not immune from exaggeration. Not all funding schemes are equally successful in achieving their goals, and it could be argued that a more honest public discussion of the limitations and challenges of a particular programme would be valuable. This is a challenge we will try to square up to.