Tom Sanders and Felicity Mellor disagree
Every day, nutritional claims are made in the popular press. Although we have previously shown that many of these claims are false and lack scientific support,1 no previous studies have systematically reviewed the quality of the claims.
To address this issue, we selected a week at random and identified and analysed all nutritional claims made in the top 10 UK newspapers.2 The criterion for inclusion was that an article had to make a ‘health claim’ about a food or drink which could be interpreted by the reader as advice.
Seven of ten claims below bar
The definition of a health claim was taken from the European Health Claims Regulation: ‘a claim that states, suggests or implies that a relationship exists between a food category, a food or one of its constituents and health.’ For example, ‘red wine protects against heart disease’ would constitute a dietary health claim. ‘Oranges contain vitamin C’ would not, unless a claim about the effect of the vitamin C was made: for example, ‘oranges contain vitamin C which prevents a cold.’ We excluded claims relating to dietary supplements.
Each claim was then graded against two sets of evidence-based benchmarks developed by the World Cancer Research Fund (WCRF) and the Scottish Intercollegiate Guidelines Network (SIGN). The WCRF benchmarking scheme categorises evidence as convincing, probable, possible, or insufficient. The World Health Organisation has suggested that dietary advice should only be based on evidence which is convincing or probable.
We identified 111 claims in the review period. They were more likely to be made at the weekend and to be repeated on Tuesday and Wednesday, with fewer appearing on Thursday or Friday. Assessed by WCRF and SIGN criteria, 72 per cent and 68 per cent respectively had levels of evidence below the convincing or probable categories. Broadsheet newspapers made fewer low-quality (insufficient and possible) claims and more high-quality (probable and convincing) claims than tabloids.
The limitations of our study were that it covered only one week and that its findings apply only to the UK. However, UK press reports are widely syndicated internationally and so have impact beyond the UK.
Newspaper journalists commonly source news from individuals and organisations seeking to raise awareness of a single issue or event. Much of what appears in newspapers is derived from press releases produced by public interest groups, government departments, the food industry, universities, journals and charities.
Press releases with their quotes and pre-digested gobbets of information are often reproduced uncritically. Journalists work to tight deadlines, recognise that they lack competence to evaluate complex information and usually try to seek advice from disinterested experts. Many of these, however, are unwilling to commit time to answering journalistic enquiries.
The adequate communication of food risk to the public remains problematic particularly in the area of hypothetical hazards such as genetically modified foods, food allergies, food additives and pesticides. In part this is due to a ‘knowledge gap’ propagated by a distorted public perception of risk (where perceived food risk is often inversely associated with actual risk) and a reluctance to accept scientific uncertainty and that safety is relative.
The public also seems to be inherently gullible about miracle foods. The growth of social media as a source of self-amplifying information and opinion is likely to further promote fallacies about food, particularly in a society that is obsessed with conspiracy theories.
Overall, it appears that misreporting of dietary advice by UK newspapers is widespread and probably contributes to public misconceptions about food and health. However, some of this misreporting could be prevented by independent experts showing willingness to engage with journalists.
When Tom Sanders’ study of newspaper health reporting was first released online, it was quickly and comprehensively rebutted by the Guardian colleagues of his co-author Ben Goldacre. Exercising the level of critical scrutiny that characterises the best science journalism, the Guardian reporters pointed to flaws in the authors’ sampling, sample sizes, and grading system.
Problems with this type of analysis go back a long way. A number of studies in the 1970s and 1980s looked at the accuracy of science journalism, with varying results. One found a mere 10 per cent of newspaper articles about science were error-free; a follow-up study, 29 per cent.
Another study found that when scientists were asked to rate the accuracy of newspaper reports based on their own work, 95 per cent thought the reporting was accurate, yet only 59 per cent thought science news in general was accurate. Their assumptions about journalistic standards were not borne out by the evidence of their own experience.5
Contradictory studies such as these don’t tell us much about how accurate newspapers are, but they do tell us that judgements about accuracy are inherently subjective.
Can’t measure meaning
Systematic studies of media coverage of science are important precisely because they help ground perceptions about the media with evidence of actual media output. However, quantitative studies risk producing headline results that mask all the assumptions that are made along the way.
For instance, the early accuracy studies included any spelling mistakes (however trivial or irrelevant) as inaccuracies, confusing poor journalism with poor copy editing. What really matters is whether the meaning conveyed by a report is misleading, but meaning is not something that is easily sliced up and measured.
The paper Tom Sanders reports on doesn’t look at accuracy per se, but it suffers from a similar problem. It extracts health claims from the publication context (readers would expect better-evidenced claims in a news report than in a column about slimming); it disregards any linguistic nuances with which the claims are presented (the conditionals and qualifiers that signal to the reader that this claim is to be taken with a pinch of salt); and it then compares these claims with a rating system aimed at clinicians rather than newspaper readers.
The study classed as ‘insufficiently evidenced’ claims based on individual studies or claims by experts – criteria inherently at odds with the journalist’s job of reporting the statements of authoritative, credible sources. It’s not surprising that almost three quarters of the reports fell short.
Other recent studies have painted a rather different picture. In a study I carried out for the BBC Trust in 2011, we found no significant inaccuracies in the BBC’s science reporting, but we did find that the majority of reports were reliant on just one source.
In a 2004 study, Tania Bubela and Timothy Caulfield compared newspaper reports about genetic research with the journal papers on which they were based. They found that the majority of newspaper articles accurately reported the scientific findings, with only 11 per cent making highly exaggerated claims. However, they also found that risks were under-reported compared to the scientific papers, whilst benefits were over-reported.
Journalists are faced with the difficult task of transforming scientific findings into engaging stories. If a peer-reviewed paper in a respected journal makes a claim about, say, the health risks of a common foodstuff, journalists have a moral and professional obligation to report that claim, even if it does not meet the scientific gold standard for clinical advice.
Subjectivity aside, outlawing a story from a credible source contradicts what journalists are meant to do. The question is not whether they report it, but how they report it.