Newspaper reporting often carries exaggerations of health science research outcomes.
Well, yes, sometimes; maybe even a lot of the time. But how does it happen?
A study published in the British Medical Journal found that most of the exaggeration in newspaper coverage was ‘already present in the text of the press releases produced by academics and their establishments’.
Here are some of the findings:
- two-fifths (40%) of the press releases contained exaggerated advice
- just under a third (33%) contained exaggerated causal claims
- a little more than a third (36%) contained exaggerated inference to humans from animal research
- when press releases contained exaggeration it was more likely that the news would too (58% for advice, 81% for causal claims, and 86% for inference to humans)
- when press releases did not contain exaggeration, rates of exaggeration in news were only 17%, 18%, and 10%, respectively
- there was little evidence that exaggeration in press releases increased the uptake of news.
There are some fairly big confidence intervals around those numbers, but the researchers did conclude: “Exaggeration in news is strongly associated with exaggeration in press releases.”
The authors stress that because their study was correlational, it did not formally demonstrate a causal relationship. The general chatter in journalistic and science PR circles, though, lends support to the idea that one exists.
It’s easy to think in terms of blame. Should academics have been firmer with press officers? Were press officers under pressure to ‘sex up’ the research? Should journalists have been better ‘bullshit’ filters? That last phrase comes from a blog post by Mark Henderson, former science editor of The Times and now Head of Communications at the Wellcome Trust.
It’s heartening to see most of the discussion focus not on blame but on ways of improving things. In a BMJ editorial, Ben Goldacre, scourge of bad science and bad science reporting, suggests that press releases be tied more closely into the peer-review process that the papers they promote go through. That could be interesting.
What might it mean for the science press officer?
The last bullet point above is a good starting point. If exaggeration in press releases gains little or no extra coverage, why bother? Indeed, it seems that keeping the caveats in plain sight does no harm either.
That should be linked with professional integrity. Are you a public relations professional or a publicist?
Yes, there will always be the occasional outlying example. But if integrity wins trust from both the academics you work with and the journalists, mightn’t that pay off in the longer run?