Some years ago – well, okay, close to 35 years ago – I started studying for O-level History.
For our first piece of homework we were given some facts and asked to write an essay that interpreted those facts. I was very pleased – smug, even – to be first to submit an essay. I reckoned that I’d done a first-rate job.
I got a C-minus.
I learned a very important lesson. My essay was well constructed. Good grammar. Good spelling. But its weakness was that it was a work of fiction.
Although I had explained the facts, I’d done so through imagination, not through analysis and interpretation. I was a better History student thereafter.
Today, I would say I learned not to fit a narrative to the data.
Fitting a narrative is something people do naturally. With incomplete data we fill in the gaps, judging whether or not we need to take action.
In early human history it might have been about whether or not we glimpsed a tiger in the shadows. Was it a dangerous predator, or were we just spooked by the chance alignment of sunlight and the movement of leaves in the wind?
Today, it might be instinctively braking and swerving as we drive home in the twilight. Was that someone at the side of the road, or just an odd-shaped tree in the gloom?
Data can give us clues to what is happening, but it doesn’t always tell us exactly what that is. Scientists work through this cycle all the time – observation, hypothesis, experiment, and back round again.
Fitting a story might not be too bad – if you remain open to gathering new data and, if necessary, changing your mind.
What is worse is when we start with the story and then look for – even invent – the data that supports it. Politicians do this a lot – it’s labelled policy-based evidence making.
And it can happen in PR, too. A client has a product or service that solves some pressing ‘problem’ in society. Its launch is imminent and media coverage is desired. The PR executive commissions a survey and, lo and behold, the data fits the story that the client wants told.
In fact, this should happen a lot. The client company should have done its market research, developing its product or service to meet a gap in the market either of need or desire. The PR-based survey should then, naturally, reflect the situation as it has already been established to be. It’s why I’d advise any PR practitioner to quiz a client on their market research before commissioning a survey.
But, I fear, that’s not always the case. An inspection of some of the stories on badpr.co.uk suggests I’m right. Leading questions, restricted sets of responses, forced choices, self-selecting respondents, tiny sub-samples. All can help provide the ‘evidence’ needed. It makes me wonder what market research was actually done in the first place.
I’m not too troubled when it’s made obvious that it’s “just a bit of fun”. I’ve taken part in my fair share of “which Harry Potter character are you” type surveys on Facebook.
But I wince when the results are presented and promoted as though they were the output of sound scientific research, supported by robust statistical analysis.
I wince even more because the money could have been better spent on genuine insights – ones that not only attract headlines but also remain an asset to the client for years to come, actually illuminating the potential customer base.
Perhaps there should be an award for the best use of data in public relations? Something to mark out the A-plus campaigns from the C-minuses?