How to draw a political bar chart – part 2

In ‘How to draw a political bar chart – part 1’ I looked at some of the ways political bar charts are drawn by campaigners to put their party’s prospects in the best light.

This was on the basis of votes at the last General Election. But what can be done if those aren’t ‘helpful’?

Say the results last time were:

Party 1 16,000
Party 2 12,000
Party 3 7,000
Party 4 3,000
Party 5 2,000

Party 3 is a strong third, but it probably doesn’t want to show that it was third last time.

So what else might it use?

The results in local authority elections are commonly used. Quite often the most recent were held in a different (more recent) year from the General Election. Opposition parties often do better in local elections than governing parties. Parties can also do better in percentage terms because the turnout in local elections is often lower than in general elections.

(It is interesting to note the effects of voting systems on council results. In England and Wales, local authorities use the first-past-the-post system; Northern Ireland and Scotland use the Single Transferable Vote system.)

So, look out for bar charts based on:

  • votes cast at the most recent local election for each party in the wards or divisions in the constituency
  • numbers of councillors in the constituency for each party

But what might Party 3 do if these are still not ‘helpful’? Remember, Party 3 aims to show that it is one of just two ‘horses’ in the election race.

What if the constituency is one of a number within one council area? Party 3 might be second across the whole council in terms of votes or councillors. If so, then you might see bar charts based on the whole area.

How about the European elections? These are counted at local authority level. Did Party 3 (or even Party 4 or 5) come first or second?

In Scotland, Wales and Northern Ireland there are devolved administrations. Perhaps these provide Party 3 with ‘helpful’ results?

You might even see a recent council by-election – not necessarily in the constituency itself – quoted!

When a ‘helpful’ result is unavailable, parties need to look even further. It’s not uncommon to see bar charts based on the national opinion polls, or even what a party is finding in its own canvassing of electors. (I shall blog on the uncertainties and vagaries of canvassing at another time.)

Whatever the parties use, look out for the same presentation techniques as set out in Part 1 of this guide.

Next time – political bar charts, Big Mo and the first derivative!

NB: none of the above should be seen as endorsing or criticising any particular technique.

How to draw a political bar chart – part 1

No election leaflet worth its salt will come without its very own bar chart to help voters make up their minds. But these are no ordinary bar charts generated by statistical software!

We use the first-past-the-post system to elect our Members of Parliament in Westminster. The candidate with the highest number of votes wins – regardless of whether that is more than half of the votes cast.

That encourages tactical voting. Voters decide not to back their preferred candidate but vote for a candidate from the party most likely to beat its main challenger.

But which party is best placed to do that? This is where the political ‘bar chart’ comes in. Parties use them to show that they – and only they – can beat another party. Of course, it may only be a matter of their opinion.

Let’s have a look at the most common situation – Party 2 came second last time to Party 1. It wants to persuade supporters of some or all of Parties 3, 4 and 5 to back Party 2 in order to beat Party 1.

Say the votes last time were these:

Party 1 16,000
Party 2 12,000
Party 3 7,000
Party 4 3,000
Party 5 2,000

A straightforward bar chart would have columns with heights in proportion to the votes cast, like this:


Quite often the bar chart will only have the top three parties.

Now, Party 2 would like to look closer to Party 1. It could just adjust the height of its own column, like this:


But that is a bit obvious. An alternative is to chop the bottom off all the columns (known as truncating the y-axis):


There is a more subtle way to do this – cover most of the original baseline with a box of text carrying some message, like this:


But Party 2 might still not feel this looks close enough. What it can do is increase the perceived height of its column by repositioning its label to above the column:


Sometimes, to give an impression of momentum an arrow is added on top of the column (note that this is in addition to the height of the column itself), like so:


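The distortion these tricks produce can be quantified. Here is a minimal sketch in Python – the votes are the figures above, but the 6,000-vote baseline is my own invented choice, not from any real leaflet – showing how truncating the y-axis changes the apparent ratios between bars:

```python
votes = {"Party 1": 16_000, "Party 2": 12_000, "Party 3": 7_000}

def apparent_heights(votes, baseline=0):
    """Bar heights as drawn when the y-axis starts at `baseline`."""
    return {party: v - baseline for party, v in votes.items()}

honest = apparent_heights(votes)             # axis starts at zero
truncated = apparent_heights(votes, 6_000)   # bottom chopped off

# With an honest axis, Party 3's bar is 7/16ths the height of Party 1's.
print(honest["Party 3"] / honest["Party 1"])       # 0.4375
# With the axis truncated at 6,000 votes it shrinks to a tenth –
# visually squeezing the third 'horse' out of the race.
print(truncated["Party 3"] / truncated["Party 1"]) # 0.1
```

Note that truncation never changes the order of the bars – only how dramatic the gaps between them look.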
Next in the series – what parties use when the votes at the last election are ‘unhelpful’.

If you’ve seen a bar chart that is more than a little creative, please get in touch.

NB: none of the above should be seen as endorsing or criticising any particular technique.

The not-quite-debate – who won the fact check battle?

We’ve had the first skirmishes of the General Election – and there’s no clear winner or loser.

I’m not talking about Cameron or Miliband but the factcheckers who checked, in real time, each statistical utterance in tonight’s interviews with Jeremy Paxman and the audience question-and-answer sessions.

These are the organisations getting involved that I spotted in my Twitter feed:

As I saw it, they were all doing well. Full Fact and FactCheck, in particular, seemed to be going flat out … at one point absolutely neck and neck as this Twitter screengrab shows.


If you’re interested in following up these factchecks then you’ll find them at FactCheck: are you richer or poorer than five years ago and Full Fact: Four factchecks from the budget. It’s interesting to see that even factcheckers – without political battles to fight – can take slightly different approaches.

There are still six weeks till polling day on 7 May 2015. The outcome of the election may be uncertain. What does seem certain, though, is that this is going to be the most factchecked election ever!

Just how wrong is wrong?

Declaration of interest: In this post I talk about research done by polling company Ipsos MORI, which builds upon work done previously with the Royal Statistical Society on public understanding of statistics on key social issues. I am a member of the Royal Statistical Society and from 2005 till July 2014 was a member of its staff.

UPDATE 2 November 2014: This post has been updated to reflect information in Ipsos MORI’s technical note on the calculation of the index of ignorance.

Question: which country’s people are most ignorant of key numbers about their society?

Polling company Ipsos MORI has an answer. It’s Italy.

They have calculated an “Index of Ignorance” for fourteen countries based on the accuracy of responses to nine questions about society.

First, let me be clear – this is interesting research and useful to some extent in the debates around a range of public policy issues. For those of us in the UK going to the polls on 7 May 2015 – the UK General Election – it is particularly important.

But we should be cautious.

Here’s Ipsos MORI’s full ranking – their ‘least accurate’ first.

Position Country
1st Italy
2nd USA
3rd South Korea
4th Poland
5th Hungary
6th France
7th Canada
8th Belgium
9th Australia
10th Great Britain
11th Spain
12th Japan
13th Germany
14th Sweden

How sure of this can we be?

The answer is… who knows?

The problem with any ranking – as sports fans will know – is that it says nothing about how big the differences between the underlying figures are.

So, it could be that Italy is a lot more ‘ignorant’ than the USA, but Sweden just pips Germany to most knowledgeable.
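A quick sketch makes the point – the ‘error scores’ below are entirely invented for illustration, not Ipsos MORI’s figures. Two wildly different sets of scores can produce exactly the same ranking:

```python
# Invented 'error scores' (higher = less accurate); NOT real data.
big_gaps  = {"Italy": 90, "USA": 50, "Sweden": 10}
tiny_gaps = {"Italy": 51, "USA": 50, "Sweden": 49}

def ranking(scores):
    """Countries ordered from least to most accurate."""
    return sorted(scores, key=scores.get, reverse=True)

# Both produce the identical ranking, despite very different gaps.
print(ranking(big_gaps))   # ['Italy', 'USA', 'Sweden']
print(ranking(tiny_gaps))  # ['Italy', 'USA', 'Sweden']
```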

More uncertainty comes from how the polling was done – who was asked and what the sample size was.

Here’s another table…

Country Who How many
Italy 16-64 1,000
USA 18-64 1,000
South Korea 16-64 500
Poland 16-64 500
Hungary 16-64 500
France 16-64 1,000
Canada 18-64 1,000
Belgium 16-64 500
Australia 16-64 1,000
Great Britain 16-64 1,000
Spain 16-64 1,000
Japan 16-64 1,000
Germany 16-64 1,000
Sweden 16-64 500

NB: the figures for ‘how many’ are approximate. Ipsos MORI give either 500+ or 1,000+.

Now, any pollster will caution that the smaller the sample size, the greater the uncertainty – aka the margin of error. For a ‘perfect’ sample survey, this figure is a little under 4.5 percentage points either way for a sample of 500; a tad over 3 percentage points either way for a sample of 1,000. This is for the usual 95 per cent confidence level.

For underlying reasons about survey statistics, the uncertainty figures won’t be quite 3 or 4.5 percentage points. But they are a good guide.
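Those figures are easy to reproduce. Here is the standard 95 per cent margin-of-error calculation for a simple random sample, taken at the worst case of a 50 per cent response:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(500), 1))   # 4.4 – a little under 4.5 points
print(round(margin_of_error(1000), 1))  # 3.1 – a tad over 3 points
```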

This means the ‘true’ figure for any particular country on any particular question is most likely to be in a range of 6 percentage points or 9 percentage points, depending on sample size.

So the ‘index of ignorance’ has been calculated from numbers that could readily be 3 or 4.5 percentage points out for each of the nine questions.

In addition, it has been done by taking the mean average of the individual estimates. The problem with the mean is that any response that is particularly wrong can have a disproportionate effect. One way to think of this is to imagine you and a group of friends at a meeting. The mean income of the group will be massively increased if Bill Gates joins the group (unless Bill is a friend already and was in the room!).
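The Bill Gates effect is easy to demonstrate. A sketch with hypothetical incomes (invented figures), comparing the mean with the median – which is far more resistant to one extreme value:

```python
# Hypothetical incomes, in pounds, of five friends in a room.
incomes = [20_000, 25_000, 30_000, 35_000, 40_000]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

print(mean(incomes), median(incomes))  # 30000.0 30000

# Bill Gates (say, a billion pounds a year) walks in...
incomes.append(1_000_000_000)
print(round(mean(incomes)))  # 166691667 – the mean explodes
print(median(incomes))       # 32500.0  – the median barely moves
```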

Bear in mind also that for two countries (the USA and Canada) those polled were aged 18 or above, while the rest were 16 plus. And it’s not clear what the uncertainty in the ‘correct’ figure for each country is.

This suggests that the ranking can at best be treated as ‘just a bit of fun’.

At worst, though, public perception may be that it is ‘official’ that Italians are more ignorant than those of thirteen other countries.

Apparently, if the English Premier League – a ranking system with some certainty – can’t be settled on points then it goes to goal difference, and then goals scored. If there is still no winner then there’s a play-off.

Thankfully, Great Britain (10th) and Germany (13th) are ranked three places apart. Otherwise, we’d likely lose on penalties after extra time in the World Ignorance Cup!

NB how ignorant are you? Take the Ipsos MORI quiz for yourself.


Turnout matters: No secures 46.7% and Scotland remains in the Union

So, now we know.

With 46.7%, the noes have it, the noes have it. Scotland is to remain within the United Kingdom.

Hang on… No got 55.3% of the votes cast, not 46.7%!

That’s true. But as with all percentage figures, an important question to ask is ‘what is this a percentage of’?

According to the official referendum count web site, there were 2,001,926 No votes. That is, indeed, 55.3% of the 3,619,915 votes cast for both Yes and No in total.

However, not every elector turned out. Scotland saw a massive turnout – 84.6%. This is a record high turnout for Scotland, as fact-checking organisation Full Fact (and many others) have noted.  The electorate for this referendum was 4,283,392. And the No vote is 46.7% of this figure.
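The arithmetic, using the counts quoted above, is a one-liner each way:

```python
no_votes    = 2_001_926  # official No count
valid_votes = 3_619_915  # Yes + No votes cast
electorate  = 4_283_392  # everyone eligible to vote

# Same numerator, two different denominators.
print(round(100 * no_votes / valid_votes, 1))  # 55.3 – share of votes cast
print(round(100 * no_votes / electorate, 1))   # 46.7 – share of the electorate
```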

Oh, I see. Isn’t that just statistical pedantry?

Well, yes and no…

Aha – I see what you did there! Carry on…

Let’s look at a different election. On 21 August 2014, there was a by-election for a Police and Crime Commissioner for the West Midlands region.

The winning candidate – Labour’s David Jamieson – got 102,561 first preference votes under the supplementary vote system used in these elections. That’s 50.8% of first preferences. The second placed Conservative candidate got 54,091.

OK… a lower percentage than No got in Scotland. But a win’s a win, surely?

Indeed, it is. But the turnout was a dismal 10.4%.

The electorate was nearly two million – not much less than half that for the Scottish referendum.

Yet barely one in ten eligible voters engaged in making a decision. The newly elected commissioner has a mandate from not much more than one in twenty of the people to whom he is accountable.
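That ‘one in twenty’ follows directly from the two percentages quoted:

```python
turnout      = 0.104  # share of the electorate that voted
winner_share = 0.508  # winner's share of first preferences

# Share of the whole electorate who gave the winner a first preference.
mandate = turnout * winner_share
print(round(100 * mandate, 1))  # 5.3 – roughly one elector in twenty
```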

Imagine if the Scottish referendum had seen a similar turnout, with No still winning. Would this have settled the question for a generation?

Hmm… I doubt it very much! The Yes side would hope to get more of their supporters out in a re-run.

Precisely. Not only has No won the democratic test by getting more than half of the votes cast, it did so by getting nearly half of all votes that could possibly have been cast. It is hard to see how increasing the turnout would have seen a different decision.

Turnout matters. And in this respect all those who voted in Scotland – whether Yes or No supporting – are on the winning side.