British Politics

Did polls help bring in the Tories?

By admin

January 19, 2016

As voters trudged to polling stations on 7th May 2015, they knew they were participating in the UK’s largest exercise in practical democracy by freely casting their vote for the political party of their choice. Now it seems that they may have been under the influence of misguided – or worse – political pollsters.

As the General Election campaign progressed, opinion polls were predicting that support for the Lib Dems would nosedive, support for the Scottish National Party (SNP) would soar – and that there would be a hung Parliament in the UK overall. The fact that they were so right on the first two points has been overshadowed by their spectacular error on the third. It was the unpredicted Tory victory that led to a commission of academics being set up to examine why the pollsters had got it so wrong.

Their report is out today and concludes that the pollsters were questioning an “unrepresentative” group of respondents – in other words, they spoke to the wrong people. The group was unrepresentative because it relied too heavily on people answering phone and online polls; the report points out that these methods encouraged more young people to respond and therefore, as young people are more likely to vote Labour, they influenced the poll results disproportionately. There are also suggestions that the younger Labour voters who were more likely to respond to polling organisations were in fact less likely to go out and cast their vote, and that the polling organisations failed to take this into account.
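
To make that turnout point concrete, here is a minimal sketch in Python, using entirely invented numbers rather than anything from the report: if younger respondents are over-represented in a sample and less likely to actually vote, an unweighted poll overstates the Labour share, and weighting each respondent by an assumed probability of voting pulls the headline figures back the other way.

```python
# A sketch (made-up numbers) of the turnout-weighting issue: an over-sampled,
# low-turnout group inflates its preferred party's share in the raw poll.

# Hypothetical sample rows: (age_group, stated_vote, count) - illustrative only.
sample = [
    ("18-34", "Labour", 220),
    ("18-34", "Conservative", 110),
    ("35+",   "Labour", 270),
    ("35+",   "Conservative", 400),
]

# Assumed probabilities that a respondent in each group actually turns out.
turnout = {"18-34": 0.55, "35+": 0.80}

def shares(rows, weight=lambda group: 1.0):
    """Return each party's percentage share, weighting each row by group."""
    totals = {}
    for group, party, count in rows:
        totals[party] = totals.get(party, 0.0) + count * weight(group)
    grand = sum(totals.values())
    return {party: round(100 * v / grand, 1) for party, v in totals.items()}

print("Unweighted:      ", shares(sample))                          # Labour looks stronger
print("Turnout-weighted:", shares(sample, weight=lambda g: turnout[g]))  # Labour share falls
```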

There are two other findings. It was suggested that the polls were wrong because there was a late surge of voters deciding to back the Tories at the last minute, too late for the polls to detect; the report concludes that any late surge which may have occurred was too small to have affected the result. It also concluded that there may have been some “herding” of polls: once the first polls started predicting a hung Parliament, subsequent ones tended to reach similar conclusions, as no polling organisation wanted to stand out and look different.

If the reasons for the wrong prediction have now been brought into the open, what remains hidden is any analysis of the consequences it had.

Voters in Scotland were constantly being told that the SNP were riding high, would sweep the board and win hands down. How much did this encourage voters to jump on the winning bandwagon, to go out and vote SNP so that they could be part of a success that was apparently inevitable? How much did it discourage voters from turning out for other parties, given that their votes would apparently be wasted because the SNP was so sure to win?

Voters in England were constantly being told that the Tories were likely to be the largest party but would just fall short of winning an overall majority. How much did this encourage diffident voters to go out and give the Tories that final push towards a majority? How much did it encourage committed Tories to make sure they went out and voted, given that the race was so close that their vote really mattered?

One point which seems not to have been explored by the report is whether polling organisations took proper account of key constituencies in their assessment of the vote. Polls reported during the election campaign concentrated on predicting the national share of the vote for each party and then forecasting the General Election outcome from the parties’ relative standing. Yet the national share of the vote matters far less at the election itself, where a very small number of votes in a small number of constituencies can disproportionately affect the result. In a first past the post system, the outcome of a General Election can be very different from what the national share of the vote would suggest, as the toy example below illustrates.
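
The snippet below is a toy illustration with invented figures, not data from the 2015 election: it shows how a party can lead comfortably on total votes yet win fewer seats under first past the post, because its support is piled up in a handful of safe constituencies while its rival wins more seats by narrow margins.

```python
# Toy example (invented numbers): first past the post can decouple seats
# from the national vote. Party A racks up huge majorities in two seats;
# Party B wins three seats narrowly and takes the most seats overall.

constituencies = [
    {"A": 30000, "B": 10000},  # A wins big
    {"A": 28000, "B": 12000},  # A wins big
    {"A": 19000, "B": 21000},  # B wins narrowly
    {"A": 19500, "B": 20500},  # B wins narrowly
    {"A": 19800, "B": 20200},  # B wins narrowly
]

votes = {"A": 0, "B": 0}
seats = {"A": 0, "B": 0}
for result in constituencies:
    for party, v in result.items():
        votes[party] += v
    winner = max(result, key=result.get)  # most votes in the constituency takes the seat
    seats[winner] += 1

print("National votes:", votes)  # A leads on total votes...
print("Seats won:     ", seats)  # ...but B wins more seats
```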

 
