General election: Johnson appeals to Labour leavers with plan for more state aid for jobs after Brexit – live news


Q: What causes the excessive error in the forecasts of election results?

1. Sampling/computing errors?

2. Methodological errors (wrong assumptions in choosing samples)?

3. People not truly decided until the moment they mark the ballot paper?

4. People simply lying about how they are going to vote?

Have the four points above been quantified, in order to compute the correct tolerance error? Or is the tolerance error itself subject to a wild and unquantifiable error? Anthony, Maidstone

Pollsters say their results are accurate to within three percentage points – on either side – 95% of the time. That means that if the Conservatives are polling at 40%, the actual level of Tory support will be somewhere between 37% and 43% 19 times out of 20. When you think about it, that leaves a lot of room for error.
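As a rough illustration only (not how any individual pollster calculates its published tolerance), the ±3-point figure is close to the standard binomial margin of error for a sample of around 1,000 respondents. The sketch below, with an assumed sample size, shows the arithmetic behind the "37% to 43%" range.

```python
import math

def margin_of_error(share: float, sample_size: int, z: float = 1.96) -> float:
    """Standard binomial margin of error at roughly 95% confidence."""
    return z * math.sqrt(share * (1 - share) / sample_size)

# Assumed example: a party polling at 40% in a sample of 1,000 people.
share, n = 0.40, 1000
moe = margin_of_error(share, n)
print(f"Margin of error: +/- {moe * 100:.1f} points")  # roughly +/- 3.0
print(f"95% interval: {100 * (share - moe):.0f}% to {100 * (share + moe):.0f}%")  # about 37% to 43%
```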

Sampling errors have caused problems in the past when, despite their best efforts, pollsters have failed to build a sample properly representative of the British population – 2015 being one of the clearest examples. Estimating turnout by demographic group is probably harder still.

Lying is not considered much of an issue, but people do not always do what they say they will, or remember correctly how they voted last time (the latter creates a problem for building representative samples). More important are genuine, last-minute changes of opinion. There was a big movement to Labour in the final stages of the 2017 campaign, and because poll fieldwork lags by a couple of days, it was not properly picked up.


Q: How do the results of previous election-period opinion polls from the major agencies compare with what actually happened? It strikes me that the answer to this question, on an agency-by-agency basis, might give a hint towards showing any institutional bias within the pollsters’ organisations. Stephen, retired lecturer, Bexhill

Pollsters have made a variety of mistakes, although in their defence a movement of one or two percentage points can have a real impact on the final result.

In 2017, the Conservative vote share was overestimated and the Labour vote share underestimated by most pollsters, and rolling averages reflected this. Some final polls predicted a Tory lead of 13 points, when in fact the gap to Labour was 2.35 points. The 14-day rolling average predicted an eight-point lead. But Survation, for example, predicted a one-point gap.

Is there an anti-Labour bias, then? The reverse happened in 2015. Poll trackers had David Cameron’s Conservatives one point ahead going into the election, but the Tories won by 7.5 points in the end. So it is hard to see any long-term institutional bias.

In my view, it is best to focus on the polls taken in the last three to four days before an election, ignore 14-day rolling averages, and look for any evolution in the trend. The late move is the one that counts.
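For illustration only, using made-up daily poll leads, the sketch below shows how a long rolling average can mask a late swing that a short window over the final days of fieldwork would catch.

```python
# Hypothetical daily poll leads (Conservative minus Labour, in points) over two weeks.
# A late swing towards Labour shrinks the lead in the final few days.
leads = [8, 8, 9, 8, 7, 8, 7, 6, 6, 5, 4, 3, 3, 2]

rolling_14_day = sum(leads) / len(leads)   # long average smooths over the late move
last_3_days = sum(leads[-3:]) / 3          # short window picks it up

print(f"14-day rolling average lead: {rolling_14_day:.1f} points")
print(f"Average over final 3 days:   {last_3_days:.1f} points")
```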



