Taking advantage of the time zones, Christine Neill emails from Canada with a spreadsheet comparing two predictors of the seat-by-seat results: Simon Jackman’s state-specific poll aggregation, and Simon’s last capture of Portlandbet’s seat-by-seat markets. With several seats still in play, the spreadsheet shows how Christine reads it.
I think the way to interpret these data is that the polls got 12 seats wrong, while the markets got 7 seats wrong (and the only seat that both got wrong was La Trobe, at least on preliminary results).
Update: Mark Coultan assesses the pollsters. My favourite bit:
The big loser was the Herald’s Nielsen poll, which predicted a landslide by a margin of 57 per cent to 43 per cent. The result was outside the range of the poll’s margin of error of 2.2 per cent.
The poll seriously overestimated the ALP’s primary vote. Nielsen also conducted a concurrent internet poll which showed the same result: 57-43.
Nielsen’s pollster, John Stirton, said the most likely reason was a rogue poll; the one-in-20 poll that statistically falls outside of the margin of error. This, he said, happened to all opinion polls, but it was just unfortunate that Nielsen’s had happened just before the election.
All perfectly plausible. Yet given how far the poll was from the true result, and the fact that Nielsen’s last two polls were similarly wrong, Simon Jackman persuasively argues that the rogue-poll explanation is implausible. In any case, hopefully it means that we won’t see any talk of ‘the maximum margin of error’ in the SMH over the next election cycle.
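To see why the rogue-poll defence strains credulity, it helps to make the arithmetic explicit. The sketch below (my own illustration, not from the original post) back-solves the sample size implied by a 2.2 per cent margin of error at 95 per cent confidence, and then notes that if each poll independently has a 1-in-20 chance of falling outside that margin, the chance of two consecutive polls both doing so is only 1 in 400.

```python
from statistics import NormalDist

# 95% confidence corresponds to roughly 1.96 standard errors
z = NormalDist().inv_cdf(0.975)

def margin_of_error(n, p=0.5):
    """Half-width of a 95% CI for a proportion p from a simple random sample of n."""
    return z * (p * (1 - p) / n) ** 0.5

# Sample size implied by the quoted 2.2% margin of error (worst case, p = 0.5)
implied_n = round((z / 0.022) ** 2 * 0.25)
print(implied_n)                      # roughly 2,000 respondents

# The "rogue poll" defence: any single poll misses the MoE 1 time in 20.
# But two consecutive misses, if the polls were independent, is far rarer.
p_rogue = 0.05
p_two_in_a_row = p_rogue ** 2
print(p_two_in_a_row)                 # 0.0025, i.e. 1 in 400
```

This is only a back-of-the-envelope check under textbook simple-random-sampling assumptions; real polls have design effects that widen the true margin. Still, it makes the point that repeated misses in the same direction look more like systematic error than bad luck.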