Sunday-Morning Quarterbacking

Taking advantage of the time zones, Christine Neill emails from Canada with a spreadsheet comparing two predictors of the seat-by-seat results: Simon Jackman’s state-specific poll aggregation, and Simon’s last capture of Portlandbet’s seat-by-seat markets. With several seats still in play, here’s how Christine reads it:

I think the way to interpret these data is that the polls got 12 seats wrong, while the markets got 7 seats wrong (and the only seat that both got wrong was La Trobe, at least on preliminary results).

Update: Mark Coultan assesses the pollsters. My favourite bit:

The big loser was the Herald’s Nielsen poll, which predicted a landslide by a margin of 57 per cent to 43 per cent. The result was outside the range of the poll’s margin of error of 2.2 per cent.

The poll seriously overestimated the ALP’s primary vote. Nielsen also conducted a concurrent internet poll which showed the same result: 57-43.

Nielsen’s pollster, John Stirton, said the most likely reason was a rogue poll; the one-in-20 poll that statistically falls outside of the margin of error. This, he said, happened to all opinion polls, but it was just unfortunate that Nielsen’s had happened just before the election.

All perfectly plausible. But given how far the final poll was from the true result, and the fact that Nielsen’s previous two polls were similarly wrong, Simon Jackman persuasively argues that the rogue-poll explanation is implausible. In any case, hopefully it means that we won’t see any talk of ‘the maximum margin of error’ in the SMH over the next election cycle.
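To see why the rogue-poll defence is a stretch, it helps to put numbers on it. The sketch below (the sample size of ~2,000 is an assumption, back-solved from the quoted 2.2% margin; the actual Nielsen sample is not given here) checks the quoted margin of error and then asks how likely three consecutive rogue polls would be, if each poll independently had the usual 1-in-20 chance of falling outside its margin.

```python
import math

# A poll's 95% margin of error at p = 0.5, for an assumed sample size n.
# The quoted 2.2% margin implies roughly n = 2000 respondents (assumption).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(2000), 1))  # 2.2 (per cent)

# If each poll independently has a 1-in-20 chance of being "rogue"
# (falling outside its margin of error by bad luck alone), the chance
# that three polls in a row are all rogue is:
p_rogue = 0.05
print(round(p_rogue ** 3, 6))  # 0.000125, i.e. about 1 in 8,000
```

The independence assumption is generous to the pollster; if anything, correlated house effects make a run of similar misses look less like bad luck, not more.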

This entry was posted in Australian Politics. Bookmark the permalink.

2 Responses to Sunday-Morning Quarterbacking

  1. christine says:

    I should say: NOT how I was calling them. I relied on the AEC’s estimated two party preferred total as of 2am Australia time. Also, should be clear that all the ‘predictions’ come from Simon Jackman’s blog, and that the poll predictions were from his post on the effects of a state-by-state uniform swing.

    My basic take: the betting markets were better than a 5.9% uniform swing assumption on a seat-by-seat basis (except for Bennelong?), which really badly miscalled Queensland. But most of the pundits seem to have figured that out, and it was (I think) a point Simon made – that with a big swing on (as in Qld) you should probably be expecting that a couple of what seem like really long shots would come in.

    Also, whether the betting markets worked well or not doesn’t just depend on whether they called a seat right based on the 50% rule, but whether they on average got the overall probabilities right – so maybe they implicitly called one in three of the Qld long shots? But that’s work for someone who’s more than just a dabbler to deal with.
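Christine’s calibration point can be sketched in a few lines. The numbers below are hypothetical, not the actual Portlandbet prices: the point is that a market pricing several long shots at 1-in-3 will lose most of those seats under the 50% call rule while still being well calibrated, which a probability score like the Brier score (a standard calibration measure, not one used in the post) captures.

```python
# Mean squared error of predicted probabilities against 0/1 outcomes:
# lower is better, and well-calibrated probabilities beat overconfident
# binary calls on average.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical: the market prices three Qld long shots at 1-in-3 each,
# and exactly one of them falls. On the 50% rule the market "misses"
# two seats, yet its stated probabilities were right in aggregate.
probs = [1/3, 1/3, 1/3]
outcomes = [1, 0, 0]
print(round(brier_score(probs, outcomes), 3))  # 0.222
```

A forecaster who had called all three seats for the favourite with certainty (probabilities of 0) would score 1/3 here, worse than the market’s 2/9, despite getting two of the three calls right.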

  2. Pingback: 2007 Australian political elections: Polls vs Bookmakers | Midas Oracle .ORG

Comments are closed.