Poll dancing

In a discussion about whether journalism students should know more about economics, Mark Bahnisch said this morning: “I think one thing aspiring journos need is a grasp of social and behavioural statistics and how to interpret them.” And as if proving his point comes the front page from today’s Sydney Morning Herald and Age.

Both papers give plenty of column space to a “lift” in Labor’s poll performance. Their evidence is two polls of about 1400 voters. On 20-23 April, AC Nielsen found Labor had 51% of the two-party preferred vote. On 18-20 May, they found that Labor had 54% of the two-party preferred vote.

The kicker is in the fine print. If you believe that the polls were afflicted only with sampling error*, then both polls had a 95% error margin of 2.6%. So the right way to interpret the two most recent AC Nielsen polls is:

  • 20-23 April poll: There is a 95% chance that Labor’s two-party preferred vote was between 48.4% and 53.6%
  • 18-20 May poll: There is a 95% chance that Labor’s two-party preferred vote was between 51.4% and 56.6%

Notice the overlap? Alternatively, we can do a formal test for equality of means, which shows that with samples of 1400 voters, the difference is significant only at the 11% level: if there had been no real movement, a gap this large would arise from sampling alone about 11% of the time. Most social scientists regard movements that are not significant at the 5% level with a healthy dose of scepticism.
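For anyone who wants to check the arithmetic, here is a rough sketch in Python, using the sample sizes and percentages quoted above and the standard normal approximation (the helper names are mine, purely for illustration):

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

def two_prop_p_value(p1, p2, n1, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# The 2.6% error margin from the fine print (worst case, p = 0.5, n = 1400):
print(round(moe(0.5, 1400), 3))   # 0.026

# Testing the 51% vs 54% movement across two polls of 1400:
print(round(two_prop_p_value(0.51, 0.54, 1400, 1400), 2))   # 0.11
```

The pooled standard error is the textbook one for comparing two independent sample proportions; it reproduces both the 2.6% margin and the 11% significance level quoted in the post.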

Do journos apply a similar approach? I would hope so. For example, I’d hope that Australian political editors would not put a tip on their front page if their source told them that there was an 11% chance it was wrong. Which means they might consider being a little more circumspect when spruiking badly-measured polls.

Not playing fast and loose with statistics probably sells fewer papers. But that’s true of all kinds of misleading things one could do in journalism. And after all, the profession has its own ethical standards. The Australian Journalists’ Association Code of Ethics begins:

Report and interpret honestly, striving for accuracy, fairness and disclosure of all essential facts. Do not suppress relevant available facts, or give distorting emphasis.

I don’t think today’s reporting meets that standard. 

* This is being very generous to the pollsters. Justin Wolfers and I argue in a paper forthcoming in the Economic Record that the polls are much too volatile to be afflicted only by sampling error. In our view, the true standard error of the poll could be several times larger.

Update: Of course, if there really was a shift, then Guy and Mark can tell you why.

This entry was posted in Australian Politics, Media.

19 Responses to Poll dancing

  1. A week before this poll, both Newspoll and Morgan Poll had the parties level on a 2-party preferred basis. Given that nothing much new happened in the following week, the wide margin reported in this poll is very unlikely to be accurate.

  2. Bryan says:

    The theory of sampling error assumes the items in the sample are all definite. In the sampling of voting preference, this would be true for some of the sample, but not others (ie, at any point in time there is a probability an individual will say/vote Labor, and a corresponding probability they will say/vote Coalition). In effect, the response from these people in the sample is indefinite and changeable. The consequence is that the sampling error is significantly widened for opinion polls, when it comes to making a national prediction of how the nation would vote at a particular point in time.

  3. Bring Back EP at LP says:

    all valid and true Andrew L.

    I was quite struck by the lack of understanding of basic statistics in both parties when it came to polling in my youth and then middle age.

I remember asking a mate of mine what use a daily poll of 400-500 was to Howard in ’96 when the margin of error was nearly always greater than the swing.

    People want simple results.

    AndrewN it could be the other two have rogue polls.
    Has happened previously!!

  4. Sinclair Davidson says:

    I think you’re being a bit hard on the journos. I reckon if we set this as a question in an undergrad stats exam, many students would not answer correctly. Sure, there’s always room for improvement …

  5. derrida derider says:

    I don’t think so, Sinclair – anyone who’s passed Stats 101 would understand standard errors and hypothesis testing. Even if they couldn’t calculate that 11% for themselves, they’d understand the point about the overlapping ranges.

  6. Sinclair Davidson says:

    DD – I hope you’re right and I’m wrong.

  7. Michael G says:

Nothing much has happened in the last week, but I think quite a few things have sunk into people’s consciousness and perhaps crystallised into a change in voting intentions. I guess this is another shortcoming of polls: always slightly behind the times.

  8. Guy says:

    Good post – a point oft missed.

  9. Matt Cowgill says:

    A friend of mine who works for The West Australian reckons that a Bachelor’s degree in a particular field (Pol Sci, Economics, Science, etc) + a Grad Dip in Journalism is usually valued higher than a major in Journalism or Communications. The downside is that people following the GradDip route usually make fewer ‘contacts’ and have a harder time getting cadetships.

Strangely undervalued in these days of scientific language (but not principles) is just getting out amongst people and talking to them, watching them, listening to them, arguing with them and learning, holding a wetted finger to the wind.

    You can learn a lot by poking something with a stick and seeing what happens.

    There seems to be a preferred tendency to use science-y type words and processes around crap data or wrong questions.

  11. Bruce Bradbury says:

    Do journos apply a similar approach? I would hope so. For example, I’d hope that Australian political editors would not put a tip on their front page if their source told them that there was an 11% chance it was wrong.

Well, if you put it that way, it’s not surprising at all!

  12. I’m guessing this sort of thing happens a lot?

  13. Andrew Leigh says:

    Sukrit, yes, you’re right. It’s even worse in elections, when they write stories based on 1% movements.

  14. Mark G says:

    This is more than just academic purism. If people are making critical decisions off stats, then the statistical reporting needs to be strong enough to support those decisions.

  15. Stephen L says:

I share your frustration with the bad media commentary on these things, but the cure is not obvious. Polls can be useful if you stick a lot of them together – combining the three major polls reduces the error at any one time, and if you’re willing to wait a bit for several rounds of polling you can get a sample which is statistically fairly reliable, although it doesn’t eliminate the other problems.

However, if it were not for the face-value interpretations put on these polls, would polling actually be done? If journalists reported “A two-percent swing recorded by the poll released today is well within the margin of error and most likely means nothing at all”, then it is unlikely that Morgan would release its polls, or that anyone would bother paying for AC Nielsen polls to be conducted. In which case we’d never get the number of polls needed to actually give us a useful combined sample.

    The real problem is when backbenchers in a party start to believe the nonsense interpretations journalists put on a poll, and dump a leader because of a bad result that may just be statistical noise – the problem of lack of statistical understanding amongst journalists is as nothing to the lack amongst MPs.
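The square-root arithmetic behind combining polls can be sketched in a few lines, assuming independent polls of equal size and ignoring the house effects and other problems the comment rightly notes don’t go away:

```python
import math

def pooled_moe(n_per_poll, k, p=0.5, z=1.96):
    """95% margin of error after pooling k independent equal-sized polls."""
    return z * math.sqrt(p * (1 - p) / (n_per_poll * k))

for k in (1, 2, 3):
    print(k, round(pooled_moe(1400, k), 3))
# 1 0.026
# 2 0.019
# 3 0.015
```

Pooling the three major polls roughly cuts the margin of error from 2.6% to 1.5%, which is why combined samples are more informative than any single poll.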

  16. Mark G says:

I’ll quote from the Society of Professional Journalists code of ethics (www.spj.org/ethics_code.asp). It’s line 1, under the principle “Seek truth and report it”:

    Journalists should:

    Test the accuracy of information from all sources and exercise care to avoid inadvertent error. Deliberate distortion is never permissible.

    Then four lines down:

    Make certain that headlines, news teases and promotional material, photos, video, audio, graphics, sound bites and quotations do not misrepresent. They should not oversimplify or highlight incidents out of context.

    Presumably this places an obligation on editors and sub-editors to ensure that the staff they send to cover a story have adequate skills to understand and interpret the story that they cover. It’s true in business journalism and scientific journalism, and should be true in political and social journalism. In no sense should the survival of the pollster companies justify news beat-ups.

  17. Bring Back EP at LP says:

    of course if they correctly wrote the article would anyone read it?

I remember e-mailing the RN breakfast program when Fran Kelly, then political correspondent, reported during a campaign that there was a swing of 2% to either the Libs or the ALP, even though this was under the margin of error.

    She has kept on repeating such errors.

this topic is an old chestnut, and frankly, we’re screwed here. even if journos were given a good year-long background in statistics, journos’ interest in “news value” runs counter to social-scientific norms (e.g., cautiously rejecting null hypotheses only at p < .05). the quantity of interest is something like the probability that the ALP 2PP today exceeds the ALP 2PP yesterday, conditional on the polling data. Of course, reporting that this probability is, say, .15 will go right over the heads of journos and their readers, who love to be able to splash “a bounce” in today’s edition etc. The imperative to fill column inches (or dare I say, blogs) runs counter to what we do in our publications…

    I canvassed some of these issues in a piece in the AusJPS V40(4), December 2005 (“Pooling the Polls Over an Election Campaign”), looking at “house effects” (biases specific to survey companies).

the other point here is that even the stated sampling error given in the fine print is usually an underestimate, due to “design effects” (e.g., the fact that the sampling was done by clustering/multi-stage sampling, not literally an equi-weighted, random probability sample of the entire electorate). yet, almost everyone in the polling industry computes/reports standard errors using the textbook formulae that presume a nice random sample of the electorate. then there is the question of how you elicit the 2PP (two-party preferred) proportion in a survey context, which, frankly, is an underexplored area (only a little bit of work on this by Peter Brent of Mumble fame), but a source of tremendous variation across survey houses and additional-yet-unacknowledged uncertainty. put all this together and, yeah, the “margin of error” around a published 2PP number is probably going to be considerably bigger than that reported even in the fine print.

and by the way, it’s not that polls inherently suck, just that we (social scientists) can and ought to do better with the info they give us.
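The design-effect point can be put in rough numbers: the effective standard error scales with the square root of the design effect, so even a modest deff noticeably widens the interval (the deff of 2 below is purely illustrative, not a figure from any pollster):

```python
import math

def effective_moe(p, n, deff, z=1.96):
    """95% margin of error inflated by a design effect (deff >= 1)."""
    return z * math.sqrt(deff * p * (1 - p) / n)

print(round(effective_moe(0.5, 1400, 1.0), 3))  # 0.026  (textbook fine print)
print(round(effective_moe(0.5, 1400, 2.0), 3))  # 0.037  (with illustrative deff = 2)
```

Under that illustrative design effect, the “±2.6%” in the fine print would really be closer to ±3.7%, which is the sense in which the true margin of error is “considerably bigger”.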

  19. Pingback: Spinning a narrative « Criticality

Comments are closed.