Hard News by Russell Brown

Hard News: Poll Day 2: Queasy

105 Responses

  • David Hood,

    I was just reading Tim Harford's highly accessible piece on "big data" in FT Magazine; while it is nominally about big data, it includes good discussions of polling biases and other media-stats-relevant issues:
    http://www.ft.com/intl/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html#axzz2xRsgyMWe

    Dunedin • Since May 2007 • 1445 posts Report

  • Russell Brown,

    Just had a good conversation with Patrick Gower. Assuming that 2Talk hasn't failed to record the phone call, I'll write it up in some detail.

    Auckland • Since Nov 2006 • 22850 posts Report

  • Sofie Bribiesca, in reply to Russell Brown,

    I’ll write it up in some detail.

    Some?

    here and there. • Since Nov 2007 • 6796 posts Report

  • BenWilson, in reply to David Hood,

    I tend to think the crowd-sourced corpora tend to work better for assembling facts than assessing opinion, as they are too susceptible to where the crowd is coming from.

    Yes, I'd agree. I guess there's a blurry line between fact and opinion when it comes to "did he show negative body language?", "does the article disparage her?", etc. Break it down to sentence or expression counts, and maybe it's more fact based, so "how many times did he frown/shake head/interrupt?", "how many sentences were disparaging?". But of course it's also more work.

    Well, you must have plenty of time on your hands, being a student and all.

    Full time student, full time caregiver. I could probably give up the time I spend squeezing out a post between study stints, just so I could not just watch the news, but actually pore over it, counting Gower's wry smiles. $10 would just be the icing on the cake!

    Auckland • Since Nov 2006 • 10657 posts Report

  • cindy baxter,

    Coming into this conversation very late, but did anyone else listen to Media Watch on Sunday? A good piece about polling and the new Code of Conduct for media around reporting on polls, which I don't think TV3 has followed with its own poll.

    But also an interview with Gavin White of the Say It blog, who has gone all the way back to 1999 and looked at what the polls said vs what actually happened.

    16 had National too high, while 3 had them too low. The most any company had underestimated National's vote by was 2%, while the most a company had overestimated National's vote by was 9%. One poll has had National's vote above their actual vote by more than the margin of error at three of the last five elections.

    auckland • Since Nov 2006 • 102 posts Report

  • Bart Janssen, in reply to cindy baxter,

    more than the margin of error

    arggh I am not a statistician but this phrase drives me nuts.

    The number quoted by all the news media as a margin of error is a half arsed translation of a statistical measure. If I understand it correctly, what they quote is the 95% confidence interval IF a party had 50% of the vote.

    Even if the party had 50% of the vote that means by chance once in every 20 surveys the number would be wrong by more than the "margin of error" and it is entirely possible, although unlikely, for multiple surveys in a row to be wrong by that much or more.

    If a party has anything other than 50% the number quoted is worse than meaningless because it implies there is some meaning where there is not.

    Personally I'd be quite happy if political surveys were banned altogether, and if they must be done then those reporting them need to learn to use correct language when quoting the numbers, and anyone caught using sloppy language could be ...
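
    Bart's point can be sketched numerically (a rough illustration of my own; the sample size of 1,000 and the simulation are assumptions, not taken from any actual poll report):

    ```python
    import math
    import random

    # The "margin of error" quoted in poll reports: half the width of the
    # 95% confidence interval for a sample proportion p with sample size n.
    def margin_of_error(p, n, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    n = 1000
    moe = margin_of_error(0.50, n)
    print(f"Quoted MOE at 50%: {moe:.1%}")  # about 3.1%

    # Simulate repeated polls of a party whose true support really is 50%:
    # roughly 1 poll in 20 lands outside the quoted margin of error.
    random.seed(1)
    polls = 2000
    misses = sum(
        abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5) > moe
        for _ in range(polls)
    )
    print(f"Fraction of polls outside the MOE: {misses / polls:.3f}")  # near 0.05
    ```

    And at, say, 5% support the same formula gives only about ±1.4%, which is why quoting the 50% figure for a small party overstates the uncertainty.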

    Auckland • Since Nov 2006 • 4461 posts Report

  • Sofie Bribiesca, in reply to Bart Janssen,

    Personally I’d be quite happy if political surveys were banned altogether, and if they must be done then those reporting them need to learn to use correct language when quoting the numbers, and anyone caught using sloppy language could be …

    ...fired after explaining why they used the wrong language. I think people who use words, written or spoken, for a job need to do their jobs properly, or we may as well live in a place of sloppy....

    But also. I want facts. I want to learn truth and have knowledge. I want to expand my brain, my awareness, I don't like bullshit. I hate that bullshit is becoming a staple diet of information.

    here and there. • Since Nov 2007 • 6796 posts Report

  • Pete George, in reply to cindy baxter,

    cindy, Gavin's research is interesting but a couple of points on it.

    Polls don't estimate what the vote will be on election day; they try to accurately measure public opinion (with, as Bart says, statistical margins of error at certain confidence levels) on the days they poll, which are never election day.

    Last election it appears that support for National dropped in the last couple of weeks of the campaign; if so, polls before the election wouldn't reflect the actual result, only opinion at a point prior to it.

    And polling companies continually try to improve - in the latest Colmar Brunton poll they state: "The interview introduction was changed in this poll to remove any reference to politics, and the weighting specifications were updated. This may impact comparability with the previous poll".

    So you can't take historical polls, none of them taken on election day, as indications of possible inaccuracies now.

    Dunedin • Since Dec 2011 • 139 posts Report

  • BenWilson, in reply to Bart Janssen,

    If a party has anything other than 50% the number quoted is worse than meaningless because it implies there is some meaning where there is not.

    Well, the meaning is not clear, if what you say is true about what they are doing with the numbers*. Presumably there is a way to map the errors they give back onto the original confidence intervals.

    *can you support this with a link? I wouldn't be surprised, but I'd genuinely like to know what their methods are. If the 95% confidence interval is just (for example) scaled up by the relative proportion then finding out the real 95% confidence interval is just a matter of scaling down. Annoying to have to do it, but helpful when trying to understand reported stats.

    Auckland • Since Nov 2006 • 10657 posts Report

  • Michael Homer, in reply to BenWilson,

    It’s just the standard binomial proportion confidence interval p ± z * sqrt(1 / n * p(1 – p)). If you know p (the reported proportion) and n (the sample size) you can calculate the confidence interval for any z (“number of standard deviations” – 1.96 for a 95% interval) you like.
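    Plugging that formula into code (the sample size n = 1000 is just an assumed round number for illustration, not from any particular poll):

    ```python
    import math

    def conf_interval(p, n, z=1.96):
        """Binomial proportion confidence interval: p ± z * sqrt(p(1 - p) / n)."""
        half_width = z * math.sqrt(p * (1 - p) / n)
        return (p - half_width, p + half_width)

    n = 1000  # assumed sample size
    for p in (0.50, 0.10, 0.05):
        lo, hi = conf_interval(p, n)
        print(f"reported {p:.0%}: 95% CI {lo:.1%} to {hi:.1%}")
    ```

    The interval is widest at 50% and narrows as the reported share moves toward 0% or 100%, which is why the single quoted figure is the worst case.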

    Wellington • Since Nov 2006 • 85 posts Report

  • Andrew Robertson, in reply to BenWilson,

    Hi Ben and Bart

    The poll report on the Colmar Brunton website provides the random-sampling margins of error for results of 50%, 10%, and 5%. Roy Morgan do this in table form as well on their website (or at least did when I last looked).

    Also, a chart in the Colmar Brunton report graphically displays the margins of sampling error for each individual party vote result at each poll, and shows where results have changed significantly since each previous poll. You can see, for example, that National has a larger margin of error than the likes of the Green Party or NZ First.

    Wellington • Since Apr 2014 • 65 posts Report

  • Sacha, in reply to Andrew Robertson,

    Thanks, Andrew. Do any of the polling companies publish graphs of their results with error bars?

    Ak • Since May 2008 • 19745 posts Report

  • Sacha, in reply to Pete George,

    So you can’t take historical polls, none of them taken on election day, as indications of possible inaccuracies now.

    The fact that one agency claims to have improved recently does not invalidate a longstanding pattern across all of them. You did actually read Gavin's findings?

    Ak • Since May 2008 • 19745 posts Report

  • Andrew Robertson, in reply to Sacha,

    Hi Sacha

    Yes, that's what Colmar Brunton do.

    Wellington • Since Apr 2014 • 65 posts Report

  • Bart Janssen, in reply to BenWilson,

    *can you support this with a link? I wouldn’t be surprised, but I’d genuinely like to know what their methods are.

    The stats chat article I linked on the first page talks more about it. And I think Michael and Andrew have it right.

    Auckland • Since Nov 2006 • 4461 posts Report

  • cindy baxter, in reply to Pete George,

    thanks Pete, good point.

    But the code of conduct is worth looking at.

    auckland • Since Nov 2006 • 102 posts Report

  • Russell Brown,

    Auckland • Since Nov 2006 • 22850 posts Report

  • Sacha, in reply to Andrew Robertson,

    Thanks. May pay to clarify how you know that :)

    Ak • Since May 2008 • 19745 posts Report

  • Andrew Robertson, in reply to Sacha,

    Heh... Well, it's written underneath the graph. :)

    Wellington • Since Apr 2014 • 65 posts Report

  • Sacha, in reply to Andrew Robertson,

    Which one? And is there anywhere I can easily get the main numbers and intervals into Excel from?

    Ak • Since May 2008 • 19745 posts Report

  • Andrew Robertson, in reply to Sacha,

    The party vote chart. I can email you the %s and 95% CIs. (This is the same Sacha I know from Twitter, right?)

    Wellington • Since Apr 2014 • 65 posts Report

  • Sacha, in reply to Andrew Robertson,

    yes, thanks

    Ak • Since May 2008 • 19745 posts Report

  • Sacha,

    I've posted my graph on the later thread.

    Ak • Since May 2008 • 19745 posts Report

  • Andrew Robertson, in reply to Sacha,

    Oh I thought you were going to try doing that for the seat count.

    Wellington • Since Apr 2014 • 65 posts Report

  • Tom Semmens, in reply to Russell Brown,

    Jesus. That’s very, very poor. Audrey Young’s usually better than that.

    Are you sure about that? Another day, another story that reads like a National party press release.

    Sevilla, Espana • Since Nov 2006 • 2217 posts Report

This topic is closed.