Hard News by Russell Brown


Hard News: Decidedly Undecided

142 Responses


  • BenWilson, in reply to Bart Janssen,

    By the way there are several countries where polls are banned for a period prior to the election date without any collapse in the society.

    Yes, up to 10 days in most cases. Is this what you're suggesting? That's quite different to:

    Given that, I’d argue they should be banned.

    Which sounded a bit more total.

    Auckland • Since Nov 2006 • 10657 posts

  • Andrew Robertson,

    I'd love to try banning polls as an experiment - to see what influence this would have on politicians' claims during the banned period about where their party is polling (based on 'internal polls') and what the public think.

    Wellington • Since Apr 2014 • 65 posts

  • steve black, in reply to Andrew Robertson,

    I’m sure Winston Peters wanted to ban polls for a certain amount of time before the election. This is one of the few policies of his I would like to try.

    On an apparently unrelated issue in another country, I was just looking to see whatever happened with the vote on Scottish Independence. It is one of those things where I suddenly go “whatever happened to that?”. It turns out that it doesn’t happen until September and there is lots of polling leading up to it. Sound familiar? But they do seem to be able to give the undecided results at Wikipedia:

    https://en.wikipedia.org/wiki/Opinion_polling_for_the_Scottish_independence_referendum,_2014

    Look at those undecided percentages bounce around. Somebody with a media-research bent might get a researcher to compare reporting standards on Scottish independence – what gets on the nightly news and in the papers there – versus reporting standards here and what gets covered. :-)

    sunny mt albert • Since Jan 2007 • 116 posts

  • Sacha, in reply to Andrew Robertson,

    everyone deserves a holiday :)

    Ak • Since May 2008 • 19745 posts

  • Sacha, in reply to Andrew Robertson,

    Everyone is asked who they would vote for, even if they’re unlikely to vote. However those less than ‘very’ or ‘quite’ likely to vote are filtered out at the analysis stage

    So you're really only reporting on strongly-decided potential voters. What proportion of the total is that?

    Ak • Since May 2008 • 19745 posts

  • Andrew Robertson,

    No, weakly decided likely voters are included too.

    Wellington • Since Apr 2014 • 65 posts

  • Sacha, in reply to Andrew Robertson,

    I'm confused. How does very or quite = weak?

    Ak • Since May 2008 • 19745 posts

  • Andrew Robertson,

    Sorry – I probably wasn’t very clear.

    Very or quite likely refers to likelihood to vote. Anyone else is unlikely to vote, so is not included in the party support analysis.

    Weakly decided refers to those who haven’t made a decision, but say they are likely to vote for a particular party. They are included in the party vote analysis.

    So you can be weakly decided, but still likely to vote. Which, as it happens, is me! Sometimes. :)

    Wellington • Since Apr 2014 • 65 posts

  • Sacha, in reply to Andrew Robertson,

    words confuse

    Ak • Since May 2008 • 19745 posts

  • Sacha, in reply to Andrew Robertson,

    so how many folk are you ditching from your denominator?

    Ak • Since May 2008 • 19745 posts

  • Andrew Robertson, in reply to Sacha,

    If you check out Page 7, Question Order and Wording, that might help clear it up.

    Wellington • Since Apr 2014 • 65 posts

  • Sacha, in reply to Andrew Robertson,

    ta. Hang on, that doesn't say why the only intent counted is very or quite likely, or what the other options were.

    Ak • Since May 2008 • 19745 posts

  • Andrew Robertson, in reply to Sacha,

    It differs between each poll - in the link above you can see how much the base reduces from the total.

    Wellington • Since Apr 2014 • 65 posts

  • Andrew Robertson,

    The only intent counted is very or quite likely to vote - the others are quite or very unlikely to vote, or unsure.

    Wellington • Since Apr 2014 • 65 posts

  • Sacha, in reply to Andrew Robertson,

    For Party Support, percentages have been rounded up or down to whole numbers, except those less than 5%, which are reported to 1 decimal place. For all other figures percentages have been rounded up or down to whole numbers except those less than 1%, which are reported to 1 decimal place.

    Interesting. I'd recommend also reporting party support rounded to one place below 5%. Especially given they are less accurate than for parties above that level.

    Ak • Since May 2008 • 19745 posts

  • Sacha, in reply to Andrew Robertson,

    ah, so no neutral. thanks

    Ak • Since May 2008 • 19745 posts

  • Sacha, in reply to Andrew Robertson,

    as you say,

    Note: The data does not take into account the effects of non-voting and therefore cannot be used to predict the outcome of an election. Undecided voters, non-voters and those who refused to answer are excluded from the data on party support. The results are therefore only indicative of trends in party support, and it would be misleading to report otherwise.

    My thoughts: what a shame the client is not prohibited from misleading except by their own morality and the lax accountability of their self-governed profession.

    Ak • Since May 2008 • 19745 posts

  • Andrew Robertson, in reply to Sacha,

    Sorry I don't get what you mean there.

    I'm pretty much against reporting party support percentages to 1dp, because that suggests a level of precision that no poll can provide. Actually, we've had people tell us that we shouldn't even report to whole numbers - but provide only ranges. If we did that though, we couldn't convert to seats and it would be hard to tell how close things are under MMP.

    The only reason we report <5% parties to 1dp is to show how close they might be to the 5% threshold.

    Wellington • Since Apr 2014 • 65 posts

  • Sacha, in reply to Andrew Robertson,

    The only reason we report <5% parties to 1dp is to show how close they might be to the 5% threshold.

    Though doesn't that play into the example you guys discussed on MediaTake?

    Ak • Since May 2008 • 19745 posts

  • Sacha, in reply to Andrew Robertson,

    I'm pretty much against reporting party support percentages to 1dp, because that suggests a level of precision that no poll can provide

    same

    Ak • Since May 2008 • 19745 posts

  • Andrew Robertson, in reply to Sacha,

    I don’t think so. The problem there was interpreting that NZ First was ‘out’ because they were on 4.8 or 4.9%.

    In our May poll we had NZ First on 4.8%, so we reported two alternative seat conversions – one where they cross the threshold and one where they don’t. Both alternatives were reported in the media story too.

    Wellington • Since Apr 2014 • 65 posts

  • steve black, in reply to Sacha,

    so how many folk are you ditching from your denominator?

    It looks like the denominator is down to between 700 and 800 per sample based on party supporters. That would widen the margin of error a little bit compared to 1,000. For example

    1,000 and 50% support (95% confidence level) = ±3.2%
    800 widens to ±3.5%
    700 widens to ±3.8%
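    Those figures follow from the usual margin-of-error formula with the z ≈ 2 rule of thumb for a 95% interval – a minimal sketch (assuming a simple random sample, which real polls only approximate):

    ```python
    from math import sqrt

    def moe(p, n, z=2.0):
        """Margin of error, in percentage points, for a proportion p
        estimated from a simple random sample of size n.
        z = 2 is the common rule-of-thumb 95% multiplier."""
        return z * sqrt(p * (1 - p) / n) * 100

    # Worst case (p = 0.5) at shrinking sample sizes:
    print(round(moe(0.5, 1000), 1))  # 3.2
    print(round(moe(0.5, 800), 1))   # 3.5
    print(round(moe(0.5, 700), 1))   # 3.8
    ```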

    Correcting for this effect of reduced sample size in a particular comparison is an improvement, but it isn’t as important as some other adjustments (like using the correct error when comparing the change in party support between two survey periods). If you are only looking at a smaller group (say male Labour supporters) then using the correct (much reduced) denominator will make a bigger difference. All of those things (and more) should be handled by the software and the person reporting the analysis.
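    The point about the correct error for a change between two survey periods can be sketched the same way – under the simplifying assumption that the two polls are independent simple random samples, the standard errors add in quadrature, so a poll-to-poll change needs a wider margin than either single poll:

    ```python
    from math import sqrt

    def diff_moe(p1, p2, n1, n2, z=1.96):
        """95% margin of error, in percentage points, for the
        difference between two independently sampled proportions."""
        var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
        return z * sqrt(var) * 100

    # Two n=1000 polls both near 50%: a change must exceed roughly
    # +/-4.4 points to be significant, not the single-poll 3.1.
    print(round(diff_moe(0.5, 0.5, 1000, 1000), 1))  # 4.4
    ```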

    Note I’m not sure how the reduction of 1,000 to say 800 based on “party supporters” interacts with the likelihood to vote filter. Is it implicitly applied by that stage? I didn’t see how that works on a quick read.

    Note also that these methodological differences concerning who is eligible to be in a given analysis, plus question wording, may be sufficient to account for the "house effects" between the different research companies. I don't know if it is, or if there are other factors at work.

    sunny mt albert • Since Jan 2007 • 116 posts

  • Hebe,

    I am not a polling nerd! I'm saving the rest to read until my brain has rebooted.

    Great show Russell and everyone else; this looks like it will settle into a fine groove: informative without po-facing.

    Andrew and Gavin: thanks for your insights into the trade. It's a minefield of worms.

    Christchurch • Since May 2011 • 2899 posts

  • Gavin White, in reply to Sacha,

    Like Andrew I'm generally strongly against the reporting of polls to 1dp - it implies a level of precision that we just can't deliver.

    There is a case, however, for reporting parties below 5% to 1dp in some circumstances. If we round to 0dp, then parties that poll 4.5% will be shown as over the threshold.

    People who aren't familiar with stats might not know that the margin of error shrinks as the survey percentage moves away from 50%. For a sample of n=1000, while the margin of error is +/-3.1% for a 50% figure, if the survey percentage is 4.5%, the margin of error shrinks to +/- 1.3%.

    Consequently, if a party polls 4.5%, then we're saying that 19 times out of 20 their actual vote lies between 3.2% and 5.8%. That's a bell curve too - I don't have an appropriate calculator in front of me, but it's important to remember that a lot of those 19 occasions should be a lot closer to 4.5% than either 3.2% or 5.8%.

    Compare that with a party that polls 5.49% (which, like 4.5%, rounds to 5%). 19 times out of 20 that party's vote should be between 4.08% and 6.9%. Again, if you bear in mind the bell curve idea, it's more likely that the 5.49% party is over the threshold than that the 4.5% party is.

    When you get down to parties polling below 1%, then it's more sensible to report them to 1dp. If a party polls 0.6% (United Future's vote at the last election), then the margin of error in a poll of n=1000 is +/- 0.5%.
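    All of the figures above drop out of the standard formula with z = 1.96 and n = 1000 – a sketch, again assuming simple random sampling:

    ```python
    from math import sqrt

    def moe(p, n=1000, z=1.96):
        """95% margin of error, in percentage points, for proportion p."""
        return z * sqrt(p * (1 - p) / n) * 100

    print(round(moe(0.50), 1))    # 3.1 -- worst case, at 50%
    print(round(moe(0.045), 1))   # 1.3 -- a party on 4.5%
    print(round(moe(0.0549), 1))  # 1.4 -- a party on 5.49%
    print(round(moe(0.006), 1))   # 0.5 -- a party on 0.6%

    # The resulting 95% interval for a party on 4.5%:
    print(round(4.5 - moe(0.045), 1), round(4.5 + moe(0.045), 1))  # 3.2 5.8
    ```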

    If you're interested in the stats on this, and also some early 2014 polling on the Conservatives and the Internet Party (before they joined with Mana), I wrote about that here. I had a grump there at the tendency to describe parties as being 'below the margin of error' - as that blog discusses, that's actually impossible.

    Wellington • Since Jul 2014 • 16 posts

  • Andrew Robertson, in reply to steve black,

    Actually we've only seen it dip below 800 a couple of times. Those unlikely to vote are still asked the party vote question, but excluded when calculating the party support results.

    Yes, I expect these methodological differences may contribute to some of the differences we see between polls. And sampling approach, and sampling success rates, and interviewing standards, and the demographic make-up of interviewers, and interview durations, and weighting schemes, and shift days/times, and sample size, and area stratification, and success in identifying new telephone number banks....

    Any pollster, ever, who tells you their sample is perfectly representative either doesn’t know what they’re doing, or is lying. Yes - the data are flawed, and they have been since the dawn of population surveys. This is nothing new. No pollster can get a representative sample of eligible voters. It’s simply not possible.

    The pollster’s job, in my view, is to try to understand why they can’t, and to attempt to disentangle the signal from the noise. It’s the job of a good pollster to spend hours thinking about sources of error, and considering ways to reduce it, cancel it out, or otherwise adjust for it. They can’t always get it right, but that’s the nature of measurement in a context where there are so many variables.

    Wellington • Since Apr 2014 • 65 posts
