Posts by steve black

  • Hard News: A wretched editorial, in reply to Dean Wallis,

    My wife bought the SST and I saw it this morning. I started to read the editorial but soon stopped. I did not like what I was reading in terms of content or writing style.

    Look left and the snippet extracted is "The interview smacked of political opportunism". I say the same is true of the editorial. Kettle, meet pot.

  • Hard News: Decidedly Undecided, in reply to Mr Mark,

    I’ve simply argued that, throughout the last 18 months of Fairfax-Ipsos Polls, we have a striking contrast between the climate of opinion suggested by (1) the Party Support results (based solely, as they are, on the Decided and those likely to vote) and (2) the ‘Do you favour a Change of Government’ measurement (based on the entire sample). A striking contrast all but ignored by Fairfax journos and the wider MSM (and, therefore, hidden from the wider public).

    You have outed those reporting polls for failing to tell the whole story. Good work. It reminds me of how Tony Abbott (in opposition) was consistently rated lower as preferred Prime Minister than either Julia Gillard or Kevin Rudd. But his rating just wasn’t reported. All the focus was on a “shocking new low” for Gillard or Rudd.

    However,

    My alternative model #1 is that 8% of those polled answered “I favour a change of Government” AND say they will give National their party vote. That’s because they want a change of the rag tag band of small parties supporting National.

    My alternative model #2 is that 9% of those polled answered “I favour a change of Government” AND say they will give National their party vote. That’s because they want a change but not to a Labour led government.

    My alternative model #3 is that 3% of those polled answered “I favour a change of Government” AND say they will give National their party vote. That’s because they are perfectly able to hold apparently contradictory thoughts in their head.

    My alternative model #4 is that 10% of those polled answered “I favour a change of Government” AND say they will give National their party vote. That’s because they wish there was some alternative but don’t see one.

    My alternative model #5 is that 6% of those polled answered “I favour a change of Government” AND say they will give National their party vote. That’s because they don’t like National but they want to make sure the Greens have no influence in Parliament.

    I simply maintain that you can’t exclude my models on the grounds that they don’t fit the data. And I can’t choose among them either. We haven’t got the data at the detailed level needed. At this point further arguments about which is the better model stop being evidence based. But the different models (yours included) make different predictions about how the elections might go, and what the effect of different “agreements to cooperate” by different parties might be during the campaign.

    note: I haven’t seen the exact wording of the questions the Ipsos poll is using and there may be 11 more models I could come up with if I looked at the wording and question order.
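
    Purely as an illustration of that point, here is a minimal sketch (in Python). The two marginal percentages are made up, only the per-model overlaps come from the models above, and I'm treating everything as a share of the same whole sample. With only the published marginals, the "favour a change AND party vote National" cell is constrained only to the Fréchet bounds, and every one of the models sits comfortably inside them:

        # A minimal sketch: with only the two published marginals, none of the
        # models above can be excluded. The marginals here are hypothetical
        # stand-ins; only the per-model joint percentages come from the post.
        models = {  # posited share of all respondents who favour a change AND vote National
            "model 1 (change the support parties)": 0.08,
            "model 2 (change, but not to Labour)": 0.09,
            "model 3 (contradictory thoughts)": 0.03,
            "model 4 (no alternative in sight)": 0.10,
            "model 5 (keep the Greens out)": 0.06,
        }

        p_national = 0.45  # hypothetical: National party vote, as a share of the whole sample
        p_change = 0.55    # hypothetical: favour a change of government, whole sample

        lower = max(0.0, p_national + p_change - 1.0)  # Frechet lower bound on the joint cell
        upper = min(p_national, p_change)              # Frechet upper bound on the joint cell

        for name, joint in models.items():
            ok = lower <= joint <= upper
            print(f"{name}: joint = {joint:.0%}, consistent with the marginals: {ok}")
        # Every model prints True, so the published figures alone can't separate them.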

  • Hard News: Hope and Wire,

    I didn’t even know of it. I must be out of the loop. I’ll have to watch it On Demand.

    Is it weekly or on following nights?

    Just pretend I'm an alien recently landed from another galaxy who knows nothing of your strange TV customs.

  • Hard News: Decidedly Undecided, in reply to Andrew Robertson,

    People are much less obsessed with politics than those who comment on politically-orientated blogs.

    Oh please say it is not so... People who don't care and just get on with life? :-)

    Glad you handled that question. I saw it and left the room quickly. :-)

  • Hard News: Decidedly Undecided, in reply to Mr Mark,

    Well, as the bloke who kicked the whole thing off with this (admittedly very brief) analysis of the Left-leaning (or, at the very least, anti-National Government) proclivities of the Undecideds and those unlikely to vote…

    Alas, we really need the raw data to make stronger inferences. And in fact we really need data from a panel (or longitudinal study) to know how individuals move in or out of being decided, wanting a change, etc. There are too many different causal models which fit the correlations observed from outside the system (I know them as "ecological correlations" but the name depends on what discipline you come from). We can't rule out some models based on the evidence we have, so we will be left with multiple interpretations.
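
    To make the panel point concrete, here's a toy sketch (all numbers invented): two completely different patterns of individual movement between "decided" and "undecided" produce identical totals at both waves, which is all a repeated cross-sectional poll ever sees.

        # Toy example: two panels of 1,000 voters, counted by
        # (status at wave 1, status at wave 2), rows/columns = [decided, undecided].
        scenario_stable = [
            [700, 0],    # every decided voter stays decided
            [0, 300],    # every undecided voter stays undecided
        ]
        scenario_churn = [
            [550, 150],  # 150 decided voters drift back to undecided...
            [150, 150],  # ...while 150 undecideds make up their minds
        ]

        def totals(flow):
            wave1 = [sum(row) for row in flow]        # row sums = wave 1 totals
            wave2 = [sum(col) for col in zip(*flow)]  # column sums = wave 2 totals
            return wave1, wave2

        print(totals(scenario_stable))  # ([700, 300], [700, 300])
        print(totals(scenario_churn))   # ([700, 300], [700, 300])
        # Identical marginals, very different stories about who moved -- which is
        # why only a panel/longitudinal design can pin down the individual flows.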

  • Hard News: Decidedly Undecided,

    Thanks again Andrew.

    When I reread the caution in your methodology about using it to predict the outcome of the election, I also thought about a certain irony:

    Polling this far out can’t be used to predict the outcome of elections accurately if campaigning works. Especially if it gets disenfranchised young people out there being active and voting.

    Which might raise some interesting questions about whether campaigning works and actually shifts voter behaviour (versus say, awareness).

  • Hard News: Decidedly Undecided, in reply to Andrew Robertson,

    Actually we’ve only seen it dip below 800 a couple of times. Those unlikely to vote are still asked the party vote question, but excluded when calculating the party support results.

    I used the numbers on p8, and yes, it dipped below 800 twice in that series. Excuse me for just giving a vague, hand-waving, very rounded range. :-) Was this the right place to get the counts from?

    840, 820, 834, 767, 755, 813

    So 813 (the most recent number) is those people who expressed a party voting preference (sometimes with a second prompt) AND were likely to vote? And there were other people who were asked and answered, but excluded from the table because they weren’t likely enough to vote?

    Are these numbers unweighted or weighted? Some MR companies, with Heylen in their ancestry, report unweighted bases (sample sizes) with weighted percentages. Excuse me asking and causing more eyes to glaze over.
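
    For anyone wondering why the weighted/unweighted distinction matters, here's a rough sketch with invented weights (definitely not a description of Ipsos's actual weighting scheme): the weighted percentage uses the weights, but unequal weights shrink the effective sample size below the unweighted headcount, so a margin of error quoted off the raw base is a little too optimistic.

        from math import sqrt

        # Hypothetical sample of 1,000 in three weighting cells:
        # (weight, respondents in the cell, of whom give National their party vote)
        cells = [
            (0.6, 400, 200),
            (1.0, 400, 180),
            (1.8, 200, 70),
        ]

        n_unweighted = sum(n for _, n, _ in cells)       # 1,000 respondents interviewed
        weighted_base = sum(w * n for w, n, _ in cells)  # these weights happen to sum back to 1,000
        unweighted_pct = sum(v for _, _, v in cells) / n_unweighted
        weighted_pct = sum(w * v for w, _, v in cells) / weighted_base

        # Kish effective sample size: (sum of weights)^2 / (sum of squared weights)
        n_eff = weighted_base ** 2 / sum(w * w * n for w, n, _ in cells)

        print(f"unweighted estimate {unweighted_pct:.1%}, weighted estimate {weighted_pct:.1%}")
        print(f"unweighted base {n_unweighted}, effective base {n_eff:.0f}")
        print(f"MoE at 50%: +/-{100 * 2 * sqrt(0.25 / n_eff):.1f} points "
              f"(vs +/-{100 * 2 * sqrt(0.25 / n_unweighted):.1f} quoted off the raw count)")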

    And thanks for the reference to that book Andrew. $100 for the eBook version. I’m saving up my coins.

  • Hard News: Decidedly Undecided, in reply to Sacha,

    so how many folk are you ditching from your denominator?

    It looks like the denominator is down to between 700 and 800 per sample based on party supporters. That would widen the margin of error a little bit compared to 1,000. For example

    1,000 and 50% support (95% confidence level) = ±3.2%
    800 widens to ±3.5%
    700 widens to ±3.8%

    Correcting for this effect of reduced sample size in a particular comparison is an improvement, but it isn’t as important as some other adjustments (like using the correct error when comparing the change in party support between two survey periods). If you are only looking at a smaller group (say male Labour supporters) then using the correct (much reduced) denominator will make a bigger difference. All of those things (and more) should be handled by the software and the person reporting the analysis.
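
    For anyone who wants to check the arithmetic, here's the usual normal-approximation sketch (with z rounded to 2, which reproduces the 3.2/3.5/3.8 above), plus the wider margin that applies when you compare a party's support across two independent polls:

        from math import sqrt

        Z = 2.0  # ~1.96 for 95% confidence, rounded to 2 as the printed tables effectively do

        def margin_of_error(p, n, z=Z):
            """Half-width of the confidence interval for a proportion p on a base of n."""
            return z * sqrt(p * (1 - p) / n)

        for n in (1000, 800, 700):
            print(f"n = {n}: +/-{100 * margin_of_error(0.5, n):.1f}%")
        # n = 1000: +/-3.2%, n = 800: +/-3.5%, n = 700: +/-3.8%

        # Margin on the *change* in support between two independent survey periods --
        # roughly 1.4x wider than either single poll's margin.
        def margin_of_change(p1, n1, p2, n2, z=Z):
            return z * sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

        print(f"change between two polls of 800: +/-{100 * margin_of_change(0.5, 800, 0.5, 800):.1f}%")
        # +/-5.0%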

    Note I’m not sure how the reduction of 1,000 to say 800 based on “party supporters” interacts with the likelihood to vote filter. Is it implicitly applied by that stage? I didn’t see how that works on a quick read.

    Note also that these methodological differences concerning who is eligible to be in a given analysis, plus question wording, may be sufficient to account for the "house effects" between the different research companies. I don't know if it is, or if there are other factors at work.

  • Hard News: Decidedly Undecided, in reply to Andrew Robertson,

    I’m sure Winston Peters wanted to ban polls for a certain amount of time before the election. This is one of the few policies of his I would like to try.

    On an apparently unrelated issue in another country, I was just looking to see whatever happened with the vote on Scottish Independence. It is one of those things where I suddenly go “whatever happened to that?”. It turns out that it doesn’t happen until September and there is lots of polling leading up to it. Sound familiar? But they do seem to be able to give the undecided results at Wikipedia:

    https://en.wikipedia.org/wiki/Opinion_polling_for_the_Scottish_independence_referendum,_2014

    Look at those undecided percentages bounce around. Somebody with a Media Research bent might get a researcher to compare reporting standards on Scottish Independence and what gets on the nightly news and in papers, versus reporting standards here and what gets on the nightly news and in papers. :-)

  • Hard News: Decidedly Undecided, in reply to Andrew Robertson,

    Excellent news. Thanks Andrew. You really seem to be doing it differently from most.

    And yes indeed, what we are after is a proper representative sample. There are many more ways of obtaining low quality samples than just a low response rate. :-)

    Slight correction (hey, being a statistician means never having to say you are certain). When I said “Market Research companies only take the first error term…”, I should really have said that they only report error margins which reflect the first error term assuming simple random sampling. I can’t know what they calculate out the back since I left the industry long ago. But it is easy to check the error margins they do report, and see that these line up with what you get from the table (hey, I’m old and I have dozens of texts with many pages of printed tables in the back) or your handy computer program (based on the normal approximation).

    Oh, and wouldn’t it be nice if they used the adjusted sample size in those margin of error calculations when the question they are reporting on doesn’t include all respondents? Say, having removed the undecided voters? Or only considering those who voted in the last election? Or in breakdowns by gender, age group, etc? Rather than just telling us what it is for the whole survey in that time period. As I believe the poll of polls attempts to do…
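
    As a rough sketch of what that adjustment looks like (the bases below are made up for illustration), the headline margin quoted for the full sample understates the uncertainty on anything computed from a subset of respondents:

        from math import sqrt

        def moe(p, n, z=2.0):  # z ~ 1.96 at 95%, rounded to 2 as the tables effectively do
            return z * sqrt(p * (1 - p) / n)

        bases = [
            ("full sample (the quoted +/-)", 1000),
            ("decided and likely to vote", 800),   # hypothetical
            ("women only", 510),                   # hypothetical breakdown
            ("male Labour supporters", 250),       # hypothetical small subgroup
        ]

        for label, n in bases:
            print(f"{label:<30} n = {n:>4}: +/-{100 * moe(0.5, n):.1f}%")
        # The small subgroup's margin comes out roughly double the headline figure.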
