Posts by steve black


  • Hard News: Gower Speaks, in reply to Andrew Robertson,

    All true. I should have added “or some other factor” and lack of time could well be it. I know that doing this sort of polling is rather like corporate advertising for Market Research companies. You do it to get your company name out there, not because it makes you any money.

    Parties polling below 1% are only going to be interesting if they are going to get a seat in Parliament. But in that case it is going to be an electorate seat, which as far as I can tell is entirely outside the scope of the sort of polling which is reported in public. So why bother to report anything about them at the “party support” level and clutter up the graph? Why bother to read meaningful ups and downs of “party support” in the tea leaves for the “rats and mice” parties?

    That would leave space to properly report the refused and undecided responses, which are very important and probably more deserving of a line on the graph. That would make a change from the same old illegible lines across the bottom for those "rats and mice".

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to Pete George,

    Ah. Thanks. Now I can focus on the interesting part...

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to bob daktari,

    I couldn’t care less for the continual news reportage of polls…. seems only to satisfy the math geeks who seem 100% in agreement that they are (seriously) flawed and cannot a soundbite make

    It's not the polls which are flawed. They are what they are. It is the analysis and reporting which is seriously bogus. They could all save a lot of money and just follow iPredict or use a random number generator to create their narrative.

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to James Green,

    Hi James,

    Thanks for that. A link lets everybody else see it too, not just me with my shelf of reference books. Much obliged.

    @ Peter Calder: yes you are quite right, but it is more subtle (yeah, whatever dude, maths geek stuff coming) because at very low or high percentages (<1% and >95%) the confidence interval isn’t symmetrical about the point estimate. That is why the proper confidence limits calculated from the binomial distribution never dip below zero or go over 100%. And it’s also the reason that you shouldn’t ever write United Future 0.1% +/- 0.2%. The properly estimated confidence limits won’t extend the same distance above and below the point estimate, so you can’t just have a simple +/- with a single number.

    When somebody writes “party support is at 0.1% +/- 0.2%”, it is almost always a sign that they are either using an approximation to the true distribution which shouldn’t be used near the extremes (0% and 100%), or not sufficiently statistically literate.
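
    If you want to see what the proper calculation looks like, here is a minimal R sketch (the n = 1000 and the single “yes” respondent are my assumptions, chosen so the point estimate matches the 0.1% in the example) using the exact Clopper-Pearson interval that binom.test() computes:

        # Exact (Clopper-Pearson) 95% confidence interval for 1 "yes" in 1000,
        # i.e. a reported point estimate of 0.1%.
        ci <- binom.test(x = 1, n = 1000)$conf.int
        round(100 * ci, 3)
        # roughly 0.003% to 0.557%: asymmetric about 0.1% and never below zero,
        # unlike the bogus 0.1% +/- 0.2%, which claims the interval dips to -0.1%.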

    @ JonathanN: Bless you for putting things straight using R and the proper calculations. Am I seeing something not quite right in your confidence limits for NZF? If their “support” is estimated at 4.9%, your confidence limits (5.3% – 9.1%) don’t contain the point estimate. Or have I missed something? I stopped after finding this, so I haven’t pasted your code into my R or gone back and checked any of the others.

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to Geoff Lealand,

    Yes, and we need journalists to distinguish between polling (randomly selected stratified or quota-selected, representative samples of potential voters) and surveying (self-selected, non-randomised on-line surveys). The new guidelines released by Research Association New Zealand “New Zealand Political Polling Code” are very good.

    I’d only give the new code a bare pass. They left out important things like Effective Sample Size and how to calculate the error estimates properly. And they got the terminology backwards!

    I was a statistician with a particular interest in survey based research (in contrast to experimental research), and RANZ have co-opted “survey”, the term for the proper scientific discipline I was involved in, to become the new “low quality” term. And they have elevated “poll”, which was the “low quality” term in my time (applied specifically to political things, not the full breadth of research which is conducted by survey).

    Consider one of the key reference books in the field: Kish, Leslie (1965). Survey Sampling. It isn’t called “Polling Sampling”.

    Big Brother has raised the chocolate ration again. War is Peace. Survey is Poll. Where is their sense (and understanding) of the history of the discipline?

    I'm allowed to be an old curmudgeon about these things because I'm retired and no longer have any skin in the game. The younger generation can go out wearing the Emperor's New Clothes.

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to Emma Hart,

    There is one thing they could, and should, be doing which is incredibly simple: include the undecided/don’t know/wouldn’t say vote. Making it disappear isn’t “dealing with what’s there”. One of the most interesting things about that last poll, surely, was the rise in the undecided vote. These are the people who will, very probably, decide the election. If the undecided vote is, say, twice the difference between the ‘left’ and ‘right’ coalition blocks, that’s pretty damn important, AND it makes for an interesting story.

    I do not understand why it isn’t done.

    This reminds me of a similar phenomenon which used to happen in Australia when Julia Gillard was PM. The MSM routinely reported drops in her level of support as PM to a new low. What they never mentioned was that there was one person who had a consistently lower level of support for PM than Gillard. His name was Tony Abbott. It used to amaze me that the MSM didn’t get called on that “data deletion” more often. Everybody just looked the other way.

    It isn’t an accident which poll results get reported and which go unreported. It is a decision by journalists and editors. It’s that simple. That’s why we need some minimum standards for correct reporting of polls (you know: evidence based reporting) along with balance and fairness in interviews and narratives.

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to Andrew Robertson,

    Here ya go: https://www.3news.co.nz/Politics/3NewsReidResearchPoll.aspx

    Unfortunately this kind of graphic may be misleading and bogus for the reasons I mentioned before, although that was more in the context of the C-B methodology.

    Now let’s consider what is happening with the retirement of sitting members. The most dramatic effect is likely to be the retirement of Tariana Turia and Pita Sharples. Their electorate seats may be up for grabs. In the Maori seats it is all about the electorate seats and not the list seats. You can’t use “the last election results” or safely make assumptions like “things will be the same” because things have obviously changed. It takes a lot of work and very special interviewers to poll the Maori electorate seats accurately, and I suspect such Maori electorate polling doesn’t figure at all in the “party support” based seat counting. Yes, there is sometimes overhang, just to make things more complicated.

    If somebody can reassure me by revealing the Reid methodology and assumptions for assigning two seats to the Maori party for the coming election, then I’m all ears. Plus, of course, the seat based translation for other parties. If not, I call bogus and misleading on the hand waving which turns the actual data into an infographic of seats in Parliament.

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks, in reply to David Hood,

    I think, because of the 5% line, if drawing virtual Parliament results, you kind of need two, side by side, with some visual indication of how likely each style of results, because the flow on effects of NZ First’s result are (I think, looking at the numbers) going to have a more significant effect on government formation than anything else. Once NZ First’s effects are dealt with, the other perturbations due to error ranges are pretty minor.

    More than two.

    The polling data revealed to the public is only the party vote ("support"). The electorate voting isn't mentioned. In fact, in their methodology section C-B note:

    The interview introduction was changed in this poll to remove any reference to politics, and the weighting specifications were updated. This may impact comparability with the previous poll.

    The data does not take into account the effects of non-voting and therefore cannot be used to predict the outcome of an election. Undecided voters, non-voters and those who refused to answer are excluded from the data on party support. The results are therefore only indicative of trends in party support, and it would be misleading to report otherwise.

    So there is an unknown methodological change effect on comparability with previous polls? We seem to have overlooked that in the rush to narratives.

    And every commentator seems to suffer from amnesia when it comes to the leap from party support to seats in the house in their narratives. C-B tell you not to use their data for that. It is misleading to do so.

    A few people have mentioned (in one or more of the 3 discussions on this poll-fest) that in order to translate poll results into seats many assumptions have to be made about which parties do deals not to stand a candidate in certain seats in order to get certain minor parties in. So from where I sit the 5% threshold effect is but one of your worries compared to the piling up of assumptions. You would probably need several different scenarios depending on minor party deals and getting to the threshold, along the lines of the sketch below. How well does a particular scenario stand up to sensitivity analysis (i.e. changing the assumptions)? They don't even begin to report that sort of thing.
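
    To make the assumption-piling concrete, here is a rough R sketch of the Sainte-Laguë allocation NZ uses. All the vote shares are made up, and it ignores overhang and electorate-seat deals entirely; it illustrates how scenario assumptions drive the seat counts, and is nobody's actual methodology:

        # Sainte-Lague allocation of 120 seats from party vote shares.
        # No overhang, no electorate-seat exemptions: both are assumptions.
        sainte_lague <- function(votes, seats = 120) {
          quotients <- outer(votes, 2 * seq_len(seats) - 1, "/")
          winners   <- order(quotients, decreasing = TRUE)[seq_len(seats)]
          rows      <- (winners - 1) %% length(votes) + 1
          table(factor(names(votes)[rows], levels = names(votes)))
        }

        shares <- c(Nat = 45.9, Lab = 31.2, Grn = 10.7, NZF = 4.9)  # hypothetical
        sainte_lague(shares[shares >= 5])  # Scenario A: NZF misses the threshold
        sainte_lague(shares)               # Scenario B: NZF gets in (e.g. wins an electorate)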


    Note: I'm having another read through my reference library with regard to the multiply by 1.414, and revisiting my earlier work on how much non response scenarios can widen the confidence limits. Can't help it even though I'm retired and not all that well...

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Gower Speaks,

    Stats 101:

    Not much is happening in these polls (statistically). But it never seems to slow down the narratives. This cartoon nails it:

    https://xkcd.com/904/

    Hover over it and the alt text adds "and also all financial analysis..."; they should add "and all political polls when the methodology section gives margins of error based on simple random samples".

    When you compare two polling periods (say NZF at 4.9% this time vs the last poll, or a different organization's poll) you don't just check whether they are within the confidence interval of one another. You multiply the margin of error by 1.414 (yes, the square root of 2, which is the simplified approximation because this is only Stats 101) because you are comparing two percentages, both of which have uncertainty. But that's just the beginning of the trail of compounding errors in the way things are analyzed.
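
    In R, with made-up numbers (this poll's 4.9% against a hypothetical 5.0% last time, both n = 1000, and pretending both are simple random samples, which is itself generous):

        # 95% margin of error for the difference between two independent polls.
        p1 <- 0.049; n1 <- 1000   # this poll
        p2 <- 0.050; n2 <- 1000   # last poll (hypothetical)
        se_diff <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        100 * 1.96 * se_diff      # about +/- 1.9 percentage points
        # versus about +/- 1.34 points for a single poll at ~5% support:
        100 * 1.96 * sqrt(p1 * (1 - p1) / n1)
        # with equal n and similar p, the ratio is the sqrt(2) = 1.414 above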

    You need to base your comparison on the ESS (effective sample size), not the usual 1000 they report. The ESS will be smaller because of several factors, including non response (the biggie!), weighting (which itself introduces error variance), and design effects.
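
    Kish's effective sample size is a one-liner in R if you have the weights; the weight vector below is just a stand-in, since the companies don't publish theirs:

        # Kish's effective sample size: (sum of weights)^2 / (sum of squared weights).
        ess <- function(w) sum(w)^2 / sum(w^2)

        set.seed(1)
        w <- rexp(1000) + 0.5   # hypothetical post-stratification weights
        ess(w)                  # noticeably fewer than the headline 1000
        sqrt(1000 / ess(w))     # factor by which the honest MOE widens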

    Overall response rates for NZ polls have been dropping over the decades (like newspaper readership). As a researcher you are as responsible for those you did not survey as for those you managed to reach. Response rates should always be made available, not only the overall one (probably below 50%) but also for each question. It has already been pointed out how misleading things can be when non response is omitted. And Pete George is correct in noting that non response swamps the sample frame bias to do with telephone access. Two decades ago (yikes, two decades) I worked through some estimation of the level of sample frame bias when using telephone surveying in NZ (versus other methodologies like face to face). Back then mobile phones weren't the issue -- using telephone based surveying and getting research with quality specs was the issue.

    Wyllie, A., Black, S., Zhang, J.F. & Casswell, S. (1994). Sample Frame Bias in Telephone Based Research in New Zealand. New Zealand Statistician, 29, 40–53.

    As is usual, technology has changed everything but really changed nothing. Now the question has simply shifted to the sample frame bias associated with not including mobile phones. If the market research companies are up to the mark they will have included questions in their face to face surveys to find out about landline vs mobile use and can do the calculations if they choose to.
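
    The calculation itself is simple once you know the size of the uncovered group and how different it is; every number below is hypothetical:

        # Frame (coverage) bias: a landline frame misses a fraction f of the
        # population (e.g. mobile-only households). If support differs between
        # covered and uncovered groups, the estimate is biased by
        #   bias = f * (p_in_frame - p_out_of_frame)
        f    <- 0.15                   # hypothetical mobile-only fraction
        p_in <- 0.48; p_out <- 0.40    # hypothetical support in each group
        100 * f * (p_in - p_out)       # 1.2 percentage points of pure bias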

    I've also done simulation work (also long ago) which considers levels of non response and what effect they have on the true confidence limits for your percentages. The results are so depressing that such appropriate widening of the confidence limits just never gets used.
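
    You don't even need simulation to see the worst of it. The Manski-style worst-case bounds are a back-of-envelope exercise in R (the response rate and support figures below are made up):

        # Worst-case bounds on true support when non-respondents may differ
        # arbitrarily from respondents: r = response rate, p = support among
        # respondents. These dwarf the usual +/- 3.1% headline figure.
        r <- 0.45; p <- 0.40
        lower <- p * r             # if every non-respondent is against
        upper <- p * r + (1 - r)   # if every non-respondent is in favour
        100 * c(lower, upper)      # 18% to 73%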

    I thought the new standards for reporting and the MediaWatch segment might get things a little further than they did. I remain disappointed.

    sunny mt albert • Since Jan 2007 • 116 posts Report

  • Hard News: Climate, money and risk, in reply to Russell Brown,

    The Edit function is only available for a certain number of minutes after you make a post. Naturally, I find my typos just after whatever that number of minutes is set to.

    Maybe you need to remind us how long the ability to Edit lasts?

    sunny mt albert • Since Jan 2007 • 116 posts Report
