Hard News: Decidedly Undecided
142 Responses
-
Something that didn't come up in discussion on the show is that, while the NZ Polling Code is binding on all members of the Research Association, it has no jurisdiction over media organisations.
In the research industry the clients own the report and all the data. The research company could produce everything required by the Code in their report, but they can't release anything unless their client approves.
-
Maybe the high number of undecideds suggests that we should take poll results with a large spoonful of salt?
How about the proposition that the number of National supporters in most polls has stayed about the same over six years or more? If this is so, the apparent rise of National support to around 50%+ is a statistical anomaly.
-
Watching it now. Great work Russell.
-
Excellent opening night. Big ups to all participating, really like Toi Iti and his panel. But of course I dig this polling stuff big time.
In the current form, the data don’t really provide any evidence for extrapolation to the election.
I don't really see much that could do that strongly anyway. We're talking about a time series in which the main participants compete fiercely, reacting to the polls themselves. But prediction isn't the only purpose of this information. A much more prominent one is to understand what's going on, to explain what we saw happening, even to know what happened, so far as overall public opinion is concerned.
-
Craig Ranapia, in reply to
Something that didn't come up in discussion on the show is that, while the NZ Polling Code is binding on all members of the Research Association, it has no jurisdiction over media organisations.
That's true, in the sense that an act of industry self-regulation has no jurisdiction over how that data is reported. But last time I looked, all broadcasters are subject to the relevant Codes of Broadcasting Standards, and the provisions of the Broadcasting Act.
-
Apropos the first segment of gender and ethnicity in journalism, I wonder whether the Radio NZ restructure will bring any more brown voices to the airwaves? Currently, there's Eru Rerekura for 5 minutes twice a day, and another specialist show on Sunday afternoon, and that's about it.
-
steve black, in reply to
Not an “anomaly” sensu stricto. It might be better to say that the results may just be within the expected range of variation given the survey methodology. When you take more than a simple-random-sample model of errors into account, the more complete error model will widen the confidence limits. Market research companies only take the first error term, from simple random sampling, into account and leave out all the other factors. Including, possibly, a 50% refusal rate (the survey non-response rate). Never mind the non-response (undecided or refused) on specific questions. What about the vast number you failed to contact or who didn't want to take your survey? If those people differ in any relevant way from a true representative sample, then your simple error estimates are optimistic.
Anybody from a research company want to tell us what your overall response rate is please? That is vital to know. As important as the percentage of undecided voters.
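To make the point concrete, here is a minimal sketch (all figures hypothetical, not taken from any actual NZ poll) of the simple-random-sampling margin of error that pollsters typically quote, and how a design effect from weighting or clustering would widen it:

```python
import math

def srs_moe(p, n, z=1.96):
    """95% margin of error under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# The headline figure most polls report: p = 0.5, n = 1000
print(round(srs_moe(0.5, 1000), 3))  # ~0.031, the familiar "+/- 3.1%"

# A design effect (deff > 1) from weighting or clustering widens the interval;
# the value 1.5 here is purely illustrative:
deff = 1.5
print(round(srs_moe(0.5, 1000) * math.sqrt(deff), 3))
```

The quoted "+/- 3.1%" is only the first term; non-response and coverage problems do not appear in it at all.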
I like your null hypothesis. My alternative suggestion is that the “swings” from week to week (within a poll) or between polls, provide a better measure of true uncertainty in the system. The system in this case is what is happening in voterland plus how we conduct and analyze our surveys.
In other words, rather than explain the “swings”, treat them all as examples of how much true variation there is. At the moment the pollsters and commentariat tend to “explain” changes based on thin air – because you don’t know why people change their voting ideas unless you have collected detailed data at the individual level regarding just that. Making up stories about why changes happen is just like the business commentators who make up stories about why share values went up or down yesterday. It isn’t based on real data, i.e. asking people why they bought or sold at the price they did.
In both cases (election polling and share market reporting) commentators simply look for a recent event which might be top of mind and “explain” changes using that. Maybe the stories are true, maybe not. But we don’t know.
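The "swings as a measure of true uncertainty" idea can be sketched in a few lines. The poll series below is invented for illustration; the comparison is between the observed spread of poll-to-poll changes and the spread that pure simple-random-sampling error alone would imply:

```python
import math
import statistics

polls = [0.47, 0.50, 0.46, 0.49, 0.51, 0.48]  # hypothetical party support by poll
swings = [b - a for a, b in zip(polls, polls[1:])]
observed_sd = statistics.stdev(swings)

# SD of a poll-to-poll difference implied by SRS sampling error alone
# (two independent samples of n=1000 each, support near 48%):
p, n = 0.48, 1000
srs_diff_sd = math.sqrt(2 * p * (1 - p) / n)

print(round(observed_sd, 4), round(srs_diff_sd, 4))
```

When the observed swing SD exceeds the SRS-implied SD, the excess is exactly the extra "system" variation (methodology plus genuine opinion movement) that the quoted margins of error leave out.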
Russell, my wife called me in to see your show but it was just winding up. I am very glad it is still available to view.
-
Thomas Lumley, in reply to
That's one possibility, but we simply don't know -- at least, not based on the published polls. It could be that all the net new undecideds will go back to voting Labour (in which case the increase in National's vote will disappear) or it could be that they will not vote (in which case the increase won't disappear) or it could be that they will vote for someone else when it comes to it. Or any mixture.
If the increase in undecideds is new and matters (and it probably is and does), that means we don't have any historical data to decide what they will do at the election. Simple cross-sectional samples won't answer that sort of question. You'd need more detailed interviews -- qualitative research, as well as quantitative.
-
steve black, in reply to
Much more succinct than my raving, Thomas. Thank you.
-
Russell Brown, in reply to
That’s one possibility, but we simply don’t know – at least, not based on the published polls. It could be that all the net new undecideds will go back to voting Labour (in which case the increase in National’s vote will disappear) or it could be that they will not vote (in which case the increase won’t disappear) or it could be that they will vote for someone else when it comes to it. Or any mixture.
Including that they might drift over to the Greens or Internet-Mana, which would fit a picture of part of the Left vote now weighing up multiple choices while the Right remains largely and firmly with National.
-
Enjoyed the show. It really has come together, hasn't it? Is it just me, though, or does the theme tune for "Media" seem to be getting slightly more ominous and darker with each iteration?
-
BenWilson, in reply to
Yes, and it's worth noting that decideds are not guaranteed to stay decided for what they decided, or even to stay decided at all.
-
BenWilson, in reply to
the Right remains largely and firmly with National.
Well, and there's NZF, whose sudden change in apparent support was a big story for the last election, and could well be again.
-
izogi, in reply to
In other words, rather than explain the “swings” treat them all as examples of how much true variation is there. [--snip--] In both cases (election polling and share market reporting) commentators simply look for a recent event which might be top of mind and “explain” changes using that.
I've sometimes wondered similar things about currency reporting, even though it's probably less controversial. We get nightly reports about how much the NZ$ went up or down against various currencies, but it's rarely reported for anything other than since yesterday, so it's always fluctuating seemingly randomly up or down, and I have trouble seeing the relevance that could be obtained by looking at longer term trends. (Maybe I'm too self-centred in my perspective?) Political polling reports are often presented to the masses in a similar way: the only story treated as being worth reporting seems to be how much it's fluctuated since the previous poll.
-
Sacha, in reply to
The only story treated as being worth reporting seems to be how much it's fluctuated since the previous poll
Far easier to do that than actually provide intelligent analysis. It's like we have dogs reporting "arf, the ball went over there. woof, now it's in the air".
And the currency stuff is almost just airtime filler - as if anyone who really needed to know would get that info from the telly over other sources.
-
I think there may be one thing we can say about undecideds. They are more likely to be found among the supporters of smaller, newer parties.
Small parties all rely on swing voters, much more so than N or L, whose voters are more likely to know who they'll vote for further out from the election.
While not the same thing statistically, you have to be undecided before you swing.
-
BenWilson, in reply to
I have trouble seeing the relevance that could be obtained by looking at longer term trends.
Which is curious, if you think about it, because the long term trends generate by far the biggest part of where we are, and what happened last night is a tiny contributor. I think the interest comes down to the purpose. If you want to predict the future, then in a highly competitive domain it's really difficult, and short term advantage might be all you can rely on, since the autocorrelation fades off quite rapidly. But if you are hoping to influence the future, particularly at a policy level, or you even just want to come to a decision about what you think should be done about it, the long term trends are much, much more valuable than microanalyzing minor fluctuations.
The currency news borders on hilarious for this reason. You can have articles side by side simultaneously reporting the currency went up for x reason and the currency went down for y reason (and for the double irony, sometimes x=y), especially since the currency can be both up and down at the same time if you are comparing it to other currencies.
But in the long term, the reasons why there might be a sustained drop in a currency can certainly be robustly correlated to other factors at times.
-
Russell Brown, in reply to
While not the same thing statistically, you have to be undecided before you swing.
As the actress said to the bishop.
-
BenWilson, in reply to
Small parties all rely on swing voters, much more so than N or L, whose voters are more likely to know who they’ll vote for further out from the election.
It's really hard to know if that's true. In the case of the Green Party, it would seem less correlated to undecided levels recently, which is some kind of evidence that they don't really rely on swing voters. My story about that is that it's quite an ideologically focused party, so decisions to vote for it are taken in a different way to decisions between purely pragmatic parties. But that's just my story.
-
The research groups' representatives gave a lot of weak excuses and no plans to change. The statement that the undecided vote level is a bit higher than it was in "2012 and 2011" downplays the main point of the Political Scientist blog's complaint: that the level of undecided voters has risen markedly over the last 18 months.
-
Andrew Robertson, in reply to
Anybody from a research company want to tell us what your overall response rate is please? That is vital to know. As important as the percentage of undecided voters.
I'm planning a blog post about this. All surveys should report response rates. The problem is, NZ has no agreed-upon formula or set of call outcome codes (like the AAPOR has - which is what I use). I've seen some VERY dodgy response rate calculations used on VERY public NZ surveys.
I would like all polls to report response and refusal rates in the same way, using an agreed formula.
One thing to keep in mind is that response rate is not always an indicator of sample quality, as I point out here.
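For readers unfamiliar with the AAPOR approach mentioned above: its Standard Definitions classify every sampled number by call outcome and then combine the tallies into standard rates. A minimal sketch of Response Rate 1 (the most conservative rate), with entirely hypothetical tallies:

```python
def aapor_rr1(I, P, R, NC, O, UH, UO):
    """AAPOR Response Rate 1: complete interviews (I) divided by all
    potentially eligible cases - partials (P), refusals/break-offs (R),
    non-contacts (NC), other non-response (O), and cases of unknown
    eligibility (UH: unknown household, UO: unknown other) all count
    against the rate."""
    return I / (I + P + R + NC + O + UH + UO)

# Hypothetical call-outcome tallies for one survey wave:
print(round(aapor_rr1(I=900, P=50, R=800, NC=200, O=50, UH=400, UO=100), 3))
```

The value of an agreed formula like this is exactly the point made above: two surveys can only be compared on response rate if they counted refusals and unknown-eligibility cases the same way.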
-
Thanks Russell for the opportunity to be on this show - and everyone else for their feedback. Andrew's certainly been fighting the good fight to date!
One issue we discussed was TV3's reporting of NZ First's poll rating of 4.9% as definitely being below the threshold. That's statistical nonsense, as I've explained on my blog here. If you don't want to go to the blog, the long and the short of it is that, if a given party polls 4.9% in a representative survey of n=1000, the probability that they're really on 5.0% or greater is 49.6%.
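A back-of-envelope version of this calculation can be done with a normal approximation to the sampling distribution. Note this crude sketch gives roughly 44% rather than the 49.6% quoted above, which presumably comes from a somewhat different calculation (e.g. a Bayesian posterior); either way the point stands that a 4.9% poll result is nowhere near proof of being under the 5% threshold:

```python
import math

def prob_at_or_above(threshold, p_hat, n):
    """Normal-approximation probability that true support >= threshold,
    given observed share p_hat from a simple random sample of size n."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    z = (threshold - p_hat) / se
    # Upper-tail probability of the standard normal, via erfc
    return 0.5 * math.erfc(z / math.sqrt(2))

print(round(prob_at_or_above(0.05, 0.049, 1000), 3))  # roughly 0.44
```

With a standard error of about 0.68 percentage points on a sample of 1000, the 0.1-point gap between 4.9% and 5.0% is statistically tiny.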
-
Gavin White, in reply to
Andre - it's worth remembering that the Political Scientist Blog's conclusions were based on nine surveys. Both Andrew and I have looked at our respective polls over the same period (20+ polls in both cases) and come to the same conclusion - undecideds have increased, but the increase is by no means 'marked'.
In our case it's increased by about 2% on average between mid-2012 and the first half of 2014. It's an interesting piece of information, but not enough to explain the movements in the party vote.
-
Sacha, in reply to
Thank you both for generously sharing your professional knowledge.
-
steve black, in reply to
Excellent news. Thanks Andrew. You really seem to be doing it differently from most.
And yes indeed, what we are after is a proper representative sample. There are many more ways of obtaining low-quality samples than just a low response rate. :-)
Slight correction (hey, being a statistician means never having to say you are certain). When I said “market research companies only take the first error term…”, I should really have said that they only report error margins which reflect the first error term, assuming simple random sampling. I can’t know what they calculate out the back, since I left the industry long ago. But it is easy to check the error margins they do report, and see that these line up with what you get from the tables (hey, I’m old and I have dozens of texts with many pages of printed tables in the back) or your handy computer program (based on the normal approximation).
Oh, and wouldn’t it be nice if they used the adjusted sample size in those margin of error calculations when the question they are reporting on doesn’t include all respondents? Say, having removed the undecided voters? Or only considering those who voted in the last election? Or in breakdowns by gender, age group, etc? Rather than just telling us what it is for the whole survey in that time period. As I believe the poll of polls attempts to do…
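The adjusted-base point is easy to illustrate. With hypothetical numbers (n=1000 total, 20% undecided removed before the party-vote percentages are computed), the margin of error on the reported figures is noticeably wider than the headline figure quoted for the whole survey:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

n_total = 1000          # hypothetical full sample
undecided_rate = 0.20   # hypothetical share excluded before percentaging
n_decided = int(n_total * (1 - undecided_rate))

print(round(moe(0.5, n_total), 3))    # headline MoE quoted for the whole survey
print(round(moe(0.5, n_decided), 3))  # wider MoE for the base actually reported
```

The same adjustment applies, even more sharply, to breakdowns by gender or age group, where the base can be a few hundred respondents at most.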