Speaker: ‘Kiwimeter’ is a methodological car crash and I still can’t look away
88 Responses
-
Sacha, in reply to
I’m getting the feeling that this guy is a freakin’ amateur.
certainly avoided sounding like a genius on the radio…
-
Andrew Robertson, in reply to
One might politely enquire as to how the percentage of Maori respondents currently compares to the population proportion.
Not really. The methodology isn't designed to generate a representative sample in the first place.
With this sort of self-selecting design, modelling and adjustments are performed before results can be generalised to a wider group (I would hope).
The days of representative sampling are coming to an end, so it's becoming less relevant to hold surveys up to this standard.
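For concreteness, here is a minimal sketch of the kind of adjustment that usually means - post-stratification weighting against known population margins. Everything below (cell labels, shares, answers) is invented for illustration; it is not Kiwimeter's actual method:

```python
# Minimal post-stratification sketch: reweight a self-selected sample so its
# demographic mix matches known population proportions (e.g. from the Census).
# All figures are invented for illustration.
import pandas as pd

# Opt-in sample: each row is a respondent with a demographic cell and an answer.
sample = pd.DataFrame({
    "cell":   ["maori", "maori", "pakeha", "pakeha", "pakeha", "asian"],
    "answer": [1, 0, 1, 1, 0, 1],  # 1 = agrees with some statement
})

# Hypothetical population shares for the same cells.
population_share = {"maori": 0.15, "pakeha": 0.70, "asian": 0.15}

sample_share = sample["cell"].value_counts(normalize=True)

# Weight = population share / sample share: over-represented groups are
# down-weighted, under-represented groups are up-weighted.
sample["weight"] = sample["cell"].map(lambda c: population_share[c] / sample_share[c])

unweighted = sample["answer"].mean()
weighted = (sample["answer"] * sample["weight"]).sum() / sample["weight"].sum()
print(f"unweighted: {unweighted:.2f}  weighted: {weighted:.2f}")
```

The catch, which comes up later in the thread, is that weighting only corrects for the variables you weight on; any selection effect operating within those cells is untouched.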
-
Andrew Robertson, in reply to
Put your graphs and statistical models away. You got a 12.1% response rate.
I hear this argument a lot, but you can have a very representative sample with a 12.1% response rate, and a very unrepresentative sample with a 50% response rate. The response rate is not the only indicator of sample quality.
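A toy simulation (all parameters invented) makes the point - a 12% response that is unrelated to the opinion being measured beats a 50% response that is correlated with it:

```python
# Toy simulation: response rate alone doesn't determine sample quality.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
opinion = rng.random(N) < 0.40  # 40% of the population agrees

# Scenario A: 12% respond, independently of opinion.
responds_a = rng.random(N) < 0.12

# Scenario B: ~50% respond overall, but agreers respond nearly twice as often.
responds_b = rng.random(N) < np.where(opinion, 0.667, 0.389)  # 0.4*0.667 + 0.6*0.389 ≈ 0.50

print(f"truth:                     {opinion.mean():.3f}")
print(f"A (12% response, no bias): {opinion[responds_a].mean():.3f}")  # ≈ 0.40
print(f"B (50% response, biased):  {opinion[responds_b].mean():.3f}")  # ≈ 0.53
```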
-
Tze Ming Mok, in reply to
Final nerd point: though as linger points out, a pilot survey does not need to be a representative sample, in this case the '6 archetypes' that were based on that pilot/precursor survey were accompanied within the 'Kiwimeter' by a percentage of the country they were meant to represent, with no disclaimer that there was no real representative sampling behind those percentages. These are actually misleading claims that are still being touted around. See Barry Soper in the Herald proudly proclaiming that his 'patriot' group is 36% of the country. Even worse, you get TVNZ reporting things like 'sport is the most important thing for New Zealand identity' based on the self-selective 'Kiwimeter' survey. None of this is accurate. The best you can call it is a 'good guess' based on demographic weighting. But of course, given that they fucked up the execution, you can't even really call data coming from the 'Kiwimeter' itself 'good' - more like a 'fatally compromised guess'.
-
Andrew Robertson, in reply to
None of this is accurate. The best you can call it is a ‘good guess’ based on demographic weighting.
I dunno… we don’t know much about how they calculated their population estimates.
I’ve become much less critical of self-selecting surveys. Check out this one (PDF) for example. They show it was possible to forecast 2012 US election results using extremely large samples of highly unrepresentative data from opt-in polls taken over the Xbox gaming platform.
I guess the caveat is this is just one published study (out of how many studies that tried to do something similar but got it totally wrong?).
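For what it's worth, the technique behind that study is multilevel regression and post-stratification (MRP): model the outcome within demographic cells, then re-aggregate using the cells' true population sizes. Here is a stripped-down sketch of the idea, with a simple per-cell mean standing in for the multilevel model and all data invented:

```python
# Stripped-down MRP sketch. A real MRP fits a multilevel model to stabilise
# estimates in sparse cells; a plain per-cell mean stands in here.
# All numbers are invented for illustration.
import pandas as pd

# Heavily skewed opt-in poll (young men over-represented, as on Xbox).
poll = pd.DataFrame({
    "cell":     ["young_m"] * 8 + ["young_f"] * 2 + ["old_m"] * 2 + ["old_f"],
    "supports": [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1],
})

# Census-style population counts per cell (hypothetical).
pop = pd.Series({"young_m": 100, "young_f": 110, "old_m": 120, "old_f": 140})

cell_est = poll.groupby("cell")["supports"].mean()  # the "regression" step
mrp_est = (cell_est * pop).sum() / pop.sum()        # the post-stratification step

print(f"raw opt-in estimate:      {poll['supports'].mean():.2f}")  # skewed high
print(f"post-stratified estimate: {mrp_est:.2f}")
```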
-
st ephen, in reply to
you can have a very representative sample with a 12.1% response rate
Purely by chance though, no? And only if it turns out that "willingness to fill in an on-line survey" is not correlated with the things you're trying to measure. I'd have thought that "willingness to fill in a survey to see how you rank on the Kiwimeter" is highly positively correlated with the beliefs of the sort of people who feel intensely proud to be Kiwis.
I guess I hear the other side all the time - "these methods are accepted and widely applied (Wagon-Circling, 2011) and the response rate is typical for this type of survey (Gravy and Train, 2012)".
-
st ephen, in reply to
Sorry, that reads as more snarky than intended. I was going for light-hearted cynicism, given that every profession can be accused of the same sort of thing.
-
Andrew Robertson, in reply to
Hey, no worries at all.
I’m not for a second saying the Kiwimeter sample is representative. I’m just saying response rate is not the only indicator of sample quality.
For example, say we carried out a survey about legalising prostitution, in two different ways.
1. We post a self-complete questionnaire along with a nice pen to a random sample of people, follow up with two reminder post-cards, and mail a second copy of the questionnaire to non-respondents. We get a response rate of 45%.
2. We carry out a random telephone survey where we make no mention of the topic during the introductory script, and we get a response rate of 25%.
Both of these approaches might cost about the same for a given number of respondents. However in this instance, I’d trust the results of the phone survey a lot more than I’d trust the results of the postal survey (which would have severe response bias directly related to the topic in question). The telephone survey still has response bias, but it’s not related to the topic, so it’s probably easier to make adjustments by weighting the final result.
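A toy simulation (invented probabilities throughout) of why that is: weighting on demographics repairs nonresponse that ignores the topic, but not nonresponse driven by the topic itself:

```python
# Toy illustration: demographic weighting fixes topic-unrelated nonresponse
# (the phone survey) but not topic-driven nonresponse (the postal survey).
# All probabilities are invented.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
young = rng.random(N) < 0.5
support = rng.random(N) < np.where(young, 0.60, 0.30)  # true support ≈ 0.45

def weighted_estimate(responds):
    """Weight respondents so the young/old mix matches the 50/50 population."""
    return sum(0.5 * support[responds & group].mean() for group in (young, ~young))

# Phone: young people respond less, but response ignores the topic.
phone = rng.random(N) < np.where(young, 0.15, 0.35)
# Postal: supporters are keener to return the form, within every age group.
postal = rng.random(N) < np.where(support, 0.60, 0.30)

print(f"truth:            {support.mean():.3f}")
print(f"phone, weighted:  {weighted_estimate(phone):.3f}")   # ≈ truth
print(f"postal, weighted: {weighted_estimate(postal):.3f}")  # still biased high
```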
-
Alfie,
Raybon Kan sums up the Kiwimeter nicely.
A giant selfie stick, Kiwimeter is an intrepid journey deep into the Kiwi heart, with you as the expert, and you as the star, but unfortunately via the colo-rectal passage.
-
Moz,
I’d trust the results of the phone survey a lot more than I’d trust the results of the postal survey (which would have severe response bias directly related to the topic in question)
I wouldn't have guessed that, which is another reason you don't want amateurs designing these things. Can you fill in the blanks a little - or are you just having a dig at people who think "nice pen" is a good way to bribe someone?
Is not having a postal address strongly correlated with a particular opinion on prostitution? Or is it the highly mobile people who tend not to get mail sent to addresses that have percolated through however many layers to get to a survey company? Or, even less likely to get through, are you assuming it'll be based on the electoral roll (which could presumably be justified if you cared more about voters than non-voters)?
(also, thanks for the Raybon line Alfie, that was funny)
-
Andrew Robertson, in reply to
Can you fill in the blanks a little – or are you just having a dig at people who think “nice pen” is a good way to bribe someone?
Postal self-complete surveys can have severe non-response bias because people can see all the questions before choosing whether to take part. If the topic of the survey is controversial, or emotionally or politically charged, then the response bias will tend to be more severe.
-
st ephen, in reply to
Perhaps it's more that people who hate everything about prostitution might also hate having anything to do with a mailed out survey on the topic, but will nevertheless answer questions politely when talking to the nice man on the phone. Likewise people who think prostitution is a complete non-issue may lack motivation to fill in a form and mail it in. A phone survey might stand a better chance of capturing which of those two groups is larger, even if the overall response rate is lower.
I think my beef is perhaps that the issues I want to get a handle on are thought to come from a sense of frustration, disillusionment and disengagement. An opt-in online survey with a small response rate doesn't seem like a good way of finding out what the disengaged are thinking.
-
BenWilson, in reply to
But of course, given that they fucked up the execution, you can’t even really call data coming from the ‘Kiwimeter’ itself ‘good’ – more like a ‘fatally compromised guess’.
It could be the "better than nothing" guess, if there really was nothing already. But there's not nothing...
-
James Littlewood*, in reply to
1. Social research objectives, market research method. Massively market oriented client (or at least, survey host) with zero demonstrable or stated interest in social goals.
2. I thought that's what I said.
3. That's certainly true.
4. TVNZ certainly looks like the client. They haven't named any of the academics they refer to, nor their departments, nor disciplines other than "social science." The academics themselves have not claimed authorship of the report, nor drawn any media attention to any of its findings. In short, they're nowhere to be seen. Which they would be, if they'd put any money or serious amount of time into it.
-
James Littlewood*, in reply to
That’s marketing not research
Yeah, true. Not *even* market research.
-
Although you could also label it as a new low in news research.
-
Marc C, in reply to
"Media organisations will want a survey with results reflecting their target disposable-income demographic, and biases in that direction are fine with them."
MSM and “bias”, oh, do you really think they are biased?
I have followed the MSM for years, and bias seems to be the rule rather than the exception - some forms of it may often be subtle, but it is nevertheless real.
Look at how much Maori content we get on TVNZ, and we can already see one form of bias there. With the rise of the internet and more online and social media, the majority of TVNZ viewers are likely to belong to a “more mature” group of the population, meaning older than the average New Zealander.
Having kept costs and their own research expenses low for years, is it any wonder that the MSM, including TVNZ, increasingly label as a “survey” or “research” anything that merely appears to be such but does not meet truly scientific criteria?
This and the earlier post by Tze Ming Mok are excellent, raising issues we need to discuss and further analyse.
MSM standards in reporting are generally going downhill - that is what I observe daily - so clickbait and this kind of “survey” stuff seems to be becoming the new normal. I am very worried about where the journey is going.
But we do at least still get more representative data on New Zealand and its people with the Census that is conducted every few years.
-
Andrew Robertson, in reply to
1. Social research objectives, market research method. Massively market oriented client (or at least, survey host) with zero demonstrable or stated interest in social goals.
Oh okay. In my view Kiwimeter doesn’t use market research methods.
- Self-selecting sample – frowned upon by market researchers.
- Psychometrics – hardly ever used by market researchers.
- Extremely large sample – unusual in market research.
- Data analytics – the majority of market researchers don’t do this.
- Data used by academics to publish in peer-reviewed journals – my perception is academics usually look down their noses at market research. (Also – I think the study I referenced in Political Science demonstrates an interest in social goals among those involved in Kiwimeter.)
-
Tze Ming Mok, in reply to
Yo Andrew, thanks for bringing this up. Yep, as I implied in the original post, YouGov is the market leader in this kind of non-probability sample online polling, although the weighting and sampling of their panel is based on masses of data and their algorithm is a closely guarded industry secret. Maybe Vox is as good as them; maybe not. My guess (and it can only be a guess) is 'not', given that the standard is high, and the understanding of the New Zealand population that has come out of Vox, as well as its general grasp of the principles of survey research design, seems a bit lacking.
Anyway, YouGov's panel has had the honour of being no worse at predicting election outcomes than probability-sampled phone polling - i.e. in the case of the most recent British election, they all got it wrong. In the post-mortem breakdown of What Went Wrong With the Polls, Britain's leading psephologist found that the only survey that produced data to accurately match the election results was the British Social Attitudes survey, run by dun-dun-DUN, my old employer the National Centre for Social Research. That's because it was not only a probability sample, but a repeated-attempt in-person CAPI - i.e. an army of middle-aged ladies swarming across the country, hounding people chosen in the random sample repeatedly until they answered the door. So representative samples are not the be-all and end-all, but *how* surveys are delivered is obviously crucial - the context, the mode, the adherence to quality and targets... It's expensive and slow to do this kind of surveying - it's 'oldskool' in a time of wanting quick online fixes that are 'good enough'. But I actually think it's incredibly important - as the British election evidence and our 'Kiwimeter' acceptability problem show - that it's worth holding up some survey methods as akin to a gold standard, otherwise the baby is out the window with the bathwater.
-
Tze Ming Mok, in reply to
I'd say if academics look down their noses at market research, it's mainly because of the goals of the research, and possibly because there's not much quality control in the overall market. However, applied social researchers, whether in academia, research institutes or independent policy evaluation (I've been in all of these areas), know that the big market research companies (Ipsos, TNS, etc) are methodologically on point and are their main competitors for the same research contracts. In the UK, where there is actually some money in policy research and evaluation, the big market research companies have their own subdivisions dedicated to social and public policy research, rather than commercial product research, and they all draw from a similar pool of social research academics and methodologists, or the students of those academics and methodologists. There's a commonality at that level, when it comes to standards.
Vox Labs though, not really on that level - they're not YouGov, and they're not Ipsos. They're a start-up run by a PhD student - no big diss there, I'm a PhD student - but we are not talking about a large organisation with a long history of high quality market or social research and a deep understanding of both traditional and innovative survey methodology. They're a small start-up with one gimmick.
-
Andrew Robertson, in reply to
It’s expensive and slow to do this kind of surveying – it’s ‘oldskool’ in a time of wanting quick online fixes that are ‘good enough’. But I actually think it’s incredibly important – as the British election evidence and our ‘Kiwimeter’ acceptability problem show – that it’s worth holding up some survey methods as akin to a gold standard, otherwise the baby is out the window with the bathwater.
No argument here. I have designed and run door-to-door probability surveys in NZ, so understand their value. They are becoming rare though. So rare that in a decade or two I doubt any private company in NZ will offer them. It costs a lot to maintain a national face-to-face field force.
The job of the good researcher is changing, I think. It will become less about understanding data collection methods, and more about knowing how to sift through the garbage to uncover insights.
As an aside (and in support of your argument), it’s nice to know that the two polls that have come closest to predicting the last two General Elections in NZ both use probability sampling (or try to approximate it).
-
Tze Ming Mok, in reply to
Case in point: If van der Linden had just Googled 'cognitive testing surveys' after I tweeted at him, this would have been the first hit:
-
Andrew Robertson, in reply to
I work in the Social Research division of a NZ research company. I will often recommend and offer cognitive testing. You need to pick the right projects/clients though. It can be a really hard sell because it’s quite expensive when it’s done properly (and nearly all of the big research jobs are competitive tenders, where price is a big factor).
Also, from what I’ve heard from clients, cognitive testing is quite often not done properly (they often seem really surprised after they’ve observed me carrying out cognitive testing, or read my cog. testing reports).
-
Ian Dalziel, in reply to
pre-cogs unite!
I will often recommend and offer cognitive testing. You need to pick the right projects/clients though.
You'd think they'd be able to see the effects of cognitive dissonance, which is rife in modern society (and politics), and want to exclude its skew to get the clearest results...
-
Andrew Robertson, in reply to
You’d think they’d be able to see the effects of cognitive dissonance, which is rife in modern society (and politics), and want to exclude its skew to get the clearest results…
When it comes to the big research projects, the final procurement decisions are usually made by a panel of people, one or two of whom may be researchers, and the others will be internal stakeholders at various levels of seniority (usually more senior than researchers).
The researchers may understand the value of cognitive testing, or they may not. Either way, they are just one voice among others (often more senior people who place more value on what the research will help them achieve, and on the budget they have, than on how it's conducted).