Source: The Conversation (Au and NZ) – By Darren Pennay, Campus Visitor, ANU Centre for Social Research and Methods, Australian National University
Election polling has had a torrid time in recent years.
The dust is yet to settle on how well the polls performed in the recent US election, but the highly respected Pew Research Center is reporting that by the end of counting, the polls will likely have overestimated the Democratic advantage by about four percentage points.
Closer to home, the final polls before Australia's May 2019 federal election all pointed to a narrow Labor victory. Yet the Coalition went on to win with 51.5% of the two-party-preferred vote to Labor's 48.5%, almost the mirror opposite of what the final polls predicted.
After the election, the polls attracted widespread criticism.
In response, the Association of Market and Social Research Organisations and the Statistical Society of Australia launched a joint inquiry into the performance of the polls, which I chaired.
This involved trying to obtain primary data from the pollsters, assembling the sparse information in the public domain and finding additional data sources to inform our report.
The inquiry found Australian election polling had a good track record, by and large.
Across the ten federal elections since 1993, Australian pollsters had a 73% success rate in “calling the right result” with their final polls. The comparable success rate of US pollsters over a similar period was 79%, according to FiveThirtyEight.
Australian pollsters had an even better track record in more recent times with 25 out of 26 final polls from 2007 to 2016 calling the right result, a phenomenal 96% success rate.
So what went wrong in 2019? With limited cooperation from the pollsters themselves, the inquiry identified a number of factors.
Conditions for polling, as reflected in response rates for surveys, got a lot harder. The report documents a decline in response rates for typical telephone surveys from around 20% in 2016 to 11% in 2019, with the polls likely to be achieving much lower response rates than this.
This recent fall in response rates was part of a longer-term decline, coinciding with the increasing take-up of lower-cost polling methodologies (predominantly online and robo-polling) and pressure on polling budgets.
It also appeared that our pollsters, perhaps lulled into complacency by a long period of relative success and a mistaken belief that compulsory voting made Australia different, did not heed the lessons of the polling reviews into the 2015 UK and 2016 US elections. Those reviews identified unrepresentative samples (in the UK) and the failure of many polls to adjust for the over-representation of college graduates (in the US) as primary causes of poll inaccuracy.
The inquiry found no compelling evidence for the “shy conservative” theory — that people were afraid to admit their true intentions to pollsters — as a possible explanation for the performance of the polls in 2019. It also found no compelling evidence of pollsters being deliberately misled by respondents, or a comprehensive late swing to the Coalition that may have been missed by the polls.
But we did find the polls most likely over-represented people who are more engaged in politics, and almost certainly over-represented people with bachelor-level degrees or higher. Both groups show stronger levels of support for the Labor Party, and neither bias was reduced by the pollsters' sample balancing or weighting strategies.
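To see why an unbalanced sample matters, and how post-stratification weighting is meant to correct it, consider a minimal sketch. All of the shares below are illustrative assumptions for the example, not figures from the inquiry report.

```python
# Minimal sketch of post-stratification weighting by educational attainment.
# Sample shares, population benchmarks and support rates are hypothetical.

sample = {"degree": 0.45, "no_degree": 0.55}      # shares among respondents
population = {"degree": 0.30, "no_degree": 0.70}  # shares in the electorate

# Each stratum's weight is its population share divided by its sample share,
# so over-represented groups (here, graduates) are weighted down.
weights = {k: population[k] / sample[k] for k in sample}

# Suppose, hypothetically, Labor support is 55% among graduates and 45% among
# non-graduates. The unweighted estimate then overstates Labor support:
support = {"degree": 0.55, "no_degree": 0.45}
unweighted = sum(sample[k] * support[k] for k in sample)
weighted = sum(sample[k] * weights[k] * support[k] for k in sample)

print(f"unweighted Labor estimate: {unweighted:.3f}")  # 0.495
print(f"weighted Labor estimate:   {weighted:.3f}")    # 0.480
```

The correction only works if the pollster weights on the right variables; the inquiry's finding is that education was not adequately accounted for in 2019.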
So the performance of the polls in 2019 had the hallmarks of a systemic polling failure rather than a one-off polling miss. The pollsters were stung into action, with several announcing their own internal reviews. They also launched the Australian Polling Council, with the aim of advancing the quality and understanding of public opinion polling in Australia.
With the next federal election possible as soon as August 2021, the need for reform of polling standards in Australia is urgent.
The main recommendations from our inquiry are as follows:
a code of conduct: the development of the code could be led by the pollsters, but also informed by other experts, including statisticians, political scientists, the Australian Press Council and/or interested media outlets. Disclosure requirements for pollsters would be fundamental here, along with how those requirements are monitored and how compliance is ensured. The code should be made public so it can hold pollsters, and those reporting on the polls, to account. It would make polling methods, and their limitations, more transparent, and so help foster more realistic expectations of polling.
methodology: pollsters need to investigate and better understand the biases in their samples and develop more effective sample balancing and/or weighting strategies to improve representativeness. Weighting or balancing by educational attainment seems promising, and the report suggests several other variables for further experimentation such as health status, life satisfaction and past voting behaviour.
conveying uncertainty: currently, polls are usually published with a “margin of error”. This is not good enough: the margin is often inadequately calculated and inadequately reported. Pollsters need more robust methods for conveying the variability associated with their results. In addition, pollsters should routinely report the proportion of respondents who are “undecided” about their vote choice and identify those who are only “leaning” towards a particular party.
get media outlets onside: Australian media organisations should comply with and actively support any new code of conduct.
provide educational resources: educational resources about polling methods and standards should be developed and made available to journalists, academics and others who use the results.
Election polling plays an important role in informing decisions and shaping expectations ahead of elections. Time is running out to learn the lessons of 2019, and rapid implementation of our recommendations is vital.
– ref. How can Australia reduce the risk of another ‘systemic polling failure’? – https://theconversation.com/how-can-australia-reduce-the-risk-of-another-systemic-polling-failure-149984