Press distortion of public opinion polling: what can, or should, be done?

Pollsters were, not unreasonably, pretty content with their performance in calling the result of this election. Apart from a little reflection on why YouGov’s much-touted MRP poll erroneously showed a narrowing electoral contest, the big beasts of the industry were happy to point to the proximity of their final share projections to the actual result. No soul-searching methodological enquiries needed for this election.

But voting intention polls are well developed, and very different from the political opinion polls that are often used – and abused – by press interests wishing to promote a specific political agenda. We have seen plenty of evidence of this over the last three and a half years, particularly during some of the dramatic political upheavals of 2019 (now, of course, consigned to history). There is a dangerous complacency in the assumption that such polls stand as a scientific barometer of public opinion.

A perfect example of the problem emerged in mid-August during the parliamentary Brexit gridlock, when Johnson floated the idea of proroguing Parliament. The crucial political calculation (beyond whether or not it was lawful) was whether it would alienate voters.

Luckily, help was at hand. The Daily Telegraph – a newspaper turned propaganda freesheet for Johnson then and throughout the election campaign – produced stunning evidence that the public were onside: its front-page headline declared “Public backs Johnson to shut down Parliament for Brexit”. Predictably, since broadcasters love their front-page reviews, this was repeated by Newsnight’s Emma Barnett and other newspaper reviewers. Polls are science, right? What’s the problem?

In fact, there were multiple problems with the poll and with the paper’s distorted reporting of it. The headline emerged from a question which invited respondents to agree or disagree with the statement “Boris needs to deliver Brexit by any means, including suspending parliament if necessary, in order to prevent MPs from stopping it”. Not only is that a double-barrelled question (which every first-year undergraduate knows is methodologically unsound), it uses the cuddly “Boris” rather than “the Prime Minister” or “Johnson”, and offers an explicit “explanation” that MPs were intent on stopping Brexit rather than preventing a no-deal Brexit.

This inherent question bias was exacerbated by the Telegraph’s deliberate distortion of the data to produce its dramatic headline. The public’s “backing” was deduced by ignoring the one in five “don’t knows” so as to produce a “majority” – despite only 44% of respondents agreeing with the seriously dubious statement: drop the undecided fifth from the base and that 44% conveniently re-bases to a little over half.
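For anyone who wants to see how that arithmetic works, the short sketch below re-runs the calculation. It is an illustration only, assuming the figures cited above (44% agreeing and roughly 20% offering no view) rather than the published ComRes tables, whose exact breakdown may differ slightly.

```python
# Illustrative only: how dropping "don't knows" re-bases a 44% plurality
# into an apparent "majority". Figures are those cited in the text above,
# not the published ComRes tables.

agree = 0.44                         # share of all respondents agreeing
dont_know = 0.20                     # roughly one in five gave no view
disagree = 1.0 - agree - dont_know   # the remainder (about 36%)

# Re-base on only those respondents who expressed a view.
agree_rebased = agree / (agree + disagree)

print(f"Agree, all respondents:        {agree:.0%}")           # 44%
print(f"Agree, 'don't knows' excluded: {agree_rebased:.0%}")    # 55%
```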

This deliberate distortion of apparently scientific surveys to further a newspaper’s political agenda is dangerous because it creates momentum – as it is designed to do. Spurious conclusions are woven into broadcast stories rather than systematically dismantled or, even better, ignored altogether. It is a perfect route for a predominantly right-wing press to offset declining circulations by driving the news agenda online and on screen. And it is dangerous because it deliberately manipulates “the popular will” to fit that publication’s narrative.

What can be done? To some extent, the horse has bolted: despite the popular vote in the last three elections (two general elections and one European election) demonstrating no majority for Johnson’s hardest of Brexits, we leave the EU with the myth intact that he is implementing the popular will. But distortions in pursuit of a political agenda will happen again, and the polling industry should take some responsibility for scrutiny.

There is some hope on the horizon. In November, the Market Research Society (MRS) teamed up with Impress, the smaller press regulator, to produce a document on “Using surveys and polling data in your journalism”. For publishers that belong to Impress and polling agencies that are members of the MRS, it provides guidance on best practice.

But therein lies the problem. While the MRS takes a dim view of poor survey design and has robust complaints procedures in place, many pollsters – including ComRes, author of the Telegraph poll – eschew the MRS for the British Polling Council (BPC). The raison d’être of the BPC is “to ensure standards of disclosure that provide consumers … with an adequate basis for judging the reliability and validity of the results.” In other words, as long as sampling, questions and data are published and transparent, there is no quality control.

As for the big press publishers, they want nothing to do with an independent regulator set up to follow Leveson’s guidelines, and continue to promote their puppet regulator IPSO as a genuine arbiter of press standards. Given IPSO’s abject failure to enforce its own code of practice, it is scarcely likely to find such misbehaviour problematic.

For the moment, the polling and publishing industries will continue to take advantage of lax standards. It will be left to responsible broadcasters, academics and ordinary members of the public to scrutinise political polls and call out manifestly bogus claims about public opinion.