Was it ‘AI wot won it’? Hyper-targeting and profiling emotions online

Prof Vian Bakir

Professor of Political Communication & Journalism at Bangor University. Vian researches strategic political communication.

Dr Andrew McStay

Reader in Advertising and Digital Media at Bangor University. Andrew researches the profiling of emotional data.

Section 5: The Digital Campaign

The 2017 UK General Election showed that social media are the real battleground, one that plays by rules typically found in Adland.

Facebook is where the action has been. Using technologies built with advertising and commerce in mind, political parties have targeted voters on the basis of age, gender and postcode. Regional targeting allows hyper-local messaging to areas as narrow as a kilometre wide, letting campaigners pitch policies that resonate with hyper-local interests. Furthermore, Facebook allows parties to segment us by our ‘likes’ and online engagement – what we say and what photos we post. This grants richer insight into life-stage, employment and views on issues. It also allows for automated psychological profiling at scale – in particular, targeting people’s emotional triggers.

While analysis of the emotional valence of the UK 2017 General Election has yet to materialise, the 2016 EU referendum provides insights into emotional profiling and targeting of voters. Dominic Cummings (campaign strategist for ‘Vote Leave’) documents the potency of Leave’s message on ‘Turkey [i.e. immigration]/NHS/£350 million’. While dedicated voters are unlikely to change their views on the basis of ads, others can be persuaded by data scientists and their behavioural insights. Indeed, it only takes a small number of ‘persuadables’ to swing a close election: according to Cummings, Brexit came down to ‘about 600,000 people – just over 1% of registered voters’.

Cummings also explains that, given Vote Leave’s many campaigning disadvantages (including its inability to control the referendum’s timing, the fact that Vote Leave bucked both the government’s wish and the status quo, and that British broadcasters were pro-‘Remain’), Vote Leave relied on data scientists. Vote Leave spent 98% of its budget on digital (rather than mainstream media) advertising, with most spent on ads that experiments had demonstrated were effective, positioning these at the campaign’s end because ‘adverts are more effective the closer to the decision moment they hit the brain’.

Debate about the role of hyper-targeting, analytics and what is essentially information warfare intensified when the alleged role of Cambridge Analytica in 2016’s EU referendum and US presidential election campaigns was revealed. We should not depict this as a democratic collapse, not least because advertising technologies are not as effective as their sales teams tout. However, the prominence of analytics companies is cause for concern, especially regarding the transparency of their activities to the Electoral Commission. The issue is this: technology platforms and companies are typically partisan, and they have insight into what we think and feel – a combination that raises questions about the scope for unhealthy and non-transparent influence.

Particularly revealing was a Sky News report on 16 May 2017 focusing on comments by Will Critchlow, chief executive of digital consultancy Distilled. Concerned about the UK’s lack of oversight of digital campaigning (for instance, parties are not required to publicly record all their spending on social media), Critchlow warned about Facebook’s hyper-targeted, hyper-local messages that, by their nature, are invisible to most people (including journalists). Other techniques include: (a) creating fake pages to attract opponents, using these to plant cookies in their browsers and then delivering attack adverts; (b) rankslurs – namely, creating damaging websites designed to appear in Google’s search rankings for your opponents; and (c) impersonation – namely, pretending to be a candidate’s aide on Twitter, then expressing plausible but damaging opinions.

Whether done by bots or human influencers, the possibility that people are being surreptitiously emotionally engaged in online debates is deeply worrying. There are precedents for such targeting of sentiment and feelings, not least the Facebook Mood study, which revealed evidence of emotional contagion. This showed that exposure to a particular type of emotional content in users’ Facebook news feeds stimulates posting behaviour that reflects the emotional charge of that content. In other words, change the informational context and you can change behaviour.

What political campaigners are doing is feeling-into online collectives, measuring individual and collective sentiment, and gauging ‘fellow-feeling’. The scope to employ what McStay calls ‘empathic media’ to quantify emotional life means that big technology companies will only improve at profiling and quantifying our interests, fears, concerns and hopes. It further opens the door for automated targeting on an unseen scale, not least through algo-journalism and software capable of hyper-localised and increasingly realistic fake news. Indeed, analysis of Twitter in the 2017 UK General Election campaign shows that 16.5% of traffic about UK politics was generated by highly automated accounts. Furthermore, while professional news organisations accounted for 54% of the relevant content shared, fake news accounted for 11% – a not insignificant figure.

Thanks to technology and wealthy political interests, election campaigning is no longer transparent. What is evident, however, is the rise of automated online political profiling: measuring emotions, stimulating behaviour, algo-content, the involvement of technology giants, and rich donors happy to subvert electoral processes. It is surely time to critically challenge profiling designed to invisibly push our emotional buttons.