During April’s BCS Policy Jam on deepfakes and their impact on elections, Lord Clement-Jones, the Liberal Democrat peer and Co-Chair of the All-Party Parliamentary Group on AI, was clear: “We’re in a bit of a crucible now for this kind of technology, and I think that it’s unfortunate we weren’t able to anticipate it would be in such prolific use before this year started.”

He was referring to the explosion of audio and video deepfakes circulating on social media and encrypted messaging apps. In 2018, only a few thousand were doing the rounds; now they number in the billions.

This exponential growth has been fuelled by the availability of cheap, easy-to-use AI software for creating the material. At the same time, the technical solutions needed to counter the problem are still largely in development.

Meanwhile, half the world’s democracies are heading to the polls in 2024, and it’s likely the UK will hold a general election in the autumn.

Survey results

The BCS Policy Jam began with the BCS Director of Communications, James Woodward, giving an overview of a recent BCS member survey completed by just over 1,200 of our tech experts. A majority, 65%, were concerned that deepfakes would affect the outcome of the UK election, and 92% of technologists said political parties should agree to publicise where and how they use AI in their campaigns.

The poll also found that respondents rated public education and technical solutions, such as watermarking and labelling, as the two most effective measures for limiting the detrimental impact of deepfakes on democracy.

Only 8% were optimistic that a pact signed by several major tech companies would be effective. The firms agreed in February to adopt ‘reasonable precautions’ to prevent AI from being used to disrupt democratic elections around the world.

No great surprise

Lord Clement-Jones was joined on the BCS Policy Jam panel by Tom Bristow, a tech reporter at the news organisation Politico; Lisa Forte, a partner at Red Goat Cybersecurity; and Hannah Perry, from the think tank Demos. The debate was chaired by Claire Penketh, BCS Senior Policy and Public Affairs Manager.

The panel's overall response was that no one was greatly surprised by the survey results. Lord Clement-Jones noted that in the UK we've already had viral deepfakes of the Mayor of London, Sadiq Khan; the Labour leader, Keir Starmer; and the Prime Minister, Rishi Sunak.

Lisa, who specialises in studying disinformation campaigns, hacktivists, and ransomware groups, said the impact could be more nuanced than swaying whom people vote for: “It can also be the demobilisation of voters, and we’ve seen that become very evident in a lot of countries where people have just got to a point where they just thought, ‘Well, I’m not going to vote’, and that can also have a huge effect.”

Tom, who closely follows the latest legislation on online safety, misinformation, digital competition, and markets, agreed that the ‘biggest threat’ was that the public would switch off from elections. But he refused to be too pessimistic about the future of democracy. He said the tech companies’ pact to do their best to counter the influence of deepfakes was an encouraging ‘first step’, adding that it was in Big Tech’s interests too: “If people don’t believe anything they see or read, then the business model for search and social media is also threatened. So, this isn’t just about public interest, democracy and the media.”

Global picture

Deepfakes are a global phenomenon. We recently heard, for instance, of a fake robocall in the US imitating President Biden. Lisa spoke about misinformation and disinformation in a local election in India last year. She said the main platform used in that case to spread the deepfakes was WhatsApp, pointing out that social media is not the only culprit.

Tom mentioned the elections held in Pakistan, Taiwan, Indonesia, and Slovakia; in the latter country, a fake recording of a candidate saying he’d rigged the election went viral. But Tom said the jury was still out on whether deepfakes affect election results: “Making a deepfake and making it count are two different things. So, we just have to be careful not to confuse the two. Just because it’s easy to do doesn’t mean it will swing an election. So far, we’d struggle to point to examples where a deepfake has widely influenced elections.”

No silver bullet

Lord Clement-Jones said he’s not a ‘mad regulator’ but added: “I’m moving towards the idea that we need to actually ban the fakes, not just for pornography but for other purposes, and not just for sharing. We also need to go back up the food chain to the AI [software] developers who create deepfakes.”

But that would be tricky, said Lisa: “A very large number of people who are causing the most damage, in my opinion, are nefarious actors or are other states. If you think you’re going to put out a call to get everyone to agree to watermark their generated content, it’s a bit of nonsense. You’ll then bring in the hacktivists and other nefarious actors.”

Tom was also sceptical about whether banning deepfakes would work and said there should’ve been more in the Online Safety Act around strengthening Ofcom’s powers to deal with mis- and disinformation. He added the tech platforms could do more to take down such material.

The friction factor

The panel discussed using AI to catch AI by finding a tech solution to slow the flow of fake news, which, in effect, introduces ‘friction’. Lord Clement-Jones said: “We talked about this when we did the Online Safety Joint Committee scrutiny of the draft bill because one of the real difficulties is the amplification of content. But I haven’t actually seen a tool that does that, and I’d be really interested to see something along those lines.

“I don’t think there’s any silver bullet, and I don’t think regulation is necessarily a silver bullet, but the combination of all this might slow down people’s ability to spread this kind of information.”

However, Tom said: “I think you’d have a tough time convincing social media firms to put in some kind of circuit breaker. They want viral content, and the more viral, the better.” Both Lisa and Tom said prompts could also be helpful; Lisa suggested, for example, a label warning users that they hadn’t read a contentious social media post before sharing it.
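To make the ‘circuit breaker’ idea concrete: no such tool exists yet, as Lord Clement-Jones noted, but in principle a platform could throttle amplification once a post’s share velocity crossed a threshold. The Python sketch below is purely hypothetical; the class, thresholds and review step are invented for illustration and do not describe any platform’s actual mechanism.

```python
import time
from collections import deque

# Hypothetical 'friction' circuit breaker. If a post is reshared faster
# than a set velocity, further shares are held for review instead of
# being amplified immediately. All names and thresholds are invented
# for illustration; no platform is known to work this way.

class ShareCircuitBreaker:
    def __init__(self, max_shares=1000, window_seconds=600):
        self.max_shares = max_shares    # shares allowed per window
        self.window = window_seconds    # sliding window length, seconds
        self.share_times = deque()      # timestamps of recent shares

    def allow_share(self, now=None):
        """Return True if the share may propagate immediately,
        False if friction should be applied (hold for review)."""
        now = time.time() if now is None else now
        # Discard share events that have fallen outside the window
        while self.share_times and now - self.share_times[0] > self.window:
            self.share_times.popleft()
        self.share_times.append(now)
        return len(self.share_times) <= self.max_shares

# One breaker per post: shares beyond 1,000 in 10 minutes are slowed.
breaker = ShareCircuitBreaker()
if not breaker.allow_share():
    print("Post held for review before further amplification")
```

A sliding window keeps the check cheap, but as Tom pointed out, the real obstacle is commercial rather than technical: platforms want virality, not friction.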

Tapping into the human, not the tech, response

Fact-checking fake news takes time, so what about the more instinctive, visceral reactions of conscientious content consumers? Lisa said there was research from the Massachusetts Institute of Technology on this: “Instead of fact-checking, think about emotions. MIT’s study found that the fake news posts most likely to go viral were broadly classified under the disgust emotion. Instead of critiquing an image or video, turn that thought inwards and think: is this generating an emotional feeling of disgust, horror, and shock? If so, I need to apply an extra level of critique to this before I post it. It’s almost recalibrating the human approach instead of thinking about what can be done at a content level.”

Hannah Perry, Lead Researcher at CASM, the digital research hub at Demos, believed media literacy should be a compulsory part of Personal, Social, Health, and Economic (PSHE) education in schools. This would give pupils the critical analysis tools to understand what they see on social media and help them think twice before sharing it.

There is also the issue of confirmation bias, in that we’re more likely to think something is true if it chimes with our core beliefs. Both Tom and Hannah brought this up, and Hannah said, “For me, the way that you combat that is through tackling those biases directly. If we have knowledge of stereotypes and those sorts of systemic biases in our society, then we are better equipped to manage our own biases when consuming content.”

Ramping up the tech solutions

There was a lively discussion in the chat among the 100-plus attendees who logged on to watch the Policy Jam. There was general support for better education in digital literacy and for the tech solutions under discussion. In our survey, watermarking and labelling AI-generated material were the preferred approaches. Professor James Davenport, a BCS Fellow who has researched this area, joined the panel discussion.

On watermarking pictures, he said: “This will guarantee that the photograph was genuine and hadn’t been altered. Of course, there’s still the problem of whether it’s in the right place.”
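Professor Davenport did not describe a specific mechanism, but one way such a guarantee can work in principle is content provenance signing, as in schemes like C2PA: hash the image bytes at capture time and sign the hash, so any later alteration breaks verification. The Python sketch below is a simplified illustration using a shared secret (real schemes use public-key signatures and richer metadata); the key and function names are invented.

```python
import hashlib
import hmac

# Simplified illustration of content provenance 'watermarking': hash the
# image bytes at capture time and sign the hash, so any later alteration
# breaks verification. Real schemes (e.g. C2PA) use public-key signatures
# and richer metadata; the shared secret below is a simplification.

SIGNING_KEY = b"hypothetical-device-secret"  # invented for this sketch

def sign_image(image_bytes: bytes) -> str:
    """Produce a signature binding the signer to these exact bytes."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """True only if the image is byte-for-byte unaltered since signing."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"...raw image bytes..."
sig = sign_image(original)
assert verify_image(original, sig)              # untouched: passes
assert not verify_image(original + b"x", sig)   # altered: fails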


As well as asking the UK's political parties to label their use of AI-generated content in campaign material, the panel suggested a code of conduct and standards, agreed between the parties, on sharing material of doubtful origin.

Tom said that seemed to have happened when opposing parties declined to amplify the deepfaked audio of the Labour leader apparently berating his staff. The bottom line for Tom was this: “I think the most effective thing that politicians can do ahead of the general election is take some of the responsibility themselves, call out deepfakes when they see them being used, and come to some kind of agreement on not using them.”

He added that since campaigning was likely to become more ‘bitter’ the closer the UK got to an election, the sooner such an agreement was reached, the better.

In the last few minutes of the BCS Policy Jam, everyone agreed that improving digital literacy and having faith in one’s gut instincts were vital, as was boosting the development of technical solutions to tackle deepfakes. Lord Clement-Jones reiterated that the creation of deepfakes needed legislation; in the meantime, existing and incoming laws could help.

Only time and official research will show the actual effect of this widely available, potentially disruptive technology. But as for whether deepfakes will affect the upcoming UK vote, the Lib Dem peer said that, in his opinion, it was unlikely: “The margin between the political parties is going to be so large in terms of who becomes the government that no deepfake will be able to influence the outcome of the election.”