AI Poses Risk to U.K. Elections

April 9, 2024 / meritsolutions

We’ve discussed before how experts worry artificial intelligence and the technology that comes with it poses a potential risk to elections here in the U.S. But we are not alone in these concerns: Other countries are also worried about how AI will impact the security and integrity of their elections. The latest? The United Kingdom.

Citizens of the U.K. will head to the polls on May 2 to elect leaders in local elections. Later this year, they’ll vote in the general election, once Prime Minister Rishi Sunak chooses an election date. As both elections kick into gear, however, voters may face both cybersecurity threats and misinformation attacks, courtesy of, among other things, AI.

The cyberattack campaigns may have already begun. As reported by CNBC, the U.K. says a hacking group connected to the Chinese government tried, and failed, to break into the email accounts of British lawmakers. The U.S. joined the U.K. in issuing sanctions in response, while Australia and New Zealand condemned the activity.

What will happen next is anyone’s guess. But we’ve seen the AI playbook before: Bad actors may abuse AI to create misleading pictures, videos, or audio clips to trick citizens (and perhaps, in some cases, lawmakers themselves) into believing things that never happened. AI programs can generate a video of a “politician” saying something they never said, or place calls to voters with a message the politician never recorded or approved. Voters now have to guard against deceptions that simply didn’t exist before.

Of course, the old worries are still here, only now powered by AI. Phishing campaigns are likely to be in full force during these elections as well, as bad actors seek to compromise the personal information of anyone who works within the political system, including politicians, staff, and institutions. AI can help write the emails, messages, and fake websites bad actors aim at victims, making them more realistic than ever before.

That’s the key issue here: More bad actors than ever can now use AI to spread misinformation and launch cyberattacks. The technology lowers the bar, letting people who previously lacked the skills for these campaigns, or whose past attempts were obvious fakes, generate convincing misinformation and phishing messages.

While some companies, like Meta, are working on adding watermarks to AI content generated by their tools, these guardrails are still in their infancy, which leaves the advantage with those looking to spread misinformation and run phishing campaigns. As such, continue to use your best judgment online: Double-check images and videos to confirm that what they show actually happened; don’t trust everything you read; and be highly skeptical of any strange message sent your way.

It’s too bad we need to be so cynical on the internet these days. But when elections are literally on the line, it’s more important than ever.
