Thinking About AI’s Role in Election Security

October 3, 2023 / meritsolutions

One of the biggest stories in cybersecurity over the last decade or so has been election interference. The internet empowers hackers in different countries to meddle in other nations’ election processes. But there’s a new factor we haven’t seen in action before: AI.

AI as we know it is in its infancy. ChatGPT, the generative AI program that kicked off the modern AI movement, only launched in November 2022. It's been less than a year since generative AI programs took over the conversation, and we've seen them do a host of impressive things. Of course, impressive cuts both ways, and hackers have put AI to nefarious use.

As Fortune examines, 71% of people living in democracies will hold national elections between October 2023 and the end of 2024. Eyeing the situation from a US perspective, we know that other nations have interfered in our elections in every cycle since 2016, so there's no reason to think they won't try again in 2024.

To that point, the article's author discusses their time at a cybersecurity conference, which featured conversations about election integrity. Experts agreed that nations such as Russia, China, and Iran would interfere in 2024, as would "domestic actors": bad actors based in the US who can use free or cheap generative AI to produce misinformation with the intention of disrupting the election.

"Persona bots" are one such potential method of spreading this misinformation. These bots post content automatically using AI, but not every post is malicious. In fact, they're designed to post ordinary content that resembles the average social media user, so platforms' detection systems believe them to be legitimate. Every now and then, however, they'll post a piece of malicious political content. Enlist thousands or even millions of these persona bots to repeat the scheme, and you have an effective way to spread misinformation at scale.
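To make the tactic concrete, here is a minimal sketch (not drawn from any real bot; the post text, 2% malicious rate, and function names are all illustrative assumptions) of why persona bots are hard to flag on volume alone: almost everything a bot posts looks benign.

```python
import random

# Hypothetical persona-bot behavior: mostly benign filler posts,
# with the occasional piece of malicious political content mixed in.
BENIGN_POSTS = [
    "Great game last night!",
    "Anyone tried the new cafe downtown?",
    "Beautiful sunset today.",
]
MALICIOUS_POSTS = ["[disinformation payload]"]  # placeholder, not real content

def next_post(rng, malicious_rate=0.02):
    """Return (kind, text) for one post; the 2% rate is an assumption."""
    if rng.random() < malicious_rate:
        return ("malicious", rng.choice(MALICIOUS_POSTS))
    return ("benign", rng.choice(BENIGN_POSTS))

rng = random.Random(0)  # seeded for reproducibility
posts = [next_post(rng) for _ in range(1000)]
bad = sum(1 for kind, _ in posts if kind == "malicious")
print(f"{bad} of {len(posts)} posts were malicious")
```

Because the malicious share is so small per account, detection that looks only at an individual bot's feed sees an overwhelmingly normal user; the campaign's effect comes from repeating this across thousands of accounts.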

The author makes the excellent point that our security networks need to share information about disinformation campaigns whenever they are discovered. The best way to fight this malicious activity is to share knowledge, so that everyone has the information necessary to develop effective countermeasures. Only time will tell what part AI will play in these elections, but if the past year has taught us anything, it's that the part will be a big one.
