Official Warns Hackers Are Already Using AI
July 24, 2023

You might use ChatGPT to see how well it can put together a recipe for dinner, or even to see how well it can write an inconsequential email. But hackers are using the tools behind generative AI for bad, and it’s only getting worse.

As Canada’s top cybersecurity official, Sami Khoury, told Reuters, hackers are using AI to write malicious programs, craft phishing emails, and spread disinformation across the internet. While Khoury didn’t dive into specifics, his warning mirrors concerns raised by watchdog groups and officials in both government and industry.

These warnings assert that bad actors are abusing the power of large language models (LLMs), which are trained on massive bodies of text. That training enables the models to produce human-like writing, including convincing phishing emails and, potentially, malicious software. The danger is that it lets hackers step up their game: phishing emails were once often riddled with spelling and grammar errors, but LLMs can now help ensure these malicious messages read like they were written by a professional.

In theory, AI also helps hackers create malicious code they might not otherwise have been able to write themselves, since LLMs can be trained on huge volumes of malicious programs just as they are on ordinary language. Luckily, we’re not yet at the point where a hacker can simply ask an LLM for effective malware and get a working piece of it in return. But that future may not be far off.
