FBI Warns of Increased AI Cyberattacks
The FBI is warning the public that the number of cyberattacks aided by artificial intelligence is skyrocketing. As Tom’s Hardware reports, these attacks include both phishing and malware development by people abusing AI technology for nefarious purposes.
Part of the issue is AI’s ability to aid in the development of effective malware: In the past, creating malware yourself required solid coding knowledge. Without it, you wouldn’t get far building malware that could do what you wanted.
Before AI, you could turn to hackers who would create the malware for you, but that required a serious sum of cash to fuel the operation. With AI, however, you can simply ask the software to write the malware for you and specify exactly what you want it to do.
But even this approach has limitations: a ChatGPT-like AI might not be able to accomplish exactly what a hacker is looking for, especially if its developer (e.g., OpenAI) restricts its ability to produce harmful or malicious output.
The real danger lies in someone creating an open-source version of ChatGPT that is very good at creating malware for specific purposes. An AI trained on a database full of malicious software, and encouraged by its developer to put that training to use, is a concerning prospect. The FBI has declined to publicly name any specific open-source models of this kind.
The next threat is AI misinformation, such as “deepfakes”: video or audio altered using AI to make it look or sound like someone said or did something they never did. You could make a politician appear to support a position from the other side of the aisle, or make the President issue an order they never actually gave.
The FBI doesn’t have many solutions yet for fighting these emerging threats. One idea is to watermark AI-generated content, so there is always an easy way to identify whether something was digitally altered. But that kind of technology isn’t here yet.
What you can do to protect the integrity of your business is to stay vigilant: Exercise caution whenever opening strange messages, and never click on links from contacts you don’t recognize. If you see a video that appears strange or out of context, you shouldn’t take it at face value.
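As a rough illustration of the kind of vigilance described above, the sketch below flags a few crude warning signs in a link before you click it. The allowlist and thresholds are hypothetical, and these naive heuristics are no substitute for real email-security tooling; they only show the sort of checks worth making a habit of.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains you actually do business with.
TRUSTED = {"example.com", "example-bank.com"}

def red_flags(url: str) -> list[str]:
    """Return a list of naive warning signs found in a link."""
    flags: list[str] = []
    host = urlparse(url).hostname or ""
    if host in TRUSTED:
        return flags  # a known-good domain raises no flags
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if "xn--" in host:
        flags.append("punycode label (possible lookalike characters)")
    if host.count("-") >= 2:
        flags.append("many hyphens (common in lookalike domains)")
    if not url.lower().startswith("https://"):
        flags.append("not served over HTTPS")
    return flags

for link in ("https://example.com/login", "http://192.168.0.1/login"):
    print(link, "->", red_flags(link))
```

A real mail gateway would check reputation databases and certificate details instead, but the habit is the same: treat any flagged link as untrusted until verified through a channel you control.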