Scammers Are Using AI in Scary New Ways

Last week, the Boston Globe’s Beth Teitell wrote about a phishing attempt she recently fell for. Teitell received an invitation to a party from a friend, a party Teitell says she would have loved to attend. She was so excited about the invitation, in fact, that she didn’t notice the party started at an odd time (10 a.m.) on a strange day (a Sunday) in a city far from her home. As you might expect, neither the party nor the invitation was real.
As Teitell admits, she was the victim of a socially engineered phishing attempt. Scammers took the time to research her social network to find a person whose party she would likely want to attend. That’s why the invite caught her attention in a way a typical, generic phishing email might not have. And this is far from the only kind of socially engineered phishing out there: other tactics play a longer game, grooming the victim over weeks or months until they hand over sensitive data or money.
But as bad as sophisticated phishing already is, it’s only going to get worse with the rise of AI. Think about the workflow behind a phishing attempt like the one Teitell faced: You have to find your target (Teitell), comb through their social media accounts for a likely peer, then craft a convincing message that elicits a reaction. That’s a lot of work for a single target, who may not even take the bait.
Now, imagine outsourcing that work to an AI: the autonomous tech can scour the internet for relevant data points, including social media accounts, before settling on a suitable attack plan. From there, the AI can craft the phishing email itself, much faster than most humans could. Teitell points to an IBM report suggesting an AI program can write a phishing campaign in five minutes that’s as effective as one a team of humans spends 16 hours on.
That poses a number of risks beyond this specific example. An AI that can collect every trace of you across the internet (images, videos, posts, and audio) can effectively steal your identity to trick other people. A model trained on the audio from your videos could produce a convincing copy of your voice, which could then be used against your family members in phony phone calls asking for money.
To truly combat the risks of AI in phishing schemes, we likely need corporate and government action. But that doesn’t mean you can’t protect yourself in the meantime. All of the usual cybersecurity best practices apply here: Don’t click links from strangers; make sure you know who you’re communicating with at all times; be wary of unexpected phone calls, and always be ready to hang up and call back directly. And in a world where AI can steal your voice, it might be time to establish a “code word” with trusted friends and family. That way, if you receive a call from someone asking for help or money in a high-stakes situation, you can ask for the code word. If they don’t know it, you know it’s a scam.