Microsoft’s Copilot Can Be Used as an “Automated Phishing Machine”

Copilot is Microsoft’s AI platform, available to both consumers and companies through a subscription to Microsoft 365. But while there are benefits to the company’s artificial intelligence features, there are also risks: As it turns out, a skilled hacker could turn Copilot into, what Wired calls, an “automated phishing machine.” Wired reports that security researcher Michael Bargury has discovered five
August 20, 2024
 / 
meritsolutions
 / 
Image

Copilot is Microsoft’s AI platform, available to both consumers and companies through a subscription to Microsoft 365. But while there are benefits to the company’s artificial intelligence features, there are also risks: As it turns out, a skilled hacker could turn Copilot into, what Wired calls, an “automated phishing machine.”

Wired reports that security researcher Michael Bargury has discovered five unique ways Copilot can be abused to act as a dream tool for spammers and phishers. Essentially, Bargury has figured out a way to take the standard use case of the AI—typing prompts the AI uses to retrieve data—and twists it by adding prompts to trick the AI into acting maliciously.

The most concerning example, which manipulates Copilot into an automated phishing machine, Bargury calls “LOLCopilot.” Once a hacker breaks into your work email, they can use Copilot to identify who you email, impersonate how you write your emails, and send an email to a large group of users with a malicious attachment. Bargury says, “I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf.”

In another example, Bargury uses Copilot to determine the salaries of his hypothetical coworkers. He asks Copilot to retrieve this salary data, and instructs the bot not to add references to each file it retrieves the data from. Doing so, Bargury is able to bypass protections Microsoft has in place for sensitive files.

That past example also assumed the hacker had access to the target’s private emails, but not all examples need that access. Bargury demonstrated how “poisoning” Copilot’s database by giving the AI an email with a malicious message can convince the bot to provide a target’s banking information. A fourth example examines how a potential hacker could use the same tactic to discover whether a company’s earnings call will be good or bad, while a final demonstration shows how a hacker can make Copilot return malicious links to targets’ queries.

Microsoft, for their part, doesn’t deny these vulnerabilities exist. In fact, the company thanked Bargury for his work in identifying these issues: “The risks of post-compromise abuse of AI are similar to other post-compromise techniques…Security prevention and monitoring across environments and identities help mitigate or stop such behaviors.”

The lesson here, is that while AI has legitimate benefits for companies and employees, giving these systems too much data too soon, without understanding how to prevent bad actors from abusing those databases, is risky. Researchers like Bargury help steer the conversation in the right direction, and force companies like Microsoft to continue to improve their platforms to make sure the products we use at work, don’t work against us.

Share This

Leave a Reply



Sign Up for weekly MERIT Security Briefing

By signing up, you agree to our Privacy Policy.