AI Bots Are Susceptible to “Prompt Injection”
AI is the “it” tech of 2023, dominating the conversation as well as people’s interests. However, as impressive as generative AI can be, it’s also not immune from cyberattacks. As such, AI can potentially pose a security risk to your business. One such vulnerability currently being investigated is known as “prompt injection.” How prompt injection works In a prompt injection
AI is the “it” tech of 2023, dominating the conversation as well as people’s interests. However, as impressive as generative AI can be, it’s also not immune from cyberattacks. As such, AI can potentially pose a security risk to your business. One such vulnerability currently being investigated is known as “prompt injection.”
How prompt injection works
In a prompt injection attack, an attacker gives an AI bot a command in plain language instead of code, as you might expect in a traditional attack. Because the command is in, let’s say, English, the attacker can simply ask the bot to do whatever the attacker wants it to.
If you have an AI bot connected to work accounts, for example, an attacker could command the bot to scan your email for specific messages; search your PC for confidential documents; craft an email from your account saying whatever the attacker wants; among many other possible scenarios.
The good news is prompt injection is, at the moment, mostly theoretical. There aren’t any known cases of prompt injection that have happened in the wild. However, researchers have tricked AI bots with prompt injection in real-life tests, so it’s a legitimate possibility, and, thus, a legitimate threat.
In one scenario, researchers were able to hide prompts inside a web page, which the target AI bot took as a command, informing the subject they had just “won” an Amazon gift card. In another, researchers were able to break the AI bot’s parameters and limitations with a seemingly random string of commands, opening the chatbot up to answers that some might deem dangerous. If bad actors can figure out which words and commands trip specific AI bots in this way, that presents a troubling vulnerability.
How to protect against prompt injection
Until there are true protections in place against prompt injection, your best bet is to employ the same best practices you normally would with cybersecurity. Don’t use AI for work, unless it is part of your work in the first place. Make sure no one has access to your devices that use AI. That means keeping strong and unique passwords on your device as well as all your accounts.
Share This