AI Web Browsers Are Not Secure

For decades, browsing the web has largely been the same—even if the web itself has changed dramatically. Your web browser may have new features compared to when it first launched, but at its core, you type in a website, and you scroll through it.
AI browsers aim to change that experience. Companies like OpenAI and Perplexity want to do more than just let users scroll and click through websites. Instead, the web browsers these companies make incorporate AI to change the experience in two key ways. First, you’ll find an AI assistant at the ready, usually contained in a collapsible side menu. The idea is, you can call up your assistant whenever you’d like, and ask it questions about the websites you’re reading. If you want an article summarized, or to learn more about a given topic, you could ask the assistant for help.
But the second change is the one that’s far more futuristic. These AI browsers come with something called “agent mode,” which allows the AI to actually interact with these web pages on your behalf. Rather than simply ask the assistant for information, you can ask the AI to perform tasks in your stead. If you want to plan a vacation, the agent can attempt to make the booking for you, scrolling through flights and hotel deals, selecting the ones that match your needs, and entering all relevant personal and financial information to make the booking. If you want to order dinner, you can ask the browser to find you a top-rated restaurant to order from.
It sounds like something out of science fiction, but these browsers are real, and they are here. OpenAI’s is called ChatGPT Atlas, while Perplexity’s is called Comet. Yet, while you can download these web browsers and try these AI agent modes today, cybersecurity experts strongly recommend you do not. At this time, AI browsers are highly insecure, and are susceptible to what are known as prompt injection attacks.
This vulnerability stems from the nature of AI browsers themselves: the agent mode. Since the AI is taking directions from the user, it follows all commands it receives. Bad actors, then, can hide instructions within websites, so when an AI agent encounters it, it assumes the users gave it that direction. The user, however, won’t know this direction was given, so the AI will act on behalf of the actor, without the user’s awareness. You might have been trying to book a trip, but your AI assistant is now opening a malicious website instead.
Share This



