
Brave researchers have revealed a troubling vulnerability in Perplexity’s Comet that shows just how risky AI-powered browsers can be. The flaw, known as an indirect prompt injection, allowed attackers to trick the browser into carrying out hidden commands.
The research was led by Brave engineer Artem Chaikin and detailed with VP of Privacy and Security Shivan Kaul Sahib. They found that Comet could not tell the difference between a user’s instructions and malicious text hidden inside a webpage. That oversight opened the door to serious account takeovers and data theft.
In their demonstration, a Reddit comment with invisible text instructed Comet to visit Perplexity’s account page, grab the user’s email, intercept a one-time password from Gmail, and then send the stolen data back to the attacker. Once the victim clicked “summarize page,” the AI did the rest automatically. No additional input was required from the user.
This kind of attack bypasses traditional web safeguards such as same-origin policy and CORS. Those protections normally prevent websites from stealing data across different domains, but when an AI assistant has full control of the browser, the rules break down. Because the AI operates with the full privileges of a logged-in user, it can move freely between services and access sensitive accounts without the user realizing what is happening.
Prompt injection is not new in the AI world, but this is one of the clearest examples of how it can be weaponized inside a browser. Traditional attacks like cross-site scripting depend on flaws in a single website. Indirect prompt injection, on the other hand, can be planted anywhere. A comment on Reddit, a hidden message in a blog post, or even a PDF file could contain invisible commands that hijack an AI browser once the user interacts with it.
That makes this problem bigger than Perplexity alone. Brave notes that it compared Comet to other agentic browsers such as Nanobrowser and found that similar designs could be exploited in the same way. The entire idea of letting an AI act as an agent on the web creates risks that old browser security models never had to account for.
For ordinary users, the danger is easy to imagine. If an attacker can get an AI assistant to read hidden commands, they can potentially steal email accounts, break into banking portals, access work dashboards, or leak files stored in cloud services. The AI does not distinguish between a task the user wanted and one that was planted by an attacker. Once it starts acting, it does so with the same authority as the logged-in human.
Brave argues that browsers need new rules to deal with this. First, AI assistants should clearly separate user instructions from webpage content. Second, any sensitive action like sending an email or submitting a password should require confirmation from the user. Third, agentic browsing should be kept apart from regular browsing so users do not accidentally put themselves in a vulnerable mode.
The disclosure timeline shows that Brave reported the issue to Perplexity in late July. The company acknowledged the report and pushed out a fix, but Brave’s retesting showed the patch was incomplete. After sending additional details and waiting through the disclosure period, Brave went public on August 20. Even then, it warned that Comet was not fully hardened against this type of attack.
The incident also highlights a race in the browser industry. Companies are rushing to add AI agents that can take actions on behalf of users, but security considerations appear to be lagging. Brave says it is working on stronger guardrails for its own Leo assistant, and hopes to push for standardization across the industry.
The larger question is whether users should even trust an AI with this much power today. Browsers have long been hardened against complex attacks, but AI changes the playing field. If an assistant can click through logins, read messages, and complete transactions, then hidden instructions on a single page can unravel all of that protection.
As agentic browsing moves forward, researchers warn that privacy and security must be built in from the start. Without new architectures to keep AI assistants in check, the convenience of asking a browser to “book me a flight” could come at the cost of handing over your most sensitive data.