As AI tools grow more capable, the risks surrounding them are evolving too. It is no longer just about whether a chatbot gives a wrong answer or hallucinates a citation. The bigger concern now is what happens when that AI can browse the web, connect to internal systems, pull data from third party apps, and act on a user’s behalf.
OpenAI clearly sees that shift happening. You see, the company has announced two new security features for ChatGPT: Lockdown Mode and standardized Elevated Risk labels. Both are aimed at addressing a threat that is getting more attention in security circles called prompt injection.
SEE ALSO: OpenAI updates its Privacy Policy
Prompt injection attacks are deceptively simple. A malicious actor hides instructions inside web content or connected data sources. When an AI system processes that content, it can be tricked into following those hidden instructions. In the worst case, that could mean leaking sensitive data or taking actions the user never intended.
As AI systems become more “agentic” and capable of interacting with other services, that risk becomes harder to ignore.
Lockdown Mode is OpenAI’s high security response. It is described as an optional setting intended for a relatively small group of users. Think executives at large companies, security teams, journalists handling sensitive information, or organizations that are already frequent targets of cyberattacks. Most everyday users will not need it.
When Lockdown Mode is enabled, ChatGPT’s ability to interact with external systems is tightly constrained. The restrictions are deterministic, meaning they are not left up to model judgment alone.
One example OpenAI gives is web browsing. In Lockdown Mode, browsing is limited to cached content. No live network requests leave OpenAI’s controlled infrastructure. That design choice is meant to prevent a scenario where sensitive data could be quietly sent to an attacker through a manipulated web page.
Some capabilities are disabled entirely if OpenAI cannot provide strong guarantees about data safety. That might sound restrictive, and it is. The point is to prioritize containment over convenience.
Lockdown Mode builds on protections that already exist in business tier plans, including role based access controls and audit logs. It is available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers. Workspace administrators can enable it by creating a specific role in Workspace Settings, then deciding which apps and which actions within those apps remain available.
That granularity matters. Many organizations rely on connected workflows. AI systems might interact with internal documents, customer databases, or productivity tools. Administrators may not want to shut everything down, but they may want strict limits on what is allowed.
SEE ALSO: ChatGPT adds advertising, and OpenAI asks for your trust
There is also integration with OpenAI’s Compliance API Logs Platform, which provides visibility into app usage, shared data, and connected sources. For companies concerned about regulatory compliance or internal oversight, that logging capability may be just as important as the Lockdown switch itself.
OpenAI says it plans to bring Lockdown Mode to consumers in the coming months. That is notable. It suggests the company sees advanced threat scenarios extending beyond enterprise boardrooms. Activists, researchers, and public figures may eventually have access to similar protections.
Alongside Lockdown Mode, OpenAI is standardizing how it flags higher risk features. Certain capabilities in ChatGPT, ChatGPT Atlas, and Codex will now carry an “Elevated Risk” label. Obviously, this is less about disabling features and more about transparency.
AI systems are often most useful when they have network access or deep integrations. In Codex, for example, developers can grant the assistant permission to access the network so it can look up documentation or interact with external services. That kind of functionality can save time and boost productivity.
But it also introduces exposure. With the Elevated Risk label, OpenAI aims to make those tradeoffs explicit. Users will see a clear notice explaining what changes when network access is enabled, what risks may be introduced, and when such access is appropriate.
It is a subtle but important shift. Instead of quietly enabling powerful capabilities, OpenAI is putting a caution sign in front of them. Some users will accept the risk in exchange for more flexibility. Others may decide the added exposure is not worth it, especially when working with private or regulated data.
The company also says it will remove the Elevated Risk label once it determines that safeguards have improved enough to mitigate those concerns for general use. In other words, the label is not meant to be permanent. It is a signal that the industry is still catching up to the implications of highly connected AI systems.
There is a broader pattern here. For the past couple of years, AI vendors have focused on adding more connectivity, more autonomy, and more integrations. That expansion made these systems dramatically more useful. It also made them more attractive targets.
Now we are seeing the second phase of that evolution. Security and control are moving closer to the forefront of product design.
For enterprises that have been cautious about letting AI interact with sensitive information, Lockdown Mode may provide some reassurance. For individual users who want maximum capability, the Elevated Risk label serves as a reminder that power and exposure often go hand in hand.
AI is no longer just a chat interface. It is becoming a workflow engine, a coding assistant, and in some cases, a decision support tool. As that transformation continues, the security conversation will only intensify. OpenAI appears to be acknowledging that reality and building guardrails accordingly