
Cloudflare has added new capabilities to Cloudflare One, its Zero Trust platform. The goal is to help companies manage the growing risks that come with employees using generative AI on the job.
Teams across finance, marketing, engineering, and design are already using AI to speed up tasks and create new tools. The problem is that many workers are adopting these apps without thinking about security. Sensitive data can easily end up pasted into a chatbot, or engineers may launch AI-driven apps without security oversight. Cloudflare wants to make AI adoption safer while still keeping it productive.
Matthew Prince, Cloudflare’s CEO and co-founder, said the company is “the only one today that can offer the security of a Zero Trust platform with a full set of AI and inference development products, backed by the scale of a global network.”
The new features are part of what Cloudflare calls AI Security Posture Management. The idea is to give businesses insight into how employees are using AI and to equip security teams with the tools to control that usage.
The Shadow AI Report is one of the headline additions. It lets companies see which AI apps employees are using and what they are doing with them. Cloudflare Gateway then enforces policies on that activity. A business could block unapproved AI apps, limit the types of data uploaded, or review whether tools meet privacy requirements.
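To make the idea concrete, here is a minimal sketch of what such an AI-app policy could look like, expressed as a TypeScript data structure with a tiny evaluator. The type names, category labels, hostnames, and fields are hypothetical illustrations for this article, not Cloudflare Gateway's actual rule syntax or API.

```typescript
// Hypothetical shape of a Gateway-style policy for generative AI traffic.
// Everything here is illustrative; it does not mirror Cloudflare's rule syntax.
type AiAppPolicy = {
  name: string;
  // Application labels or hostnames the rule applies to (assumed values).
  matchApps: string[];
  // What enforcement might do when the rule matches.
  action: "allow" | "block";
  // Optional restriction on uploads to the matched apps.
  blockUploads?: boolean;
};

const shadowAiPolicies: AiAppPolicy[] = [
  // Block AI apps that the security team has not reviewed.
  { name: "Block unapproved AI apps", matchApps: ["unreviewed-ai"], action: "block" },
  // Allow an approved chatbot, but stop file uploads to limit data exposure.
  { name: "Approved chatbot, no uploads", matchApps: ["chat.example-ai.com"], action: "allow", blockUploads: true },
];

// Tiny evaluator showing how such rules could be applied to observed traffic.
function evaluate(app: string, policies: AiAppPolicy[]): AiAppPolicy | undefined {
  return policies.find((p) => p.matchApps.includes(app));
}

console.log(evaluate("chat.example-ai.com", shadowAiPolicies)?.action); // "allow"
```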
AI Prompt Protection is another piece. It flags risky interactions with AI models. For example, it can warn or block employees if they try to feed source code or customer data into a chatbot. That control happens at the prompt level, before the data leaves the organization.
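As a rough illustration of what prompt-level inspection can involve, the sketch below scans outgoing text for a few patterns that commonly indicate sensitive content and returns an allow, warn, or block verdict. The detectors, labels, and heuristics are assumptions made for the example; Cloudflare has not published its detection logic here.

```typescript
// Illustrative prompt-level check, not Cloudflare's implementation.
// It inspects prompt text before it reaches an external AI service.
type Verdict = "allow" | "warn" | "block";

const detectors: { label: string; pattern: RegExp; verdict: Verdict }[] = [
  // Credential-like strings (e.g., "api_key=...") are blocked outright.
  { label: "possible API key", pattern: /api[_-]?key\s*[:=]\s*\S+/i, verdict: "block" },
  // Email addresses suggest customer data; flag them for review.
  { label: "email address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/, verdict: "warn" },
  // Crude heuristic for pasted source code.
  { label: "source code", pattern: /\b(function|class|import)\b.*[{;]/, verdict: "warn" },
];

function checkPrompt(prompt: string): { verdict: Verdict; reasons: string[] } {
  const hits = detectors.filter((d) => d.pattern.test(prompt));
  const verdict = hits.some((h) => h.verdict === "block")
    ? "block"
    : hits.length > 0
      ? "warn"
      : "allow";
  return { verdict, reasons: hits.map((h) => h.label) };
}

console.log(checkPrompt("Summarize this: api_key=sk-123 belongs to jane@example.com"));
// -> { verdict: "block", reasons: ["possible API key", "email address"] }
```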
Cloudflare is also introducing Zero Trust MCP (Model Context Protocol) Server Control. It centralizes AI model tool calls into one dashboard, making them easier to track and manage. Security teams can then apply user-level or server-level policies with more precision.
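The sketch below shows, in broad strokes, what combined user-level and server-level checks on MCP tool calls could look like when routed through a single control point. The policy shapes, server names, and identifiers are hypothetical and are not drawn from Cloudflare's product.

```typescript
// Conceptual sketch of centralized policy checks on MCP-style tool calls.
// The per-user and per-server allow lists are assumptions for illustration,
// not Cloudflare's MCP Server Control API.
type ToolCall = { user: string; server: string; tool: string };

const serverPolicies: Record<string, { allowedTools: string[] }> = {
  // Server-level policy: only these tools may be invoked on this server.
  "crm-mcp": { allowedTools: ["search_contacts"] },
};

const userPolicies: Record<string, { allowedServers: string[] }> = {
  // User-level policy: which MCP servers each user may reach at all.
  "alice@example.com": { allowedServers: ["crm-mcp"] },
};

function authorize(call: ToolCall): boolean {
  const user = userPolicies[call.user];
  const server = serverPolicies[call.server];
  return (
    !!user &&
    !!server &&
    user.allowedServers.includes(call.server) &&
    server.allowedTools.includes(call.tool)
  );
}

// Allowed: approved user, approved server, approved tool.
console.log(authorize({ user: "alice@example.com", server: "crm-mcp", tool: "search_contacts" })); // true
// Denied: tool not on the server's allow list.
console.log(authorize({ user: "alice@example.com", server: "crm-mcp", tool: "export_all_records" })); // false
```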
Cloudflare sees this as a way to balance speed and safety. AI can boost productivity, but without controls it can also create big security problems. The company believes its approach allows businesses to embrace AI without losing visibility or oversight.