Chinese AI tools are leaking workplace secrets and companies are clueless


Harmonic Security has dropped a new report that should raise eyebrows in every IT department. The company tracked how employees are using generative AI tools developed in China, and the results are a bit alarming. Nearly one in twelve workers in the US and UK used at least one Chinese GenAI app in the span of just 30 days.

Among those users, Harmonic spotted more than 500 incidents in which sensitive data was exposed, including private source code, internal business logic, API keys, and financial or legal documents. DeepSeek accounted for the bulk of the exposure, though other tools like Moonshot Kimi, Qwen, Baidu Chat, and Manus also showed up.

What kind of data is being leaked? Development artifacts, things like proprietary code and internal systems, made up about a third of the exposure. Mergers and acquisitions info came next, followed by personal data, financial details, customer records, and legal documents. Engineering-heavy companies seem especially at risk, since developers often turn to GenAI for coding help without thinking twice about what they upload.

“All data submitted to these platforms should be considered property of the Chinese Communist Party,” said Alastair Paterson, CEO and co-founder of Harmonic Security. He pointed to the lack of transparency around data retention, reuse, and training policies as a core concern. He added that using these tools can create “potentially serious legal and compliance liabilities.”

Despite that, he acknowledged that “these apps are extremely powerful with many outperforming their US counterparts, depending on the task.” That’s part of what keeps employees using them. Paterson noted that they’ve become “blind spots for most enterprise security teams.”

Paterson also said that blocking these apps isn’t always effective. “Even in companies willing to take a hardline stance, users frequently circumvent controls.” Instead, he emphasized a more balanced approach. That includes educating employees, offering safer GenAI alternatives that actually support developer needs, and enforcing policies that stop sensitive data like source code from being uploaded in the first place.
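For teams trying to enforce that last point, the most basic version is a pre-submission check that flags credential-shaped strings before a prompt ever leaves the browser or proxy. The sketch below is a minimal, hypothetical illustration in Python, not part of Harmonic's product; the patterns and function names are my own, and a real deployment would lean on a vetted secrets scanner or DLP gateway rather than a handful of regexes.

```python
import re

# Hypothetical example only: naive patterns for a few common secret formats.
# A real policy would rely on a maintained scanner, not a hand-rolled list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                            # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),    # PEM private key
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                         # GitHub personal access token
    re.compile(r"\bapi[_-]?key\b\s*[:=]\s*\S+", re.IGNORECASE), # generic "api_key = ..." assignments
]


def prompt_looks_safe(prompt: str) -> bool:
    """Return False if the prompt appears to contain a credential."""
    return not any(pattern.search(prompt) for pattern in SECRET_PATTERNS)


if __name__ == "__main__":
    risky = "Debug this for me: api_key = sk-live-1234567890abcdef"
    print(prompt_looks_safe(risky))  # False -> block, redact, or warn before sending
```

Even a coarse filter like this catches the most embarrassing cases, leaked keys and pasted credentials, though the report's larger point is pairing such guardrails with education and sanctioned alternatives rather than relying on filters alone.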

Organizations that ditch blanket bans and lean into what Paterson called “light-touch guardrails and nudges” have seen up to a 72 percent drop in data exposure. Even better, they’re also seeing GenAI adoption jump by 300 percent.

The findings come from Harmonic Security’s Protect platform, which monitors how people use SaaS-based GenAI apps. The data was anonymized before analysis and included everything from file uploads to prompt content and usage frequency.

Bottom line: if your security team isn't keeping an eye on what AI tools your staff is using, you may already have a problem and not even know it.

