
Exclusive: White Circle raises $11 million to stop AI models from going rogue in the workplace

Source: FortuneView Original
Business · May 12, 2026

One evening in late 2024, Denis Shilov was watching a crime thriller when he had an idea for a prompt that would break through the safety filters of every leading AI model.


The prompt was what researchers call a universal jailbreak, meaning it could be reused to get any model to bypass its own guardrails and produce dangerous or prohibited outputs, like instructions on how to make drugs or build weapons. To do so, Shilov simply told the AI models to stop acting like a chatbot with safety rules and instead behave like an API endpoint, a software tool that automatically takes in a request and sends back a response. The prompt reframed the model’s job as simply answering, rather than deciding whether a request should be rejected, and made every leading AI model answer dangerous questions it was supposed to refuse.

Shilov posted about it on X and, by the next morning, it had gone viral.

The social media success brought with it an invitation from companies including Anthropic to test their models privately, something that convinced Shilov that the issue was bigger than just finding these problematic prompts. Companies were beginning to integrate AI models into their workflows, Shilov told Fortune, but they had few ways to control what those systems did once users started interacting with them.

“Jailbreaks are just one part of the problem,” Shilov said. “In as many ways people can misbehave, models can misbehave too. Because these models are very smart, they can do a lot more harm.”

White Circle, a Paris-based AI control platform that has now raised $11 million, is Shilov’s answer to the new wave of risks posed by AI models in company workflows.

The startup builds software that sits between a company’s users and its AI models, checking inputs and outputs in real time against company-specific policies. The new seed funding comes from a group of backers that includes Romain Huet, head of developer experience at OpenAI; Durk Kingma, an OpenAI cofounder now at Anthropic; Guillaume Lample, cofounder and chief scientist at Mistral; and Thomas Wolf, cofounder and chief science officer at Hugging Face.

White Circle said the funding will be used to expand its team, accelerate product development, and grow its customer base across the U.S., U.K., and Europe. The startup currently has a team of 20, distributed across London, France, Amsterdam, and elsewhere in Europe. Shilov said almost all of them are engineers.

A real-time control layer

White Circle’s main product is a real-time enforcement layer for AI applications. If a user tries to generate malware, scams, or other prohibited content, the system can flag or block the request. If a model starts hallucinating, leaking sensitive data, promising refunds it cannot issue, or taking destructive actions inside a software environment, White Circle says its platform can catch that too.
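White Circle’s platform is proprietary, and none of the names below come from its API. But the general pattern the article describes, a policy-enforcement layer that inspects both the user’s request and the model’s response before either side sees them, can be sketched roughly like this (real systems would use trained classifiers, not keyword checks):

```python
# Hypothetical sketch of a real-time AI control layer. All names here are
# invented for illustration; this is not White Circle's actual software.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

# A "policy" is just a predicate over text. In production this would be a
# classifier or model-based check, not a keyword match.
Policy = Callable[[str], Verdict]

def no_refund_promises(text: str) -> Verdict:
    """Example company-specific rule: the agent may not discuss refunds."""
    if "refund" in text.lower():
        return Verdict(False, "agent may not promise refunds")
    return Verdict(True)

def enforce(text: str, policies: list[Policy]) -> Verdict:
    """Return the first failing verdict, or allow if every policy passes."""
    for policy in policies:
        verdict = policy(text)
        if not verdict.allowed:
            return verdict
    return Verdict(True)

def guarded_call(user_input: str,
                 model: Callable[[str], str],
                 policies: list[Policy]) -> str:
    # Check the request before it ever reaches the model...
    verdict = enforce(user_input, policies)
    if not verdict.allowed:
        return f"[blocked input: {verdict.reason}]"
    response = model(user_input)
    # ...and check the model's output before it reaches the user.
    verdict = enforce(response, policies)
    if not verdict.allowed:
        return f"[blocked output: {verdict.reason}]"
    return response
```

The key design point, as the article describes it, is that enforcement happens on both sides of the model call: a harmful request can be blocked before it consumes tokens, and a bad model response (a hallucinated refund promise, leaked data) can be caught before it reaches the user.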

“We’re actually enforcing behavior,” Shilov said. “Model labs do some safety tuning, but it’s very general and typically about the model refraining from answering questions about drugs and bioweapons. But in production, you end up having a lot more potential issues.”

White Circle is betting that AI safety will not be solved entirely at the model-training stage. As businesses embed models into more products, Shilov said the relevant question is no longer just whether OpenAI, Anthropic, Google, or Mistral can make their models safer in the abstract; it is whether a healthcare company, bank, legal app, or coding platform can control what an AI system is allowed to do in its own environment.

As companies transition from using chatbots to autonomous AI agents that can write code, browse the web, access files, and take actions on a user’s behalf, Shilov said the risks become much more widespread. For example, a customer service bot might promise a refund that it is not authorized to give, a coding agent might install something dangerous on a virtual machine, or a model embedded in a fintech app might mishandle sensitive customer information.

To avoid these issues, Shilov says companies relying on foundation models need to define and enforce what good AI behavior looks like inside their own products, instead of relying on the AI labs’ safety testing. White Circle says its platform has processed more than one billion API requests and is already used by Lovable, the vibe-coding startup, as well as several fintech and legal companies.

Research led

Shilov said that model providers have mixed incentives to build the kind of real-time control layer White Circle provides.

AI companies still charge for input and output tokens even when a model refuses a harmful request, he said, which reduces the financial incentive to block abuse before it reaches the model. He also pointed to what researchers call the alignment tax, the idea that training models to be safer can sometimes make them less performant on tasks such as coding.

“They have a very interesting choice of training safer
