The former Facebook insider building content moderation for the AI era
When Brett Levenson left Apple in 2019 to lead business integrity at Facebook, the social media giant was in the thick of the Cambridge Analytica fallout. At the time, he thought he could simply fix Facebook’s content moderation problem with better technology.
The problem, he quickly learned, ran deeper than technology. Human reviewers were expected to memorize a 40-page policy document that had been machine-translated into their language, he said. Then they had about 30 seconds per piece of flagged content to decide not just whether that content violated the rules, but what to do about it: block it, ban the user, limit the spread. Those quick calls were only “slightly better than 50% accurate,” according to Levenson.
“It was kind of like flipping a coin, whether the human reviewers could actually address policies correctly, and this was many days after the harm had already occurred anyway,” Levenson told TechCrunch.
That sort of delayed, reactive approach is not sustainable in a world of nimble and well-funded adversarial actors. The rise of AI chatbots has only compounded the problem, as content moderation failures have resulted in a string of high-profile incidents, like chatbots providing teens with self-harm guidance or AI-generated imagery evading safety filters.
Levenson’s frustration led to the idea of “policy as code”: turning static policy documents into executable, updatable logic tightly coupled to enforcement. That insight led him to found Moonbounce, which on Friday announced a $12 million funding round co-led by Amplify Partners and StepStone Group, TechCrunch has exclusively learned.
Moonbounce works with companies to provide an additional safety layer wherever content is generated, whether by a user or by AI. The company has trained its own large language model to read a customer’s policy documents, evaluate content at runtime, return a response in 300 milliseconds or less, and take action. Depending on customer preference, that action could mean slowing a piece of content’s distribution while it awaits human review, or blocking high-risk content outright.
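The “policy as code” idea can be illustrated with a minimal sketch: a written policy becomes a list of executable rules checked against classifier scores at runtime, each rule mapped to an enforcement action. All of the names, labels, and thresholds below are hypothetical; Moonbounce’s actual system is LLM-based and proprietary.

```python
# Hypothetical "policy as code" sketch: policy clauses become executable
# rules evaluated per piece of content, instead of a 40-page PDF a human
# reviewer must recall in ~30 seconds.
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    SLOW_DISTRIBUTION = "slow_distribution"  # hold spread pending human review
    BLOCK = "block"                          # stop the content in the moment

@dataclass
class PolicyRule:
    label: str        # e.g. "self_harm", "harassment" (illustrative labels)
    threshold: float  # model confidence required to trigger the rule
    action: Action    # enforcement action when triggered

def evaluate(scores: dict[str, float], rules: list[PolicyRule]) -> Action:
    """Apply rules in priority order; the first triggered rule wins."""
    for rule in rules:
        if scores.get(rule.label, 0.0) >= rule.threshold:
            return rule.action
    return Action.ALLOW

# A two-clause "policy document" expressed as code.
rules = [
    PolicyRule("self_harm", 0.85, Action.BLOCK),
    PolicyRule("harassment", 0.60, Action.SLOW_DISTRIBUTION),
]

print(evaluate({"self_harm": 0.91}, rules))   # Action.BLOCK
print(evaluate({"harassment": 0.70}, rules))  # Action.SLOW_DISTRIBUTION
print(evaluate({}, rules))                    # Action.ALLOW
```

Because the policy lives as data rather than prose, updating it means editing the rule list, not retraining reviewers on a new document.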
Today, Moonbounce serves three main verticals: platforms dealing with user-generated content, like dating apps; AI companies building characters or companions; and AI image generators.
Moonbounce is supporting more than 40 million daily reviews and serving over 100 million daily active users on the platform, Levenson said. Customers include AI companion startup Channel AI, image and video generation company Civitai, and character roleplay platforms Dippy AI and Moescape.
“Safety can actually be a product benefit,” Levenson told TechCrunch. “It just never has been because it’s always a thing that happens later, not a thing you can actually build into your product. And we see our customers are finding really interesting and innovative ways to use our technology to make safety a differentiator, and part of their product story.”
Tinder’s head of trust and safety recently explained how the dating platform uses these types of LLM-powered services to achieve a 10x improvement in detection accuracy.
“Content moderation has always been a problem that plagued large online platforms, but now with LLMs at the heart of every application, this challenge is even more daunting,” Lenny Pruss, general partner at Amplify Partners, said in a statement. “We invested in Moonbounce because we envision a world where objective, real-time guardrails become the enabling backbone of every AI-mediated application.”
AI companies are facing mounting legal and reputational pressure after chatbots have been accused of pushing teenagers and vulnerable users toward suicide, and image generators like xAI’s Grok have been used to create nonconsensual nude imagery. Internal safety guardrails are clearly failing, and that failure is becoming a liability question. Levenson said AI companies are increasingly looking outside their own walls for help beefing up their safety infrastructure.
“We’re a third party sitting between the user and the chatbot, so our system isn’t inundated with context the way the chat it