Trump’s AI policy team came into office opposing everything Biden did. Now it’s on the cusp of implementing many of the same policies
When it comes to AI, the Trump Administration has largely positioned itself as the opposite of the Biden White House—criticizing what Trump’s tech policy advisors saw as overly burdensome AI safety efforts and licensing regimes, and embracing an anti-regulation approach. Former Trump “AI and crypto czar” David Sacks best embodied this policy ethos.
But the Trump Administration, according to multiple news reports, is now about to engage in a head-spinning policy pirouette. Driven by concerns about the national security implications of Anthropic’s new “Mythos” AI model, with its ability to identify and exploit cybersecurity vulnerabilities, as well as broader fears about dangerous misuse of advanced cyber capabilities, the administration is reportedly considering oversight for advanced AI models. The policies under discussion include an executive order that would create a government-industry working group to examine how frontier AI systems should be evaluated before release.
At the same time, the Center for AI Standards and Innovation (CAISI) — the Trump administration’s renamed version of the Biden-era United States AI Safety Institute — announced partnerships with Google, Microsoft, and xAI to evaluate some AI models before deployment.
According to an agency press release, CAISI’s agreements with frontier AI developers “enable government evaluation of AI models before they are publicly available, as well as post-deployment assessment and other research.” The agency said it has completed more than 40 such evaluations, including on state-of-the-art models that remain unreleased.
In an interview on Fox Business this morning, White House National Economic Council Director Kevin Hassett said the administration is studying a possible executive order that would create “a clear road map” for how advanced AI systems should be evaluated before release.
“We’re studying possibly an executive order to give a clear road map to everybody about how this is going to go and how future AIs that also could potentially create vulnerabilities should go through a process so that they’re released to the wild after they’ve been proven safe — just like an FDA drug,” Hassett said. “Mythos is the first, but it’s incumbent on us to build a system so U.S. AI can be the leader in AI and be safe at the same time. That’s really pretty much what we’re working on almost full-time right now.”
From criticizing oversight to championing it
The current debate carries with it a strong sense of déjà vu. The original U.S. AI Safety Institute was created by Joe Biden through his November 2023 AI Executive Order, with the goal of helping the federal government evaluate and better understand frontier AI systems from companies like OpenAI, Anthropic, and Google. The order also invoked the Defense Production Act to require companies training the largest AI models to share certain safety testing results with the government.
In other words, the administration that once criticized Biden’s AI oversight efforts is now considering broadly similar policies, despite having rebranded and restructured the original U.S. AI Safety Institute (notably stripping the word “safety” from its name) and seeing its inaugural director, Elizabeth Kelly, step down shortly after Trump’s inauguration in January 2025. (She subsequently joined Anthropic as head of “beneficial deployments,” one of several hires of former Biden officials that may have contributed to the acrimonious relationship between Trump’s tech policy team and Anthropic.)
At the end of April, Chris Fall, who served as an Energy Department official in the first Trump administration, was tapped to lead the rebranded CAISI, with a Commerce Department spokesperson saying “Dr. Fall brings the scientific leadership needed to ensure America leads the world in evaluating frontier AI models and advancing the technical standards that protect our national and economic security.” Fall replaced Collin Burns, a former member of Anthropic’s technical staff, who was dismissed from his position after just days on the job, with unnamed Trump administration officials telling reporters that they had not been informed of Burns’ appointment.
Fall spent nearly four years as vice president for applied sciences at technology research nonprofit MITRE.
“This is a 180 for the Trump administration, that has very explicitly been anti-any sort of regulation and also has explicitly tried to block states from enacting any kind of regulation,” said Rumman Chowdhury, CEO of Humane Intelligence and former U.S. Science Envoy for AI.
A focus on national security risks
Still, the renewed push for evaluations is being framed less around the AI ethics concerns and existential dangers that were a strong focus of the Biden Administration, and more around immediate national security risks.
That backdrop includes the uproar over Anthropic’s Mythos model and a broader shift in Washington toward treating advanced AI as a national security concern.