Anthropic CEO Dario Amodei published a statement Tuesday to "set the record straight" on the company's alignment with the Trump administration's AI policy, responding to what he called "a recent uptick in inaccurate claims about Anthropic's policy stances."
"Anthropic is built on a simple principle: AI should be a force for human progress, not peril," Amodei wrote. "That means making products that are genuinely useful, speaking honestly about risks and benefits, and working with anyone serious about getting this right."
Amodei's response comes after last week's dogpiling on Anthropic from AI leaders and top members of the Trump administration, including AI czar David Sacks and White House senior policy advisor for AI Sriram Krishnan, all accusing the AI giant of stoking fears to damage the industry.
The first hit came from Sacks after Anthropic co-founder Jack Clark shared his hopes and "appropriate fears" about AI, including his view that AI is a powerful, mysterious, "somewhat unpredictable" creature, not a reliable machine that is easily mastered and put to work.
Sacks's response: "Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem."
California senator Scott Wiener, author of AI safety bill SB 53, defended Anthropic, calling out President Trump's "effort to ban states from acting on AI w/o advancing federal protections." Sacks then doubled down, claiming Anthropic was working with Wiener to "impose the Left's vision of AI regulation."
Further commentary ensued, with anti-regulation advocates like Groq COO Sunny Madra saying that Anthropic was "causing chaos for the entire industry" by advocating for a modicum of AI safety measures instead of unfettered innovation.
In his statement, Amodei said managing the societal impacts of AI should be a matter of "policy over politics," and that he believes everyone wants to ensure America secures its lead in AI development while also building technology that benefits the American people. He defended Anthropic's alignment with the Trump administration in key areas of AI policy and called out examples of times he personally played ball with the president.
For example, Amodei pointed to Anthropic's work with the federal government, including the firm's provision of Claude to the government and Anthropic's $200 million agreement with the Department of Defense (which Amodei referred to as "the Department of War," echoing Trump's preferred terminology, though the name change requires congressional approval). He also noted that Anthropic publicly praised Trump's AI Action Plan and has been supportive of Trump's efforts to expand energy provision to "win the AI race."
Despite these shows of cooperation, Anthropic has caught heat from industry peers for stepping outside the Silicon Valley consensus on certain policy issues.
The company first drew ire from Silicon Valley-linked officials when it opposed a proposed 10-year ban on state-level AI regulation, a provision that faced widespread bipartisan pushback.
Many in Silicon Valley, including leaders at OpenAI, have claimed that state AI regulation would slow down the industry and hand China the lead. Amodei countered that the real risk is that the U.S. continues to fill China's data centers with powerful AI chips from Nvidia, adding that Anthropic restricts the sale of its AI services to China-controlled companies despite the revenue hit.
"There are products we will not build and risks we will not take, even if they would make money," Amodei said.
Anthropic also fell out of favor with certain power players when it supported California's SB 53, a light-touch safety bill that requires the largest AI developers to make frontier model safety protocols public. Amodei noted that the bill has a carve-out for companies with annual gross revenue below $500 million, which would exempt most startups from any undue burdens.
"Some have suggested that we are somehow interested in harming the startup ecosystem," Amodei wrote, referring to Sacks's post. "Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering an entirely new generation of AI-native companies. Damaging that ecosystem makes no sense for us."
In his statement, Amodei said Anthropic has grown from a $1 billion to a $7 billion run-rate over the last nine months while managing to deploy "AI thoughtfully and responsibly."
"Anthropic is committed to constructive engagement on matters of public policy. When we agree, we say so. When we don't, we propose an alternative for consideration," Amodei wrote. "We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise."
