Anthropic's Latest Court Filing Reveals the Behind-the-Scenes Negotiations That Preceded the Pentagon's Decision to Cut Cooperation

Phoenix Technology News, March 21 — According to TechCrunch, AI company Anthropic has filed a sworn declaration in federal court in California refuting the Pentagon's allegation that the company poses a "national security risk." The lawsuit, triggered by the U.S. government's unilateral decision to sever cooperation, continues to surface internal details. The latest court documents show that the two sides were in fact close to reaching an agreement before relations broke down entirely.

According to documents submitted by Anthropic Policy Director Sara Hek, the Pentagon's courtroom claims that "Anthropic demands approval authority over military operations," along with its concerns about a "potential mid-operation shutdown of the technology," were never raised during the months of negotiations that preceded the dispute. More dramatically, on the day after the Department of Defense officially listed the company as a supply chain risk (March 4), Deputy Secretary of Defense Emily M. Mikkel emailed Anthropic's CEO stating plainly that the two sides were "very close" to consensus on the two core issues: autonomous weapons and large-scale surveillance targeting U.S. citizens. This stands in stark contrast to the hard line the U.S. government later took in public.

On the technical security concerns, Anthropic's head of public sector, Thiago Ramosami, offered a technical rebuttal in a statement. He pointed out that once the large AI model Claude is deployed in government systems operated by third-party contractors, Anthropic has no access to those systems, and that there is no remote kill switch or backdoor, making it technically impossible for the company to interfere with military operations. Addressing the alleged risk of employing foreign staff, the documents emphasize that all employees involved in building models for classified environments have passed U.S. government security clearance checks.

In the lawsuit, Anthropic maintains that this designation, the first time the U.S. has labeled a domestic company a supply chain risk, is in essence First Amendment retaliation for publicly expressing its views on AI safety. Earlier this week, the U.S. government denied the charge in a 40-page filing, asserting that excluding Anthropic was purely a national security decision, not punishment for its speech. A hearing in the case is scheduled for March 24 in San Francisco.

(Edited by: He Chong)

【Disclaimer】This article reflects only the author’s personal views and is not related to Hexun.com. Hexun.com maintains neutrality regarding the statements and opinions expressed in this article and does not provide any explicit or implied guarantees regarding the accuracy, reliability, or completeness of the content. Readers are advised to use this information for reference only and to bear all responsibilities themselves. Email: news_center@staff.hexun.com
