Anthropic's Latest Court Filing Reveals Behind-the-Scenes Negotiations Preceding the Pentagon's Decision to Cut Cooperation
Phoenix Technology News, March 21 — According to TechCrunch, AI company Anthropic has formally filed an affidavit in federal court in California, rebutting the Pentagon's allegation that it poses a "national security risk." The lawsuit, triggered by the U.S. government's unilateral decision to sever cooperation, continues to surface internal details. The latest court documents show that the two sides were in fact close to reaching an agreement before relations fully broke down.
According to documents submitted by Anthropic Policy Director Sara Hek, the Pentagon's claims in court that "Anthropic demands approval authority for military operations," along with its concerns about a "potential mid-operation shutdown of the technology," were never raised during the months of negotiations preceding the dispute. More strikingly, on the day after the Department of Defense officially designated the company a supply chain risk (March 4), Deputy Secretary of Defense Emily M. Mikkel emailed Anthropic's CEO stating plainly that the two sides were "very close" to reaching consensus on the two core issues: autonomous weapons and large-scale surveillance targeting U.S. citizens. This stands in stark contrast to the tough stance the U.S. government later conveyed to the public.
Regarding the technical security concerns, Anthropic Public Sector Head Thiago Ramosami issued a point-by-point rebuttal in a statement. He noted that once the large AI model Claude is deployed in government systems operated by third-party contractors, Anthropic has no access to it, and there are no remote shutdown switches or backdoors, making it technically impossible for the company to interfere with military operations. In response to the risk allegations concerning foreign employees, the documents emphasize that all staff involved in building models for classified environments have passed U.S. government security clearance checks.
In the lawsuit, Anthropic maintains that this first-ever U.S. supply chain risk designation against a domestic company is, in essence, First Amendment retaliation for publicly expressing its views on AI safety. Earlier this week, the U.S. government denied this in a 40-page filing, asserting that excluding Anthropic was purely a national security decision, not punishment for its speech. A hearing in the case is scheduled for March 24 in San Francisco.
(Edited by: He Chong)