AI Upstart Listed as "Supply Chain Risk," Is the Pentagon Dividing American Tech Companies?


Source: Global Times

[Global Times Report by Ni Hao] After the U.S. Pentagon labeled the emerging AI startup Anthropic a “supply chain risk,” Silicon Valley’s major tech giants have begun to diverge along the lines of their own interests. Microsoft publicly supported Anthropic’s lawsuit against the Pentagon, while Google used the conflict between Anthropic and the Pentagon as an opportunity to deepen its own foothold in the department.

The Financial Times reported on the 11th that on Tuesday, Microsoft publicly backed Anthropic, becoming the first tech giant to take sides in the dispute between Anthropic and the U.S. Department of Defense. In court documents, Microsoft warned that the “extreme” and “unprecedented” actions against this AI startup could have a “broad negative impact” on the U.S. tech industry. Microsoft requested a temporary restraining order to prevent the Department of Defense’s decision to list Anthropic as a “supply chain risk” from taking effect during the case.

“This conflict has caused a split in Silicon Valley.” The Financial Times noted that since the current U.S. administration took office, Silicon Valley tech giants have been very cautious to avoid openly confronting it.

According to reports from Forbes and other media outlets, Anthropic’s competitors see its predicament as an opening to break into the government market. Just one day after Anthropic sued the U.S. government on Monday, Google announced that its newly developed AI agents would be deployed in the Pentagon’s office environments, serving about 3 million military and civilian personnel with tasks such as meeting minutes and task planning, outside of classified work. There are also reports that negotiations are underway to expand the deployment to classified and top-secret environments.

Google is not the first company to expand cooperation with the Department of Defense after this conflict arose. Previously, after OpenAI was “blocked” by the Department of Defense, it quickly announced a cooperation agreement with the Pentagon, claiming that the agreement “has more security safeguards than any previous AI deployment agreement,” but this move was met with strong market backlash. Since the announcement of the partnership, OpenAI’s ChatGPT uninstall rates have surged.

Notably, some employees of OpenAI and Google have joined the camp opposing the Pentagon, stating that “the U.S. government is trying to sow fear to divide AI companies.”

Anthropic had been the only AI supplier operating within the Pentagon’s classified cloud environment until February 27, when it was designated a “supply chain risk,” a rare measure usually applied only to foreign competitors. On Monday, Anthropic filed a lawsuit against the Pentagon, claiming that the department’s actions are “unprecedented and illegal” and have caused “irreparable harm” to the company.

Founded in 2021 by former OpenAI executives, Anthropic has become one of the fastest-growing tech startups in the U.S., with a valuation of $380 billion. The conflict between the two parties erupted in February this year. According to The New York Times, Anthropic drew two red lines in its contract with the Pentagon: opposing the use of its AI for mass surveillance of Americans, and opposing its deployment in autonomous weapons with no human involvement. The Associated Press reported that U.S. Secretary of Defense Lloyd Austin issued an ultimatum to Anthropic in February, demanding the company lift all restrictions and allow the military to use its AI for “all lawful purposes,” but Anthropic refused. On February 27, the day the Pentagon listed Anthropic as a “supply chain risk,” former President Trump announced that he had ordered all federal agencies to immediately cease using Anthropic’s technology. On March 9, local time, Anthropic formally sued the U.S. government.

However, Reuters reported on the 12th that the Pentagon is easing restrictions on Anthropic. According to an internal memo recently leaked, if certain AI tools are deemed critical to U.S. national security, the Pentagon will allow some units to retain and use Anthropic’s products after the original six-month phase-out period. Analysts believe this reflects that most Pentagon suppliers find it difficult to exclude Anthropic from their supply chains. Reuters noted that although the Pentagon quietly opened a waiver channel, the memo still prioritizes removing Anthropic’s products from systems supporting critical missions, such as nuclear weapons and missile defense systems.

Dr. Brianna Rosen, Executive Director of the Network and Technology Policy Program at Oxford University’s Blavatnik School of Government, said that this dispute is widely seen as a conflict between AI ethics and national security, exposing the long-standing governance gaps in military AI applications. It also reflects that business contract mechanisms can no longer replace governance frameworks capable of adapting to AI’s use in warfare.

Nada Sanders, Professor of Supply Chain Management at Northeastern University, stated that given Anthropic’s record of cooperation with the Pentagon and its status as the first AI company to supply large language model technology for government classified networks, its designation as a “supply chain risk” amounts to a serious and unprecedented punishment for a U.S. company. She noted that labeling U.S. AI companies in this way, especially when it appears to be retaliation for their negotiating positions, could hinder innovation: companies might hesitate to develop such protective technologies if safety or ethical measures risk excluding them from government markets.
