
OpenAI inks Pentagon deal: Sam Altman secures safety guardrails hours after Trump bans ‘woke’ Anthropic


In a stunning reversal for Silicon Valley’s role in national defence, OpenAI has struck a breakthrough deal with the US Department of War (DoW) to deploy its artificial intelligence models on the military’s classified networks. The agreement, announced by CEO Sam Altman on Friday night, includes exactly the “red line” safety principles that prompted President Donald Trump to effectively ban rival firm Anthropic just hours earlier.

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.

In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of…

No weapons without a human in the loop

Central to the deal are the hard ethical guardrails that OpenAI demanded for the classified deployment. Altman confirmed the Department of War has agreed to two unalterable principles:

No domestic mass surveillance: A blanket ban on using OpenAI models to conduct mass surveillance of US citizens.

No weapons without a human in the loop: A requirement that humans bear responsibility for any use of force, meaning the models must not power fully autonomous weapon systems.

“The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” Altman said, noting the agency showed a “deep respect for safety” throughout the discussions.

Trump vs. Anthropic: The ‘woke’ AI fallout

OpenAI’s triumph caps a whirlwind 24 hours for the AI industry. On Friday, President Trump directed all federal agencies to “immediately cease” using Anthropic’s technology, declaring the startup a “radical left, woke company” after its CEO, Dario Amodei, refused to give the Pentagon unrestricted use of its Claude models.

The Trump administration designated the startup a “supply chain risk” — a label usually reserved for foreign adversaries — after Anthropic declined to abandon its safeguards against mass surveillance and autonomous lethal weapons.

Technical safeguards

To prevent the agreed limits from being bypassed, OpenAI will apply “technical safeguards” of its own — including embedding Forward Deployed Engineers (FDEs) with the military and deploying the models only on secure cloud networks.

Altman said OpenAI will offer the same “reasonable” terms to all AI firms, though he also acknowledged the widening legal and political rift between the White House and Silicon Valley.
