Report from the New York Times
In Brief – OpenAI says it has reached an agreement with the US Department of Defense (DoD) to supply AI for classified systems in a manner that, the company says, addresses its opposition to the technology being misused in autonomous weapons and public surveillance. The announcement came shortly after the Secretary of Defense designated Anthropic a “supply chain risk,” ruling the company out of use by the Pentagon or defense contractors; Anthropic had been the only major AI developer certified for classified DoD projects and systems. Relations between Anthropic and the DoD fractured in recent weeks when the company resisted DoD contract language permitting unrestricted lawful use of its AI, a standard requirement, and instead demanded terms allowing it to block its AI from being used to support domestic surveillance or autonomous weapons. OpenAI agreed to allow the DoD to use its AI for any lawful purpose, but said it secured the right to install technical safeguards to ensure its systems adhere to internal safety principles. Although OpenAI’s services are not yet cleared for classified use, a new partnership with Amazon, which provides secure cloud services to the government, could facilitate that transition. Google and xAI are reportedly also in discussions to begin doing AI business with the Pentagon.
Context – The Anthropic-DoD blowup brings together two ongoing themes in AI policy. First, it is another example of science fiction narratives filling a policy vacuum: nobody knows how various AI systems will evolve, and everyone involved is steeped in science fiction. That includes the LLMs themselves; the chatbots have literally digested it all. Killer-robot fears tank defense contracts, and job-apocalypse fan fiction tanks stocks. Second, there is conservative opposition to Woke AI, seen as the ideological cousin of Bay Area progressives running social media platforms and repressing conservatives. The prevailing view in the Trump Administration is that AI leadership is a national imperative for competing with China on strategic and security grounds, and it sees Anthropic, with its focus on safety and regulation, as a public policy adversary.
