San Francisco – A federal judge on Tuesday expressed skepticism about the government’s actions against AI powerhouse Anthropic, which has sued over its designation as a national security risk following a dispute over military applications.
Anthropic filed a lawsuit this month in San Francisco seeking to overturn the designation, which is typically reserved for organizations from unfriendly foreign countries and could seriously handicap its popular AI model Claude.
Anthropic had refused to let the Pentagon use Claude to pursue autonomous lethal warfare and mass surveillance of Americans, and the government rejected the safeguards the company proposed.
Claude is the Pentagon’s most widely deployed frontier AI model and the only such model currently operating on its classified systems.
After a short hearing, Judge Rita Lin said she would issue her ruling “in a few days.”
But she noted that “what is troubling to me about these reactions” by the government “is that they don’t really seem to be tailored to the stated national security concern.”
If the military was not happy with the Anthropic contract, the government “could just stop using Claude,” she said.
“It looks like defendants went further than that because they were trying to punish Anthropic…for criticizing the government’s contracting position in the press,” she added, “which of course would be a violation of the First Amendment.”
She also questioned a claim by a US government lawyer during the hearing that Anthropic’s ethical fears raised the risk that the company could intervene in military operations.
“I’m just wondering why it is that someone questioning the way things work leads to suspicion that they might build a back door,” Lin said.
Source: AFP

