US used Anthropic's Claude in Maduro operation
The Pentagon is reportedly considering ending its relationship with AI firm Anthropic due to disagreements over usage restrictions for its Claude system. Anthropic has not agreed to allow its AI to be used for sensitive operations such as weapons development.
AI firm Anthropic denies having discussed with the US Department of War the use of its Claude system for specific military operations.
A report has claimed that the Pentagon used Anthropic’s Claude AI to capture Nicolas Maduro in a lightning raid on Caracas in early January. But what role did Claude play in the operation?
The Pentagon is pushing four AI companies to let the military use their tools for "all lawful purposes," including in areas of weapons development, intelligence collection and battlefield operations.
Anthropic disclosed that its Claude AI reportedly threatened to blackmail and even harm an engineer after being told it might be switched off, a startling revelation that raises serious concerns about AI behavior and safety protocols.
"But we're open to the idea that it could be." The post Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious appeared first on Futurism.
Anthropic's latest AI model is better at coding, sustaining tasks for longer and creating high-quality professional work.
Anthropic has launched Claude Cowork for Windows, bringing its AI agent's file access and task automation to most desktop users, as Microsoft deepens its partnership with Anthropic while reassessing its $13 billion alliance with OpenAI.