Anthropic’s Claude Used in US Kidnapping Operation Targeting Maduro
AI model reportedly deployed through Pentagon-linked systems during operation targeting President Nicolás Maduro
Venezuela | PUREWILAYAH.COM — The artificial-intelligence model Claude, developed by Anthropic, was reportedly used in a United States military operation in Venezuela that involved the kidnapping of President Nicolás Maduro, according to people familiar with the matter cited by The Wall Street Journal.
The reported operation included the abduction of Maduro and his wife, alongside the bombing of multiple sites in Caracas last month. The incident has intensified debate over the expanding role of artificial intelligence in US military actions and covert operations abroad.
AI Use Contradicts Anthropic’s Own Restrictions
Anthropic’s published usage policies explicitly prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. Despite these restrictions, the AI model was reportedly deployed as part of a US operation targeting the Venezuelan head of state.
“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson told The Wall Street Journal.
The spokesperson added that any use of Claude must comply with the company’s usage policies and that Anthropic works with partners to ensure adherence.
The US Department of War declined to comment when questioned by the newspaper.
Role of Palantir in Deployment
Sources cited by The Wall Street Journal said the deployment of Claude occurred through Anthropic’s partnership with Palantir Technologies, whose data-integration platforms are widely used by the Pentagon and federal law-enforcement agencies.
The partnership reportedly enabled Claude’s integration into US defense systems, raising questions over how commercial AI tools are being embedded within military operations.
Contract Under Threat Amid Internal Pentagon Concerns
Anthropic’s concerns over how its AI model is being used by the Pentagon have reportedly prompted US officials to consider canceling a contract worth up to $200 million.
In comments to Axios, a source said Anthropic had asked whether its software was used in the raid to capture Maduro, triggering unease within the Department of War.
The source warned that any company questioning such operations could be seen as jeopardizing “the operational success of warfighters in the field,” suggesting that the partnership itself could be reevaluated.
Claude’s Precedent in Classified Military Operations
Anthropic was the first AI model developer whose technology was reportedly used in classified operations by the US Department of War. While it remains possible that other AI tools were employed for unclassified tasks during the Venezuela operation, the report highlights how AI systems are now deeply embedded across military functions — from document analysis to the control of autonomous drones.
The reported use of Claude underscores the accelerating integration of AI into US military strategy, a development seen by technology firms as crucial for market legitimacy and valuation.
Anthropic Chief Executive Dario Amodei has publicly warned about the risks posed by AI, particularly in autonomous lethal operations and domestic surveillance — two issues that have become central points of tension in the company’s negotiations with the Pentagon.
These usage restrictions have reportedly escalated Anthropic’s broader dispute with the Trump administration, which favors minimal regulation and has accused the company of undermining its AI strategy by calling for stricter guardrails and limits on AI chip exports.
Broader Militarization of Commercial AI
Amodei and Anthropic’s co-founders previously worked at OpenAI, whose models, alongside Google’s Gemini, have since been incorporated into AI platforms used by military personnel. According to official statements, those systems are used for analyzing documents, generating reports, and supporting research.
The Venezuela operation, however, has placed renewed scrutiny on how US military agencies are leveraging commercial AI tools in direct actions against sovereign states and their leadership. (PW)