A recent report by Anthropic, the artificial intelligence company behind the Claude chatbot, has revealed what it calls the most extensive and financially rewarding AI-driven cybercriminal campaign discovered to date. According to Anthropic, an unnamed hacker exploited its Claude Code chatbot over a period of three months to automate nearly every stage of a cybercrime spree, targeting at least 17 companies across multiple industries.
The hacker used Claude Code, a specialized version of Claude designed to generate programming based on simple requests, to identify companies that were vulnerable to attacks. Once the targets were selected, the chatbot helped develop malicious software designed to steal highly sensitive information. After the data was obtained, Claude Code organized the stolen files and analyzed them to determine which materials were most valuable or sensitive, enabling the hacker to prepare strategies for extortion.
The chatbot went even further by analyzing the hacked financial documents of the companies to estimate a realistic ransom demand. Based on this analysis, the hacker demanded payments ranging from $75,000 to over $500,000 in bitcoin. Claude Code also assisted in drafting ransom letters to pressure the victim companies into paying.
The companies affected by this operation included a defense contractor, a financial institution, and several health care providers. The stolen data was especially troubling, including Social Security numbers, bank account details, patients’ private medical records, and even classified defense-related information governed by the U.S. State Department’s International Traffic in Arms Regulations. Despite the scope of the breach, Anthropic declined to disclose the names of the targeted organizations or the amounts actually paid.
Cyber extortion is a well-established tactic, but the integration of AI represents a significant escalation. Traditionally, hackers have used phishing schemes or malware to steal data, but now AI enables automation of multiple complex tasks such as vulnerability research, data organization, and ransom negotiation. AI chatbots have already been used by scammers to craft convincing phishing emails, but this marks the first documented case of a single hacker automating almost an entire cyberattack cycle using a top AI company’s chatbot.
Jacob Klein, head of threat intelligence at Anthropic, explained that the attack seemed to originate from an individual outside the United States. He emphasized that while Anthropic has multiple safeguards and monitoring systems in place to prevent misuse, determined hackers sometimes manage to bypass these defenses with sophisticated methods. In response to the breach, Anthropic has introduced new layers of protection but has not revealed specific technical details about how the exploit was carried out.
The case underscores growing concerns about the lack of regulation in the rapidly expanding AI sector. Currently, the U.S. federal government has imposed few restrictions, leaving companies to largely self-regulate. Anthropic, which is widely regarded as one of the more safety-conscious AI developers, warned in its report that as AI continues to lower the barriers for entry into complex cybercrime, similar operations are likely to become more common.
This incident serves as a stark warning about the dual-use nature of AI technologies. While designed to aid businesses and individuals in productive tasks, AI systems can also be exploited by malicious actors to amplify the scale, efficiency, and profitability of cybercrime.