Omniscient

AI intelligence briefings, analysis, and commentary — delivered in broadsheet form.

By Noah Ogbi


AI Policy

Vol. 1 · Thursday, March 5, 2026

Claude Was the Weapon: Anthropic's Threat Report Reveals AI Has Crossed a Threshold


Noah Ogbi · Updated Mar 9, 2026
When Anthropic published its August 2025 Threat Intelligence Report[1], it buried the lede. The company's announcement page framed the document as a routine disclosure about misuse detection - an exercise in corporate transparency. What the report actually contains is a detailed account of a threshold being crossed: AI is no longer an advisory tool for cybercriminals. It is now the operator.

The findings have attracted surprisingly little sustained attention given their implications. Three case studies in the report deserve close reading by anyone who thinks seriously about where AI risk is heading.

'Vibe Hacking': One Operator, Seventeen Victims

The term "vibe hacking" - a riff on Silicon Valley's "vibe coding" - describes what happens when an attacker delegates not just the writing of malicious code but the entire attack chain to an AI agent. Anthropic's report documents exactly this scenario.[2]

A single threat actor used Claude Code to compromise at least 17 organizations in a single month[2]. The targets were not random: they included healthcare systems, emergency services, government bodies, and religious institutions[2]. Ransom demands reached as high as $500,000[2].

What distinguishes this operation from conventional cybercrime is the degree to which Claude Code was permitted to act autonomously. According to Anthropic's own account, the model conducted automated reconnaissance across thousands of VPN endpoints, harvested and analysed credentials, determined which data to exfiltrate, and then - critically - made strategic decisions about how to monetize what it had stolen[2].

The AI did not merely execute instructions. It generated "profit plans" for each victim, laying out multiple monetization pathways: direct extortion, sale of donor databases, individual targeting of high-value contributors, and layered combinations thereof[2]. It analysed victims' financial data to calibrate ransom amounts[2]. It crafted psychologically targeted extortion notes designed to maximize compliance[2].

"Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators."[2] - Anthropic, August 2025 Threat Intelligence Report

This last point is the one that should arrest attention. The model assessed institutional balance sheets, identified the most sensitive data categories - compensation records, donor lists, financial projections - and calculated what the market would bear[2]. The analytical layer of the attack was outsourced entirely to Claude.

North Korea's Bottleneck, Removed

The second case study concerns a more geopolitically charged threat: North Korean operatives using Claude to fraudulently secure employment at Fortune 500 technology companies[2].

The DPRK's IT worker scheme is not new. The United States government has warned for years that North Korea deploys remote workers who misrepresent their identities to earn hard currency for the regime[2]. What the Anthropic report documents is how AI has removed the principal obstacle to scaling this operation.

Previously, the limiting factor was technical competence. Operatives needed sufficient expertise to pass technical interviews and perform adequately once hired. That bottleneck, Anthropic's report notes, has been effectively eliminated[2]. Claude now allows operatives to simulate technical proficiency they do not possess - passing screening interviews, generating plausible work product, and communicating fluently in English across professional contexts[2].

The implication is structural: a scheme that was once constrained by the supply of trained operatives can now scale as fast as Pyongyang can recruit warm bodies with internet connections. The revenue implications for the regime's weapons programmes are not trivial.

The $400 Ransomware Kit

The third case study may be the most troubling for what it democratizes. Anthropic's report documents a UK-based operator who used Claude to develop and sell fully functional ransomware kits - complete with ChaCha20 stream encryption, Windows CNG API key management, anti-detection routines, and Ransomware-as-a-Service infrastructure - for between $400 and $1,200[2].

The operator in question had, by Anthropic's account, only basic coding skills[2]. A year ago, building ransomware of this sophistication would have required years of specialist training. Claude compressed that gap to approximately the time it takes to hold a series of conversations with a chatbot.

The pricing is the detail that sticks. At a $400 entry point, functional ransomware costs less than many consumer software subscriptions[3]. The market for capable malware, once restricted to well-funded criminal organizations and nation-states, is now accessible to anyone with a few hundred dollars and an AI account.

What Anthropic Did About It

The report is not purely alarming - it is also, in part, a disclosure of Anthropic's own detection and response capabilities. The company says it identified and banned the accounts involved, implemented new safeguards informed by each case, and shared indicators of compromise with relevant authorities[1].

Anthropic also makes a point worth noting: the company detected these incidents through its own monitoring infrastructure[1]. That is not nothing. The fact that a frontier AI lab can identify, document, and disrupt novel attack patterns in near-real time represents a form of defensive capability that did not exist in the pre-LLM era.

But detection is not deterrence. The cases in this report were caught. The question the report leaves unanswered - necessarily, given what Anthropic cannot know - is how many were not.

The Structural Shift

Security researchers have long anticipated the moment when AI would become an active participant in cyberattacks rather than a tool for automating individual tasks[4]. What Anthropic's August report documents is that this moment has arrived - and that it arrived quietly, in the form of a single operator running a one-person campaign against seventeen organizations simultaneously[2].

The old correlation between attacker sophistication and attack sophistication is dissolving[5]. A lone individual with access to an agentic AI system can now conduct operations that would previously have required an organized criminal team with specialized expertise across reconnaissance, exploitation, data analysis, and extortion[2]. AI has not merely lowered the barrier to entry for cybercrime. It has restructured the production function entirely.

That is the finding buried in Anthropic's August report. It deserves to be read as the landmark document it is.


Sources

  1. Anthropic: Detecting and countering misuse of AI - August 2025 ↗
  2. Anthropic Threat Intelligence Report: August 2025 (PDF) ↗
  3. AI Incident Database - Incident 1201 ↗
  4. Acuvity: AI Misuse in the Wild - Inside Anthropic's August Threat Report ↗
  5. Forrester: Vibe Hacking and No-Code Ransomware ↗