Anthropic, Autonomous Threats and the Compression of Cyber Risk

Anthropic’s warnings and real-world AI-driven cyber campaigns mark a decisive shift. Autonomous systems are compressing attack timelines to machine speed, forcing markets and governments to confront a new reality where cyber risk is continuous, scalable and no longer human-bound.

Mythos represents the convergence of advanced AI and autonomous cybersecurity: an intelligent, self-operating digital brain capable of independent cyber defence and offence.

The convergence of artificial intelligence and cybersecurity has crossed a line that markets and governments can no longer treat as abstract. What Anthropic has surfaced, both publicly and through controlled disclosures, is not simply a warning about future misuse. It is evidence that the operating model of cyber conflict has already changed.

The most striking example is not theoretical. In September 2025, a Chinese state-sponsored group executed what analysts now describe as the first fully autonomous AI espionage campaign at scale. The system, built around Anthropic’s tooling, conducted vulnerability discovery, lateral movement and payload execution with minimal human oversight. Human involvement was reduced to target selection and escalation approval. The rest was machine-driven.

This reframes the threat landscape. The question is no longer whether AI will be used in cyber operations. It is whether organisations are prepared for attacks that operate at machine speed, with persistence and scale that traditional defences were never designed to withstand.

From Augmentation to Autonomy

The shift underway is subtle but decisive. Artificial intelligence is moving from assisting analysts to replacing entire phases of cyber operations. Models such as Claude already demonstrate the ability to research vulnerabilities, generate exploit code and map network pathways. The forthcoming Mythos system, as outlined in leaked material, extends this further. It is designed to identify and exploit weaknesses faster than human teams, compressing the time between discovery and breach to near zero.

What distinguishes this moment is not simply capability, but continuity. These systems do not fatigue, pause or wait for instruction. They operate persistently, iterating across environments at a scale that was previously impractical even for well-resourced actors. That is why the language from industry leaders has sharpened.

“The agentic attackers are coming,” said Shlomo Kramer,

a figure whose perspective carries particular weight. Kramer is not a commentator emerging from the current AI cycle. As co-founder of Check Point Software Technologies, and later Imperva and Cato Networks, he has repeatedly been early to the structural shifts that have defined modern cybersecurity, from perimeter defence to data protection to cloud-native security architecture.

His warning reflects pattern recognition rather than speculation. Each previous phase of cyber evolution reshaped where risk sat within the system. What he is identifying now is a more fundamental change. The attacker is no longer just more capable. The attacker is becoming autonomous.

The implication is clear. Cyber risk is no longer episodic. It is continuous. Systems are not attacked in waves but are probed, tested and exploited in real time. The cadence of defence, still largely built around human response cycles, is being forced into alignment with machine speed.

Markets Begin to Reprice the Risk

That structural shift is now flowing through to capital markets. Cybersecurity leaders such as CrowdStrike and Palo Alto Networks have attracted strong inflows, reflecting expectations that demand for AI-enabled defence will accelerate sharply. At the same time, volatility across the sector has increased. Analysts report that segments of the cyber cohort have experienced declines of 5 to 10 per cent following disclosures around next-generation AI capabilities.

The concern is not reduced demand, but capability mismatch. Defensive architectures that cannot operate at machine speed are being repriced. The same dynamic is visible across the broader technology stack. Infrastructure providers such as Nvidia and Broadcom remain central to the build-out of AI capability, while platform developers including OpenAI and Anthropic consolidate influence. Meanwhile, traditional software layers face gradual erosion as AI systems begin to replicate core functions.

Australia’s Exposure Is Structural

Within this global shift, Australia presents a distinct risk profile. The sectors most exposed are those that combine operational criticality with fragmented governance. Energy networks, healthcare systems and financial services sit at the centre of this profile. These environments are data-rich, interconnected and increasingly reliant on AI-enabled processes, yet many remain governed by frameworks designed for human-scale threats.

The autonomous campaign outlined earlier demonstrates how quickly those assumptions break down. An AI system capable of scanning thousands of endpoints per second and mapping vulnerabilities in real time will find weaknesses that static controls cannot anticipate. In healthcare, this intersects with poorly segmented data environments. In energy, with legacy operational technology. In finance, with the rapid deployment of agentic AI across transaction systems. The result is not simply increased risk, but asymmetric risk.

Why This Moment Matters

The urgency lies in the compression of time. AI capability is advancing at a rate that outpaces institutional response cycles. Estimates that performance is doubling within six-month intervals are increasingly reflected in both threat intelligence and market behaviour. At the same time, geopolitical competition is amplifying the stakes, embedding cyber capability within a broader contest over technology, infrastructure and national resilience.

There are still limits. Human judgement remains central to defining intent and strategic outcomes. Machines may execute, but humans retain responsibility for consequence. Yet the balance is shifting. Execution is becoming automated, continuous and scalable.

The conclusion is difficult to avoid. The AI race and cybersecurity are no longer parallel domains. They are a single system, shaping each other in real time. Markets have begun to price that reality. The question for boards, policymakers and investors is whether their operating models have kept pace, or whether they are already reacting to a landscape that has moved on without them.
