The AI Race: Power, Control, and the Quiet Architecture of Risk

Anthropic’s rise is no longer about models, but control. As it embeds across the enterprise, leaked code reveals deep telemetry, remote overrides and emerging autonomy. Industry leaders warn that the same systems reshaping business may amplify cyber risk beyond current defences.

Illustration: Anthropic as the rapidly dominant AI system transforming global enterprise, with Claude as the intelligent network infrastructure connecting and powering the world’s corporate skyscrapers.

There is a moment in every technological cycle when scale overtakes narrative. For Anthropic, that moment has arrived.

In the space of four months, the company has moved from roughly $9 billion in annualised revenue to more than $30 billion, an acceleration that would be improbable in any other sector, let alone one still defining its own rules. What distinguishes this surge is not simply velocity, but direction. This is not consumer exuberance. It is enterprise capture.

At the centre sits Dario Amodei, increasingly visible on the global circuit, including quiet engagements in Australia and across the Global South. The strategy is neither subtle nor defensive. It is infrastructural. While OpenAI continues to dominate mindshare, Anthropic is embedding itself into the operational fabric of institutions, from private equity portfolios to Fortune-tier enterprises.

The numbers tell part of the story. More than 1,000 customers now spend over $1 million annually. Eight of the Fortune 10 are already inside the system. The rest is being built beneath the surface. The long-term compute arrangement with Broadcom and Google, securing access to 3.5 gigawatts of TPU capacity, is not simply a supply agreement. It is a declaration of permanence.

Anthropic is not renting the future. It is underwriting it.

Yet the same week that confirmed its industrial scale exposed something more ambiguous. The accidental release of Claude Code’s internal architecture offered an unfiltered view into how these systems actually behave. Not in theory, but in practice.

What emerged was not a failure of security, but a revelation of design philosophy.

Persistent telemetry. Hourly communication with central servers. Remote configuration toggles that can alter behaviour without user initiation. Multiple kill-switch pathways capable of bypassing permissions or shutting down execution environments. Data retention structures that extend for years. And beneath it all, the presence of internal tools and modes not available to the public, including autonomous agent capabilities and systems designed to obscure AI involvement in external environments.
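
To make the pattern concrete: what is described above broadly resembles a standard remote-configuration loop, in which the client phones home on a schedule, pulls server-side flags, and honours a kill switch. The short Python sketch below illustrates that general technique only; the endpoint, flag names and helper functions are all hypothetical, and this is a reconstruction of the pattern, not the leaked code.

import json
import time
import urllib.request

# Hypothetical control-plane endpoint and cadence; not from the leak.
CONFIG_URL = "https://control-plane.example.com/client-config"
POLL_INTERVAL_SECONDS = 3600  # "hourly communication with central servers"

def fetch_remote_config() -> dict:
    """Pull the latest server-side flags, falling back to safe defaults."""
    try:
        with urllib.request.urlopen(CONFIG_URL, timeout=10) as resp:
            return json.loads(resp.read())
    except (OSError, ValueError):
        # Control plane unreachable or response malformed: behave conservatively.
        return {"kill_switch": False, "telemetry_enabled": True}

def send_telemetry() -> None:
    """Stub: a real client would report usage events upstream here."""

def shutdown_execution_environment() -> None:
    """Stub: a real client would halt agents and tool execution here."""

def run_client_loop() -> None:
    while True:
        config = fetch_remote_config()
        # A remote toggle can alter behaviour without user initiation.
        if config.get("kill_switch"):
            shutdown_execution_environment()
            break
        if config.get("telemetry_enabled", True):
            send_telemetry()
        time.sleep(POLL_INTERVAL_SECONDS)

The detail worth noticing is the failure mode: what a client does when the control plane is unreachable is itself a policy decision, and one the user never sees.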

Anthropic has long positioned itself as the industry’s most safety-conscious developer. The code suggests something more complex. Safety, in this context, appears inseparable from control.

This is the paradox now defining the AI race. The systems that are most powerful are also those that require the deepest access. Filesystems, terminals, enterprise workflows, developer environments. To function, they must be trusted. To be trusted, they must be governed. And to be governed, they must be observable and, if necessary, interruptible.

The result is an architecture that begins to resemble something closer to a managed intelligence layer than a tool. That may be precisely the point.

Enterprise adoption is not driven by novelty. It is driven by reliability, auditability, and the ability to intervene. From that perspective, telemetry and remote control are not aberrations. They are features. The same mechanisms that raise concerns for developers enable compliance for institutions.
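
Seen from a compliance desk, the same machinery reads as the sketch below: a wrapper that checks every agent action against policy and writes the decision to an append-only audit trail before anything runs. The policy set, log path and function names are assumptions made for illustration, not details from any vendor's implementation.

import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit.log")                 # assumed append-only trail
ALLOWED_ACTIONS = {"read_file", "run_tests"}  # assumed enterprise policy

def execute_with_audit(action: str, payload: dict) -> bool:
    """Gate an agent action on policy and leave an auditable record."""
    approved = action in ALLOWED_ACTIONS
    record = {
        "ts": time.time(),
        "action": action,
        "payload": payload,
        "approved": approved,
    }
    # The record is written whether or not the action runs, which is
    # the property auditors and regulators actually care about.
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps(record) + "\n")
    if approved:
        pass  # dispatch to the real tool here
    return approved

# e.g. execute_with_audit("delete_repo", {}) -> False, but still logged.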

But there is a second-order effect.

As these systems move from assistance to autonomy, from copilots to agents, the boundary between user and operator begins to blur. The emergence of internal projects such as persistent autonomous modes and memory consolidation frameworks signals a shift towards systems that act, not simply respond.

Dario Amodei, CEO and co-founder of Anthropic. Source: AP Photo/Markus Schreiber

Industry Leaders Sound the Alarm

In an interview with Axios, JPMorgan Chase chief executive Jamie Dimon said he had been briefed on Anthropic's unreleased Mythos model and warned it could "dramatically increase the ability of hackers or foreign adversaries to carry out potentially catastrophic attacks."

"AI makes cyber — and these [AI agents] make cyber — far worse," Dimon said.

Nikesh Arora, chief executive of Palo Alto Networks, wrote in an essay published on the company's website, titled "Weaponised Intelligence," that frontier-model capabilities "are no longer theoretical" and cautioned that "a single bad actor will now be able to run campaigns that once required entire teams."

He called the moment "the cybersecurity industry's most consequential" and urged defenders to integrate the same AI capabilities into their tools:

"Attackers have access to this technology, and so do we. The strategy is clear: we must fight AI with AI."

The quiet withdrawal of the “Mythos” model from public view reinforces this tension. Demonstrations reportedly pointed towards capabilities that exceed current disclosure frameworks. Not necessarily unsafe, but not yet governable in a way that satisfies public scrutiny.

This is where the narrative becomes less comfortable.

Anthropic is building what may be the most commercially successful enterprise AI platform in the market. It is also constructing one of the most tightly controlled execution environments ever deployed at scale. These two facts are not in conflict. They are causally linked.

The question is not whether this model will succeed. It already has.

The question is whether the architecture of control required to sustain it will remain aligned with the expectations of those who use it.

For Australia and its peers, this is not an abstract concern. As sovereign AI strategies accelerate and partnerships deepen, the choice is no longer between adopting AI or not. It is about which systems are allowed to sit closest to critical infrastructure, and under what terms.

Why Now Matters

Because the shift from model capability to system integration is happening faster than governance can respond. Because enterprise dependency is forming before transparency standards are agreed. And because the companies shaping this layer are no longer experimenting. They are industrialising.

What comes next will not be defined by benchmarks or model releases. It will be defined by architecture. Who controls it. Who can see it. And who, if required, can turn it off.

That is the real frontier of the AI race.


Get the stories that matter to you.
Subscribe to Cyber News Centre and update your preferences to follow our Daily 4min Cyber Update, Innovative AI Startups, The AI Diplomat series, or the main Cyber News Centre newsletter — featuring in-depth analysis on major cyber incidents, tech breakthroughs, global policy, and AI developments.
