The Digital Siege Chapter 2: The AI Arms Race - When Silicon Valley Billionaires Weaponize Tomorrow

As tech moguls pour billions into AI, cybercriminals are close behind—building dark AI like WormGPT to automate attacks. Chapter 2 of The Digital Siege reveals how this escalating arms race, fueled by open-source models and geopolitical tension, is redefining global cybersecurity.


The human error catastrophes we witnessed in Chapter 1—from M&S's £300 million supplier breach to the systematic exploitation of trust—represent just the tip of the iceberg. What we're really looking at is the opening salvo in what can only be described as the ultimate arms race of our time. While businesses struggle with basic cybersecurity hygiene, a parallel universe is unfolding where billionaire tech moguls are pouring unprecedented resources into artificial intelligence capabilities that will fundamentally reshape the cybersecurity battlefield.

This isn't just about better tools or smarter algorithms—it's about the complete transformation of how cyber warfare is conducted, defended against, and ultimately won or lost.

The Billionaire's Gambit: When Money Meets Machine Intelligence

Let's talk numbers that would make your head spin. Elon Musk's xAI just raised $6 billion in Series B funding, bringing the company's valuation to a staggering $24 billion. But here's the kicker—Musk isn't just building another chatbot. His Grok AI system is designed to be "maximally truth-seeking" and willing to tackle questions that other AI systems won't touch. While OpenAI plays it safe with content filters, Grok is being positioned as the rebellious teenager of the AI world, and that has profound implications for cybersecurity.

Meanwhile, OpenAI itself has become a $157 billion juggernaut, with Microsoft pumping in billions more through their partnership. But here's what most people miss: these aren't just investments in consumer AI applications. According to Check Point Research's 2025 AI Security Report, cybercriminals are already closely monitoring every new large language model release, testing each one for potential misuse within days of public launch.

The report reveals a sobering reality: 

"Currently, ChatGPT and OpenAI's API are the most popular models for cyber criminals, while others like Google Gemini, Microsoft Copilot, and Anthropic Claude are quickly gaining popularity."

But the real game-changer? Open-source models like DeepSeek and Qwen by Alibaba, which "enhance accessibility, have minimal usage restrictions, and are available in free tiers, making them a key asset to crime."

Enterprises Chasing the AI Tsunami - Do They Understand the Risk?

Companies are adopting AI first and foremost to boost efficiency and innovation. The phase of mere experimentation is clearly over; the task now is to scale up in well-defined fields.

As AI drives a Fourth Industrial Revolution, cybercriminals are leveraging generative AI to bolster their operations and increase the effectiveness of their tactics. Generative AI is being used to automate and scale the development of malicious code and to identify potential vulnerabilities in both IT and OT systems. A convincing phishing campaign that once took days to craft, for example, can now be generated in a matter of minutes.

The most common use cases are customer service, digital assistants, research and development, content production, product recommendations and, last but not least, cybersecurity itself.

Unfortunately, this surge in AI adoption also benefits criminal groups and other cyber threat actors, who test, develop and deploy AI technologies to increase their efficiency and gain a competitive advantage.

Qantas Cyberattack Highlights AI-Driven Threats to Aviation and Beyond

The cyberattack on Qantas that compromised the personal data of up to six million customers has become one of the most serious breaches in Australian aviation to date. The airline confirmed the breach stemmed from a third-party contact centre platform, exposing names, emails, phone numbers, birth dates, and frequent flyer details. While Qantas says no passport or financial data was accessed, the fallout was immediate — shares dropped 3.6% on the ASX as investor concerns over reputational and regulatory implications deepened.

Cybersecurity experts point to the hacking group Scattered Spider — also known as UNC3944 — as the likely culprit. Known for using AI-enhanced social engineering tactics, the group deploys deepfake audio, voice phishing, and impersonation techniques to manipulate helpdesk staff into resetting credentials or bypassing authentication protocols. Many members are reportedly young, native English speakers operating across the US and UK. Their tactics reflect a growing trend: using artificial intelligence not just to find vulnerabilities but to exploit human behaviour with alarming precision.

Check Point CEO Nadav Zafrir recently described in a Fox Business interview how AI is changing the architecture of attacks, with adversaries now manipulating user interfaces across hyperconnected networks. As AI becomes more embedded in both business operations and threat activity, the line between defense and exploitation is narrowing. What happened at Qantas is not an isolated event — it is part of a broader trend of the weaponisation of AI in the cybersecurity realm, extending from airlines to other critical assets across global supply chains and digital infrastructure.

AI used for legitimate business purposes also has serious implications for manufacturers. Its role in collecting, processing and storing data, running media campaigns, and managing workforce, business and production processes introduces additional security and regulatory risks. In some cases, the rapid evolution of AI has outpaced the data security tools that should protect it.

The weaponisation of AI represents perhaps the most significant evolution in cybercrime since the advent of the internet. It acts as a force multiplier, amplifying the human vulnerabilities explored in Chapter 1 while introducing entirely new categories of threats that traditional security frameworks struggle to address.

Stefan Golling, Munich Re Board of Management member responsible for Global Clients and North America: 

“In today's technology-dependent world, organizations can only be successful if they strengthen their digital defenses with robust, multi-layered risk management. Cyber insurance is an effective component in this approach...” 

The statistics paint a sobering picture of an arms race where attackers are gaining ground at an unprecedented pace. 

The Dark Side of the Force: When AI Goes Rogue

Here's where things get genuinely terrifying. Check Point's research has uncovered underground forums where cybercriminals are actively developing what they call "dark LLM models"—specialized AI systems explicitly tailored for cybercrime. These aren't just jailbroken versions of legitimate AI; they're purpose-built criminal tools.

Take WormGPT, for instance. Marketed as the "ultimate hacking AI," this malicious system can generate phishing emails, write malware, and craft social engineering scripts without any ethical constraints. It's being sold through Telegram channels with subscription models, highlighting what Check Point calls "the commercialization of dark AI."

But wait, it gets worse. The research reveals that cybercriminals have created entire AI models running on the dark web, like "Onion GPT," designed specifically to circumvent the safeguards that legitimate AI companies have spent millions developing. We're not just talking about clever prompt engineering here—we're talking about parallel AI ecosystems built from the ground up for criminal purposes.

Banking on AI: When Wall Street Meets Silicon Valley

Now, let's pivot to something that should keep every CISO awake at night: the financial sector's massive bet on AI. JPMorgan Chase, America's largest bank, has become the poster child for AI implementation in financial services, but their approach reveals both the promise and the peril of this technological revolution.

JPMorgan's COiN (Contract Intelligence) platform can analyze 12,000 commercial credit agreements in seconds—work that previously took lawyers 360,000 hours annually. Their LOXM trading system uses machine learning to optimize trade execution in global equity markets, processing millions of transactions with superhuman speed and accuracy. But here's the catch: JPMorgan's own CISO, Pat Opet, issued a stark warning in March 2025 about the security implications of rapid AI adoption.

In an open letter to third-party suppliers, Opet warned that 

"AI, automation, and feature-first product races are compounding vulnerabilities, with security often sacrificed for speed."

The bank is essentially saying: we're all moving so fast with AI implementation that we're creating massive security blind spots.


Down Under: Matt Comyn's Wake-Up Call

Speaking of wake-up calls, Commonwealth Bank of Australia CEO Matt Comyn delivered one of the most sobering assessments of AI's implications at the Australian Financial Review AI Summit in June 2025. Comyn, who's been leading CBA's own AI revolution with sophisticated customer service and fraud detection systems, warned that Australia risks becoming the "left behind country" if it doesn't get serious about AI adoption and security.

"The Commonwealth Bank boss doesn't subscribe to AI doomsday scenarios but says we still need to ask: what if they're right?"

This isn't just corporate hedging—it's a recognition that the AI arms race has geopolitical implications. When major banks like CBA and JPMorgan are implementing AI systems that process millions of customer interactions and financial transactions, the cybersecurity stakes couldn't be higher.

Comyn's concerns aren't theoretical. CBA has been rolling out increasingly sophisticated AI processes that are fundamentally changing the nature of banking operations. But as the bank becomes more AI-dependent, it also becomes more vulnerable to AI-powered attacks.

The G7 Response: When Governments Finally Wake Up

The good news? Democratic governments are finally starting to take this seriously. Canada's leadership of the 2025 G7 summit in Kananaskis, Alberta, marked a turning point in international AI governance. The summit's focus on "strengthening digital resilience" wasn't just diplomatic language—it was recognition that AI has become a national security issue.

The Atlantic Council's analysis of the G7 summit reveals three critical priorities that directly address the cybersecurity implications of AI:

  1. Common Language Development: Creating shared frameworks for understanding AI risks across allied nations
  2. Multilateral Coordination: Establishing mechanisms for rapid response to AI-powered cyber threats
  3. Pilot Project Implementation: Testing collaborative defense mechanisms against AI-enhanced attacks

But here's what makes this particularly interesting: the G7's approach recognizes that AI cybersecurity isn't just about defense—it's about maintaining democratic values while competing with authoritarian regimes that have fewer constraints on AI development and deployment.

The CISO's Dilemma: Defending Against the Unknown

Let's get real about what CISOs are facing. According to Splunk's State of Security 2025 report, which surveyed 2,058 security leaders globally, 46% of security teams spend more time maintaining their tools than actually defending against threats. Now imagine trying to defend against AI-powered attacks with that kind of operational inefficiency.

The Splunk research reveals a particularly troubling paradox: while only 11% of security leaders completely trust AI for critical security decisions, they're simultaneously facing AI-enhanced threats that traditional security tools simply can't handle. It's like trying to fight a jet fighter with a musket.

Barclays' research adds another layer to this challenge, revealing that 82% of businesses facing ransomware ultimately pay the ransom. But here's the kicker: AI-powered ransomware is becoming increasingly sophisticated, with attackers using machine learning to identify the most valuable data, optimize encryption methods, and even personalize ransom demands based on the victim's financial profile.


AI vs AI: A New Front in Cyber Defense

The same AI technologies criminals weaponize are also fueling unprecedented defensive capabilities. Companies like Mindgard, Vectra AI, and Cyera lead this charge, developing advanced solutions that detect vulnerabilities, track subtle attacker behaviors, and precisely classify sensitive data faster than traditional methods. However, these promising advancements also reveal a troubling paradox: as cybersecurity defenses become more sophisticated, so do attackers. Legitimate security companies operate within ethical frameworks and regulatory constraints, while criminals face none of these limitations, allowing them rapid experimentation and deployment of AI-driven attacks.
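
To make the defensive side concrete, here is a minimal sketch of the kind of machine-learning anomaly detection these tools build on, using scikit-learn's IsolationForest on synthetic login telemetry. The features, thresholds and data are illustrative assumptions, not any named vendor's actual method.

```python
# A minimal sketch of AI-assisted anomaly detection on login telemetry.
# All feature choices and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical telemetry per session: [hour of day, failed attempts, MB transferred]
normal_sessions = np.column_stack([
    rng.normal(13, 3, 1000),   # logins cluster around business hours
    rng.poisson(0.2, 1000),    # occasional failed attempts
    rng.normal(50, 15, 1000),  # typical data-transfer volumes
])

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_sessions)

# Score two new sessions: one ordinary, one resembling credential abuse
new_sessions = np.array([
    [14.0, 0.0, 55.0],   # midday login, no failures, normal transfer
    [3.0, 12.0, 900.0],  # 3 a.m. login, many failures, large outbound transfer
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "anomalous" if label == -1 else "normal"
    print(session, "->", verdict)
```

The point is not the specific model but the pattern: the system learns what "normal" looks like from historical telemetry and flags sessions that deviate, at a speed no human analyst can match.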

Compounding this issue, Cohesity's 2025 report highlights an escalating threat from AI-enhanced supply chain attacks. As businesses increasingly rely on third-party software, attacks like the SolarWinds breach could become more frequent and devastating. AI capabilities allow these attacks to swiftly identify valuable targets, dynamically adapt their strategies, and exploit software dependencies at scale.
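
On the defensive side of that same problem, one of the simplest countermeasures is pinning and verifying the checksum of third-party artifacts before use. The sketch below illustrates the idea; the file name and expected digest are hypothetical placeholders.

```python
# A minimal sketch of one supply chain defence: verifying a pinned checksum.
import hashlib
from pathlib import Path

# Hypothetical artifact and pinned digest; in practice the digest comes from
# the vendor's signed release notes or a lock file.
ARTIFACT = Path("vendor-package.whl")
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: Path, expected: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected

if not ARTIFACT.exists():
    print("No artifact present in this sketch.")
elif verify_artifact(ARTIFACT, EXPECTED_SHA256):
    print("Checksum matches the pinned value; proceed with installation.")
else:
    print("Checksum mismatch: possible upstream tampering, do not install.")
```

Package managers offer the same protection natively (for example, pip's --require-hashes mode), but the underlying check is just this comparison.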

Further amplifying this concern, Eftsure’s 2025 cybersecurity assessment emphasizes that cybercriminal organizations now function like sophisticated corporations, complete with specialized departments including HR, finance, and cybersecurity teams. This industrialization of cybercrime contributed significantly to global losses reaching US$485.6 billion in 2024, driven primarily by advanced Business Email Compromise (BEC) attacks.

From Banks to Borders: The Expanding AI Threat Landscape

The cybersecurity risks posed by AI are no longer confined to individual institutions—they’re cascading across sectors and borders. While financial giants like JPMorgan and Commonwealth Bank of Australia have championed AI adoption in fraud detection and operational efficiency, their leadership voices have also warned of the growing security blind spots that rapid deployment creates.

Governments are now responding. The 2025 G7 summit in Kananaskis, Alberta, marked a shift toward coordinated digital resilience, emphasizing democratic approaches to cybersecurity. Key initiatives included federated learning models—designed to protect data sovereignty while enhancing collective AI defenses—and faster multilateral coordination against AI-powered threats. Similarly, U.S. congressional efforts are focused on improving information sharing and securing small and medium-sized businesses increasingly targeted by AI-driven attacks.
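
To illustrate the federated learning idea behind that G7 initiative, here is a minimal sketch of federated averaging (FedAvg): each participant trains on its private data and shares only model weights with a central aggregator. The linear model and synthetic data are illustrative assumptions.

```python
# A minimal FedAvg sketch: clients train locally, only weights are shared.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One participant's gradient-descent steps on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient for a linear model
        w -= lr * grad
    return w

# Three hypothetical institutions, each with a private dataset drawn from
# the same underlying relationship (true_w) plus local noise.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(5):  # five federated rounds
    # Each client trains locally; only the resulting weights are shared.
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(updates, axis=0)  # the aggregator averages the weights

print("Recovered weights:", np.round(global_w, 2))  # approaches [ 2. -1.]
```

Because only weights cross organizational boundaries, participants can pool defensive intelligence without surrendering the underlying customer data, which is the data sovereignty property the summit emphasized.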

Looking ahead, Chapter 3 will delve into how AI intersects with vulnerabilities in critical infrastructure, further underscoring the alignment of cybersecurity with national security. Ultimately, the AI cybersecurity arms race is reshaping global economic and geopolitical landscapes. Whether democratic societies can effectively counteract criminal and state-sponsored adversaries operating without ethical boundaries remains a pivotal question, with strategic decisions made today in boardrooms and governments worldwide determining future outcomes.

