The Digital Siege Chapter 1: The Human Error Catastrophe – When Trust Becomes the Weapon
Cybercrime now targets people, not just systems. Chapter 1 exposes how hackers exploited human error at Marks and Spencer, triggering a £300 million breach. As AI adoption rises, trust and identity become the new battlegrounds—and our greatest vulnerability.
This article marks the start of a 10-chapter investigative series on the rise of AI-powered cybercrime and its global economic impact.
In 2025, the world faces more than just a surge in cybercrime — it is enduring a digital siege. From ransomware hitting critical infrastructure to AI-generated phishing campaigns hijacking industries, cyber threats have become a coordinated war on economic stability. At the center is artificial intelligence, now a weapon reshaping global conflict.
The Digital Siege investigates how AI, human error, and geopolitical agendas are converging to redefine cybercrime as a tool of economic disruption and statecraft. Drawing on expert insights from IBM, Gartner, Verizon, Barclays and others, this series explores the architecture of a crisis spanning boardrooms, supply chains, and entire nations.
We begin with the most vulnerable target — not systems, but people.
Chapter 1: The Human Error Catastrophe - When Trust Becomes the Weapon
The story of modern cybercrime begins not with sophisticated technology, but with a fundamental vulnerability that no firewall can protect: human trust. The recent attack on Marks and Spencer revealed how cyber criminals are now breaching people rather than systems, turning human instinct into their most effective weapon.
Stuart Machin, Chief Executive at Marks and Spencer. Source: BBC.
"We took our online system down ourselves to protect the website and customers,"
Stuart Machin, M&S Chief Executive, told reporters in May 2025, describing the aftermath of what would become Britain's most expensive cyber attack. The £300 million impact—equivalent to one-third of the retailer's annual profit—stemmed not from a technological breakthrough by hackers, but from what Machin described as "human error" at a third-party supplier.
The attack, attributed to the notorious English-speaking hacker group Scattered Spider, employed social engineering techniques that exploited human psychology rather than technological vulnerabilities.
Mark Hughes, Global Managing Partner, Cybersecurity Services at IBM. Source: X post from the DXC Tech conference.
This represents a fundamental shift in cybercriminal methodology, one that IBM's Global Managing Partner for Cybersecurity Services, Mark Hughes, captures succinctly:
"Cybercriminals are most often breaking in without breaking anything – capitalizing on identity and access management gaps proliferating from complex hybrid cloud environments. Compromised credentials offer attackers multiple potential entry points with effectively no risk".
The M&S incident is far from isolated. According to the Verizon Data Breach Investigations Report 2025, human elements remain involved in approximately 60% of all breaches, while third-party involvement has doubled from 15% to 30% in just one year. This dramatic increase in supply chain vulnerabilities reflects a systematic exploitation of trust relationships that bind the global economy together.
The financial implications extend far beyond immediate remediation costs. Research from Barclays suggests that 82% of businesses facing ransomware attacks ultimately pay the ransom, despite public statements to the contrary.
Lisa Forte, Partner at Red Goat Cyber Security. Source: LinkedIn.
Lisa Forte from cybersecurity firm Red Goat observes,
"If the data never gets dumped, there's a high chance a ransom was paid".
This creates a perverse economic incentive structure where criminal enterprises can rely on predictable revenue streams from their attacks.
The human error factor is compounded by the rapid adoption of generative AI tools in corporate environments. The Verizon report reveals that 15% of employees routinely access GenAI systems on their corporate devices, with 72% using non-corporate emails as account identifiers and 17% using corporate emails without integrated authentication systems. This represents what cybersecurity experts term the "AI Security Paradox"—the same properties that make generative AI valuable also create unique security vulnerabilities that traditional frameworks cannot address.
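What this looks like in practice is straightforward to sketch. The short Python example below is ours, not drawn from the Verizon report: it shows one way a security team might flag GenAI sign-ins that use personal email addresses, assuming an exported CSV of proxy or SSO records. The column names, corporate domain, and list of services are illustrative assumptions only.

```python
# Illustrative sketch: flag GenAI sign-ins that use non-corporate email identifiers.
# The CSV layout, column names, corporate domain, and service list are assumptions
# made for this example, not a description of any specific product or methodology.
import csv

CORPORATE_DOMAIN = "example.com"  # assumed corporate mail domain
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}  # sample services


def flag_shadow_genai(log_path: str) -> list[dict]:
    """Return records where a GenAI service was reached with an account
    identifier outside the corporate domain."""
    flagged = []
    with open(log_path, newline="") as f:
        # Expects columns: user_email, destination
        for row in csv.DictReader(f):
            dest = row["destination"].lower()
            email = row["user_email"].lower()
            if dest in GENAI_DOMAINS and not email.endswith("@" + CORPORATE_DOMAIN):
                flagged.append(row)
    return flagged


if __name__ == "__main__":
    for record in flag_shadow_genai("proxy_export.csv"):
        print(f"Non-corporate identifier {record['user_email']} used for {record['destination']}")
```

Even a crude report like this makes the scale of unmanaged GenAI accounts visible, which is the first step toward bringing them under integrated authentication.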
According to Gartner's third-quarter 2024 emerging risk survey, "AI-enhanced malicious attacks are the top emerging risk for enterprises in the third quarter of 2024."
This aligns with growing concern in high-risk industries like manufacturing, where the integration of AI into both IT and OT systems has expanded the potential for exploitation. As adversaries increasingly automate and personalize attacks, businesses face longer containment times, greater regulatory scrutiny, and rising pressure to reassess their risk exposure.
The median time to remediate leaked secrets discovered in GitHub repositories has reached 94 days, according to Verizon's analysis.
This extended exposure window provides ample opportunity for sophisticated threat actors to establish persistent access and conduct reconnaissance before launching their primary attacks. As we examine these human vulnerabilities, it becomes clear that the traditional approach of treating cybersecurity as a purely technological challenge is fundamentally flawed.
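As a rough illustration of what sits behind that 94-day figure, the sketch below shows the kind of lightweight secret scan a team might run across its own code before an attacker finds the same strings. The patterns are simplified examples of our own, not a complete rule set or any vendor's actual tooling.

```python
# Illustrative sketch: a lightweight scan for obvious secrets in a local checkout.
# The patterns below are deliberately simplified examples; real scanners (and real
# attackers) use far broader rule sets and also walk the full git history.
import re
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def scan_repo(root: str) -> list[tuple[str, str]]:
    """Return (file, finding) pairs for files matching any secret pattern."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits


if __name__ == "__main__":
    for file_path, label in scan_repo("."):
        print(f"{label} found in {file_path}")
```

The point is not the tooling itself but the timeline: every day a match like this sits unremediated in a public repository is a day of free reconnaissance for an attacker.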
The next chapter will explore how artificial intelligence has not only amplified these human vulnerabilities but has also created entirely new categories of threats that exploit the intersection of human psychology and machine learning capabilities.