AI Is Fueling a New Wave of Cyber Threats and the Tools to Fight Them

AI is fueling a new wave of cyber threats—but it's also powering the tools to stop them. From privacy concerns and energy strain to predictive security and autonomous defence, this article explores how businesses are adapting to the dual impact of AI in 2025. 


As artificial intelligence becomes deeply embedded in business operations, cybersecurity experts are sounding the alarm over data privacy, energy consumption, and the risks of unchecked adoption.

DeepSeek’s release of the R1 model marked a shift in public access to powerful AI reasoning. Offering free, unrestricted access, it significantly lowered the cost of entry—but also raised critical data security concerns. According to the company’s privacy policy, the app collects user prompts, chat history, uploaded files, keystrokes, and voice inputs, all stored off-site. As a result, employees have unwittingly fed sensitive company data into the app before internal security teams even knew it was in use.

“The moment a tool like this goes viral, employees can begin feeding sensitive company data into it before security teams are even aware,” said one industry analyst.

AI Tools Raise Business Risks Around Data Privacy and Energy Use

In Australia, the national science agency CSIRO estimates the AI market will reach $315 billion by 2028. However, public sentiment remains cautious, with 67% of Australians expressing discomfort about AI privacy protections.

On a global scale, the infrastructure supporting AI is placing unprecedented pressure on energy systems. Data centre electricity consumption has been surging since 2023, driven by exploding demand for AI. Deloitte predicts data centres will make up about 2% of global electricity consumption, or 536 terawatt-hours (TWh), in 2025, with projections that global data centre electricity consumption could roughly double to 1,065 TWh by 2030. This growth is prompting a shift towards more efficient chip designs and a reliance on renewable or low-impact energy sources.

Chart showing global data center electricity use rising from 333 TWh in 2022 to 1,065 TWh in 2030, driven by AI demand.
Source: Deloitte analysis based on publicly available information sources and conversations with industry experts.

According to Deloitte, to power these data centres and reduce the environmental impact, "many companies are looking to use a combination of innovative and energy-efficient data centre technologies and more carbon-free energy sources."

AI-Powered Cyber Defence Is Giving Organisations a Faster Way to Respond

While AI introduces risk, it’s also becoming one of the most powerful tools in cyber defence:

  • Threat detection and response: AI can scan millions of data points in real time to detect anomalies, helping prevent phishing, ransomware, and network intrusions before damage occurs.
  • Behavioural analysis: Algorithms can map employee or user behaviours to flag abnormal patterns—reducing insider threat risks.
  • Incident response automation: AI agents can isolate compromised systems and contain breaches automatically, often before human teams are alerted.
  • Predictive analytics: Machine learning models help forecast future attack vectors based on historical threat intelligence.
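To make the behavioural-analysis idea concrete, here is a minimal sketch of per-user baseline anomaly detection. The threshold, the event counts, and the user names are invented for illustration; real products use far richer behavioural models than a simple standard-deviation test.

```python
from statistics import mean, stdev

def flag_anomalies(history, current, threshold=3.0):
    """Flag users whose current activity deviates sharply from their own baseline.

    history: dict mapping user -> list of past daily event counts
    current: dict mapping user -> today's event count
    Returns the set of users whose count exceeds mean + threshold * stdev.
    """
    flagged = set()
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline data to judge this user
        mu, sigma = mean(counts), stdev(counts)
        # Guard against a zero stdev from a perfectly flat baseline
        if current.get(user, 0) > mu + threshold * max(sigma, 1e-9):
            flagged.add(user)
    return flagged

# Example: 'bob' suddenly generates far more events than his baseline.
history = {
    "alice": [10, 12, 11, 9, 10],
    "bob": [5, 6, 5, 7, 6],
}
current = {"alice": 11, "bob": 250}
print(flag_anomalies(history, current))  # {'bob'}
```

The same per-entity baseline idea underpins the insider-threat detection described above: the model learns what "normal" looks like for each user, then surfaces deviations for review.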

“Defensive AI agents can now collaborate to contain complex cyberattacks as they unfold,” said a cyber defence researcher. “This allows organisations to respond faster and more effectively.”

A growing number of companies are driving this shift. Darktrace uses self-learning AI to identify suspicious behaviour across networks. CrowdStrike’s Falcon platform provides real-time endpoint protection. Microsoft integrates AI threat detection into its Defender suite, while Google leverages DeepMind technology in its cloud security systems.

Enterprise adoption is spreading quickly. Banks like HSBC and Barclays are applying AI in fraud prevention and anti-money laundering. In Australia, Telstra and Woolworths Group have deployed AI to detect anomalies across their internal systems.

However, as AI makes development faster through tools like CursorAI and Replit, the same tools are being exploited by bad actors to generate malware. The barrier to launching a cyberattack is lower than ever, creating urgency for developers and cybersecurity firms to collaborate on secure AI environments.

Multi-agent systems—where coordinated AI units operate independently across digital infrastructure—are gaining traction. These systems can detect, communicate, and react in real time, offering organisations faster and more adaptive defence capabilities.
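A toy sketch of that detect, communicate, and react loop follows. The agent classes, the alert schema, and the failed-login threshold are all invented for illustration and do not reflect any vendor's API; production systems coordinate many more agents over real message buses.

```python
import queue

class DetectorAgent:
    """Watches a stream of events and publishes alerts for suspicious ones."""
    def __init__(self, bus):
        self.bus = bus

    def observe(self, event):
        # Hypothetical rule: a burst of failed logins suggests brute force.
        if event.get("failed_logins", 0) > 10:
            self.bus.put({"alert": "brute_force", "host": event["host"]})

class ResponderAgent:
    """Consumes alerts and isolates affected hosts without human input."""
    def __init__(self, bus):
        self.bus = bus
        self.isolated = set()

    def run_once(self):
        try:
            alert = self.bus.get_nowait()
        except queue.Empty:
            return  # nothing to react to this cycle
        self.isolated.add(alert["host"])

# Wire two independent agents together over a shared message bus.
bus = queue.Queue()
detector = DetectorAgent(bus)
responder = ResponderAgent(bus)

detector.observe({"host": "web-01", "failed_logins": 42})
detector.observe({"host": "db-02", "failed_logins": 1})
responder.run_once()
print(responder.isolated)  # {'web-01'}
```

The point of the sketch is the division of labour: detection and response are separate agents that coordinate only through messages, which is what lets such systems contain a breach before human teams are alerted.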

As AI governance standards solidify, industry-specific assurance frameworks will be critical to validate AI systems' safety, fairness, and reliability.

In 2025, businesses face a defining challenge: harnessing the productivity of AI while protecting operations, resources, and trust from its unintended consequences.

