Australia Grapples with AI-Enhanced Phishing and Data Exposure Risks

Australian businesses face a 1,265% surge in AI-powered phishing attacks. Cybercriminals use deepfakes, adaptive malware, and hyper-personalised scams while employees inadvertently leak data through AI tools. Expert insights reveal dual threats and essential protection strategies.


Australian businesses are on high alert as a new breed of cyber threat emerges, powered by artificial intelligence. Cybercriminals are now weaponising AI to create highly sophisticated and convincing phishing attacks, leaving many organisations scrambling to defend themselves against this invisible enemy.

Recent statistics paint a grim picture. According to IBM’s 2025 Cost of a Data Breach Report, AI-generated phishing has become the single most common AI-driven attack, accounting for 37% of breaches where attackers used AI.

Meanwhile, new research from Netskope Threat Labs shows that 121 out of every 10,000 Australian employees clicked on phishing links each month over the past year, a 140% increase from the previous period. With 63% of Australian organisations embracing generative AI, and Netskope finding that 87% of companies now have employees accessing AI applications monthly, the digital doors are wide open for attackers to exploit this powerful technology.

How AI Supercharges Phishing Attacks

Today's AI-powered phishing attacks are personalised, context-aware, and incredibly difficult to spot. Here's how they work:

  • Hyper-Personalisation: AI algorithms scrape the web for information about target companies and employees. From LinkedIn profiles to social media posts, AI gathers data to craft highly personalised emails. These messages might reference recent projects, colleagues' names, or personal interests, making them appear legitimate.
  • Voice and Video Impersonation: Deepfake technology allows attackers to clone a CEO's voice or create realistic videos for online meetings. They can then instruct employees to make urgent payments or transfer sensitive data, with the victim believing they're speaking to their boss.
  • Adaptive Malware: AI creates "polymorphic" malware that changes its own code to evade detection. This malware can analyse a company's network, identify vulnerabilities, and launch targeted attacks without human intervention.

The Dual Threat: External Attacks and Internal Data Exposure

While businesses focus on external AI threats, Netskope's research reveals an equally concerning internal risk. The financial consequences of both attack vectors are severe, with the average cost of a data breach for Australian organisations reaching $4.26 million—a 27% jump since 2020. For small businesses, a single cybercrime incident costs an average of $49,600.

Australian employees are inadvertently exposing sensitive data through AI applications, with intellectual property (42%), source code (31%), and regulated data (20%) being the most frequently leaked information types. This internal vulnerability is compounded by 55% of workers using personal AI accounts for work purposes, making monitoring and protection efforts significantly more challenging.

The data exposure risk extends well beyond dedicated AI tools. Employees routinely copy sensitive corporate data into personal cloud applications, then feed that same information into AI tools for analysis or assistance. This creates a dangerous data exposure chain, and attackers can use AI to scrape these platforms for intelligence gathering.

Research reveals just how widespread this vulnerability is across Australian workplaces. Personal cloud applications are ubiquitous, with LinkedIn and Microsoft OneDrive present in 95% of monitored environments, followed by Google Drive (94%), Facebook (93%), and ChatGPT (85%). Each of these platforms represents a potential data leakage point that can amplify the effectiveness of AI-powered attacks.

Figure: Top personal cloud apps used in Australian workplaces (Source: Netskope Threat Labs)

Ray Canzanese, Director of Netskope Threat Labs, warns: 

"The general availability of AI tools continues to enable threat actors to refine their social engineering techniques, and sophisticated phishing campaigns and convincing voice or video deepfakes are now regularly reported as the source of high profile data breaches. However, deliberate data theft is only part of the picture. Our data shows that the use of AI in the workplace is also a major risk vector for accidental data loss."

Protecting Your Business

Australian businesses need a multi-layered defence strategy:

  • Invest in AI-Powered Defence: Implement security solutions that use AI and machine learning to detect and respond to threats. These tools can analyse network traffic, identify suspicious patterns, and neutralise attacks before they cause damage.
  • Educate Your Team: Provide regular training on spotting sophisticated phishing emails, deepfake scams, and social engineering tactics. Simulated phishing exercises can test and reinforce this training.
  • Embrace Zero Trust: Implement strict access controls and require continuous authentication for all users and devices. The old model of a secure perimeter is no longer effective.
  • Stay Informed: Subscribe to real-time threat intelligence feeds to stay ahead of emerging threats and proactively update your defences.
  • Govern AI Usage: Implement company-approved AI solutions to combat "shadow AI", where employees use unauthorised tools. Deploy data loss prevention tailored to AI applications and establish clear policies about what information can be shared with AI systems (a minimal illustration of such a check follows this list).
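
To make the "Govern AI Usage" point concrete, the sketch below shows the kind of pattern-based check a data loss prevention layer might run before an employee's prompt leaves for an external AI service. The pattern names and the scan_prompt and safe_to_send helpers are hypothetical illustrations, not taken from Netskope or any particular DLP product; production tools rely on far richer detection such as document fingerprinting and machine-learning classifiers.

```python
import re

# Illustrative patterns for the data types most often leaked to AI tools:
# regulated identifiers, credentials, and internal markings.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "au_tax_file_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
    "api_key_or_secret": re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),
    "confidential_marking": re.compile(r"(?i)\b(confidential|internal only|commercial in confidence)\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Block the prompt and report the matches if anything sensitive is detected."""
    hits = scan_prompt(prompt)
    if hits:
        print(f"Blocked outbound AI prompt; matched: {', '.join(hits)}")
        return False
    return True

if __name__ == "__main__":
    example = "Please summarise this CONFIDENTIAL report. API_KEY=sk-1234567890"
    if safe_to_send(example):
        print("Prompt forwarded to the approved AI service.")
```

In practice, logic like this would sit in a forward proxy or browser extension in front of approved AI tools, with matches logged for security review rather than printed, so that accidental exposure is stopped before data leaves the organisation.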

The Australian government is stepping up efforts through the Australian Cyber Security Strategy 2023-2030 and the new Scams Prevention Framework (SPF) Bill, but businesses must take responsibility for their own security.

As Reuben Koh from Akamai Technologies puts it:

"Scams are not just an IT issue; they're a people, process, and ecosystem issue. Protecting Australians requires vigilance, technology, and collaboration across businesses, government, and consumers."

The reality for businesses today is that one convincing email or fake voice can cause major disruption. Protecting people and customers comes down to awareness, strong systems, and clear plans. It is not just about technology; it is about keeping operations steady and trust intact.


