2024 in Review: The Year Artificial Intelligence Met Cyber Chaos (Part 1)
As 2025 begins, 2024’s AI breakthroughs stand out, but so do the cyber threats that accompanied them. From AI-powered phishing to deepfakes and cloud breaches, the year highlighted the delicate balance between innovation and security risks.
As we step into 2025, the technological advancements of 2024 remain fresh in our minds. Last year will be remembered for its extraordinary leaps in artificial intelligence (AI), which reshaped industries and society at an unprecedented pace. However, it was also a year defined by a rising tide of cyber threats. From deepfakes to sophisticated AI-driven malware and phishing attacks, 2024 highlighted the fine line between innovation and risk.
The Rise of Generative AI: Innovation Meets Weaponization
In 2024, the explosive growth of generative AI technology proved both transformative and perilous. While platforms like ChatGPT and Midjourney expanded creative possibilities and problem-solving capabilities, they simultaneously opened the door for cybercriminals to exploit these tools for malicious purposes. AI-driven cyberattacks surged, with platforms such as WormGPT and FraudGPT automating phishing schemes and malware creation, significantly increasing the scale and sophistication of online threats.
One of the most concerning developments of 2024 was the weaponization of deepfake technology. AI-generated videos, audio clips, and images were used in disinformation campaigns, financial fraud, and even corporate extortion. The ability to create highly convincing but entirely fabricated content raised alarms about the erosion of trust in digital media. High-profile incidents, such as deepfake impersonations of political figures and business leaders, highlighted the devastating potential of this technology to deceive and manipulate public opinion.
AI’s role in spreading misinformation took a particularly insidious turn in the realm of politics. In the lead-up to elections, AI-generated deepfakes and synthetic content flooded social media, making it increasingly difficult to discern fact from fiction. Notable examples, including manipulated videos of U.S. political figures, demonstrated how easily public perception could be swayed by these tools. As the year progressed, the weaponization of AI in this manner became a significant concern, underscoring the urgent need for better detection systems and regulations to mitigate its impact on society.
Cloud Security in the Crosshairs
The shift to cloud computing continued unabated, but with it came an alarming 75% increase in cloud breaches in 2024. Attackers leveraged AI to exploit supply-chain vulnerabilities and manipulate software dependency chains, enabling a new class of attacks such as "Package Illusion." These sophisticated intrusions bypassed traditional defenses, underscoring the need for a paradigm shift in cybersecurity.
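Dependency-chain manipulation of the kind described above typically works by slipping a look-alike or unexpected package into a build. A minimal defensive sketch is to audit an environment against a vetted allowlist; the package names below are illustrative assumptions, not a reference to any specific tool or incident from the article:

```python
# Flag installed distributions that are absent from a vetted allowlist --
# a simple guard against dependency-substitution-style attacks.
from importlib import metadata

# Illustrative allowlist; a real one would come from a reviewed lockfile.
VETTED = {"pip", "setuptools", "wheel", "requests"}

def unvetted_packages(installed):
    """Return installed package names missing from the vetted set."""
    return sorted(p for p in installed if p.lower() not in VETTED)

def audit_environment():
    """Collect names of every distribution visible to this interpreter."""
    names = {dist.metadata["Name"] for dist in metadata.distributions()}
    return unvetted_packages(names)

if __name__ == "__main__":
    suspects = audit_environment()
    if suspects:
        print("Review these packages:", ", ".join(suspects))
```

In practice the same idea is enforced more robustly with hash-pinned lockfiles, so that even a package with an expected name is rejected if its contents change.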
Microsoft’s Digital Defense Report 2024 highlighted the potential for generative AI to wreak havoc in cloud environments as the technology matures. While AI-generated attacks remain relatively rare for now, the report warns of more advanced tactics to come, demanding proactive defense strategies.
AI-Powered Social Engineering: A Cunning Evolution in Cybercrime
In the unfolding narrative of modern cyber threats, social engineering has emerged as the ultimate confidence trick, cleverly recast for the digital age. Leveraging artificial intelligence tools capable of mimicking the subtle quirks of human communication, cybercriminals now deliver phishing lures that feel startlingly genuine. The result? Messages so finely tuned to their targets’ interests, fears, and routines that even the most vigilant recipients may be coaxed into betraying closely guarded secrets.
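Because AI-written lures read fluently, the old advice of spotting phishing by clumsy grammar no longer holds. One inexpensive countermeasure is to weigh message authentication results instead of prose quality. The sketch below parses a simplified Authentication-Results header; the verdict labels and header values are illustrative assumptions, not a production mail-filtering policy:

```python
import re

def auth_verdict(auth_results: str) -> str:
    """Classify a message from its Authentication-Results header.

    Any SPF/DKIM/DMARC failure is treated as suspicious; the text of
    the message is deliberately ignored, since AI-generated lures no
    longer betray themselves through poor writing.
    """
    checks = dict(re.findall(r"(spf|dkim|dmarc)=(\w+)", auth_results.lower()))
    if any(result == "fail" for result in checks.values()):
        return "quarantine"
    if all(checks.get(k) == "pass" for k in ("spf", "dkim", "dmarc")):
        return "deliver"
    return "flag-for-review"  # partial or missing authentication data
```

For example, `auth_verdict("spf=fail; dkim=pass")` returns `"quarantine"`, while a message passing all three checks is delivered.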
This narrative took an especially ominous turn as ransomware assaults skyrocketed. The rise of Ransomware-as-a-Service (RaaS) allowed relative newcomers to unleash attacks once reserved for seasoned criminal syndicates. In August 2024, CNC reported a chilling twist in a notorious storyline: the group behind last year’s ransomware siege on Dallas has rebranded itself as “BlackSuit,” leaving its past identity as “Royal” in the shadows. Today, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) confirm that these reinvented villains have issued ransom demands totaling more than $500 million.
A freshly updated federal advisory lays out the sordid details behind BlackSuit’s tactics, helping defenders anticipate their every move. Investigators note that individual ransom demands have soared as high as $60 million, and that the group’s change of identity was spotted as early as November. With fresh attacks under this new banner, the lesson is clear: the rules of engagement have changed.
“We’re witnessing a cybercriminal renaissance,” said CISA Director Jen Easterly. “Because of ransomware attacks, people are waking up to the idea of ‘what do I need to do to protect my family and my community?’”
At the heart of this transformed landscape lies artificial intelligence. From the initial reconnaissance to the final act of exploitation, AI injects criminal campaigns with a ruthless efficiency that was once unimaginable.
Global Response: A Mixed Bag
Governments and corporations scrambled to respond to these growing threats. Nations like the United States and members of the European Union strengthened regulatory frameworks for AI, while Australia and Japan prioritized international cooperation to tackle cross-border cybercrime.
Tech giants like Microsoft and Google doubled down on AI-driven cybersecurity measures. Microsoft emphasized integrating AI into defense strategies, focusing on mitigating ransomware, identity theft, and fraud. Google, on the other hand, invested in predictive analytics to identify potential threats before they materialized. Despite these efforts, the sheer pace of AI-driven cyberattacks often outstripped defense mechanisms, revealing critical gaps in global preparedness.