2024 in Review: The Year Artificial Intelligence Met Cyber Chaos (Part 1)
As 2025 begins, 2024’s AI breakthroughs stand out, but so do the cyber threats that accompanied them. From AI-powered phishing to deepfakes and cloud breaches, the year highlighted the delicate balance between innovation and security risks.
As we step into 2025, the technological advancements of 2024 remain fresh in our minds. Last year will be remembered for its extraordinary leaps in artificial intelligence (AI), which reshaped industries and society at an unprecedented pace. However, it was also a year defined by a rising tide of cyber threats. From deepfakes to sophisticated AI-driven malware and phishing attacks, 2024 highlighted the fine line between innovation and risk.
The Rise of Generative AI: Innovation Meets Weaponization
In 2024, the explosive growth of generative AI technology proved both transformative and perilous. While platforms like ChatGPT and Midjourney expanded creative possibilities and problem-solving capabilities, they simultaneously opened the door for cybercriminals to exploit these tools for malicious purposes. AI-driven cyberattacks surged, with tools such as WormGPT and FraudGPT automating phishing schemes and malware creation, significantly increasing the scale and sophistication of online threats.
One of the most concerning developments of the year was the weaponization of deepfake technology. AI-generated videos, audio clips, and images were used in disinformation campaigns, financial fraud, and even corporate extortion. The ability to create highly convincing but entirely fabricated content raised alarms about the erosion of trust in digital media. High-profile incidents, such as deepfake impersonations of political figures and business leaders, highlighted the devastating potential of this technology to deceive and manipulate public opinion.
AI’s role in spreading misinformation took a particularly insidious turn in the realm of politics. In the lead-up to elections, AI-generated deepfakes and synthetic content flooded social media, making it increasingly difficult to discern fact from fiction. Notable examples, including manipulated videos of U.S. political figures, demonstrated how easily public perception could be swayed by these tools. As the year progressed, the weaponization of AI in this manner became a significant concern, underscoring the urgent need for better detection systems and regulations to mitigate its impact on society.
Cloud Security in the Crosshairs
The shift to cloud computing continued unabated, but with it came an alarming 75% increase in cloud breaches in 2024. Attackers leveraged AI to exploit vulnerabilities in supply chains and to manipulate software dependency chains, a new class of attack exemplified by so-called "Package Illusion" schemes, in which malicious packages are published under plausible-looking names that developers (or their AI coding assistants) are likely to request. These sophisticated intrusions bypassed traditional defenses, underscoring the need for a paradigm shift in cybersecurity.
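The dependency-chain manipulation described above can be partially mitigated with a pre-install audit. The sketch below is illustrative only: the allowlist, the "popular package" set, and the similarity threshold are assumptions, not a real registry or a production detector.

```python
# Hedged sketch: flag suspicious dependency names before installation.
# POPULAR and the 0.85 threshold are illustrative assumptions.
from difflib import SequenceMatcher

POPULAR = {"requests", "numpy", "pandas", "urllib3"}  # stand-in for a real popularity list

def audit_dependencies(declared, allowlist):
    """Return (unknown, lookalike) package names.

    unknown:   names not on the internal allowlist -- candidates for
               hallucinated or attacker-registered packages.
    lookalike: unknown names highly similar to a popular package --
               the classic typosquatting pattern.
    """
    unknown, lookalike = [], []
    for name in declared:
        if name in allowlist:
            continue
        unknown.append(name)
        for real in POPULAR:
            if name != real and SequenceMatcher(None, name, real).ratio() > 0.85:
                lookalike.append((name, real))
    return unknown, lookalike
```

In practice a check like this would run in CI against the organization's vetted package mirror, blocking any `unknown` name until a human reviews it.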
Microsoft’s Digital Defense Report 2024 highlighted the potential for generative AI to wreak havoc in cloud environments as the technology matures. While AI-generated attacks remain relatively rare for now, the report warns that more advanced tactics are coming, demanding proactive defense strategies.
AI-Powered Social Engineering: A Cunning Evolution in Cybercrime
In the unfolding narrative of modern cyber threats, social engineering has emerged as the ultimate confidence trick, cleverly recast for the digital age. Leveraging artificial intelligence tools capable of mimicking the subtle quirks of human communication, cybercriminals now deliver phishing lures that feel startlingly genuine. The result? Messages so finely tuned to their targets’ interests, fears, and routines that even the most vigilant recipients may be coaxed into betraying closely guarded secrets.
This narrative took an especially ominous turn as ransomware assaults skyrocketed. The rise of Ransomware-as-a-Service (RaaS) allowed relative newcomers to unleash attacks once reserved for seasoned criminal syndicates. In August 2024, CNC reported a chilling twist in a notorious storyline: the group behind last year’s ransomware siege on Dallas has rebranded itself as “BlackSuit,” leaving behind its former identity as “Royal.” The FBI and the Cybersecurity and Infrastructure Security Agency (CISA) confirm that these reinvented villains have collectively demanded over $500 million in ransoms from their targets.
A freshly updated federal advisory lays out the details of BlackSuit’s tactics, helping defenders anticipate the group’s moves. Investigators note that individual ransom demands have reached as high as $60 million, and that the shift in identity was spotted as early as November. With fresh attacks under this new banner, the lesson is clear: the rules of engagement have changed.
“We’re witnessing a cybercriminal renaissance,” said CISA Director Jen Easterly. “Because of ransomware attacks, people are waking up to the idea of ‘what do I need to do to protect my family and my community?’”
At the heart of this transformed landscape lies artificial intelligence. From the initial reconnaissance to the final act of exploitation, AI injects criminal campaigns with a ruthless efficiency that was once unimaginable.
Global Response: A Mixed Bag
Governments and corporations scrambled to respond to these growing threats. Nations like the United States and members of the European Union strengthened regulatory frameworks for AI, while Australia and Japan prioritized international cooperation to tackle cross-border cybercrime.
Tech giants like Microsoft and Google doubled down on AI-driven cybersecurity measures. Microsoft emphasized integrating AI into defense strategies, focusing on mitigating ransomware, identity theft, and fraud. Google, on the other hand, invested in predictive analytics to identify potential threats before they materialized. Despite these efforts, the sheer pace of AI-driven cyberattacks often outstripped defense mechanisms, revealing critical gaps in global preparedness.
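Predictive analytics of the kind described above typically starts from a statistical baseline: score each new observation against historical norms and escalate outliers. The toy sketch below illustrates that idea with simple z-scores over hourly event counts; the data and thresholds are invented for illustration and are not tied to any vendor's actual system.

```python
# Hedged sketch: baseline anomaly scoring over event counts.
# Real predictive systems use far richer features; this shows only the core idea.
from statistics import mean, stdev

def anomaly_scores(counts):
    """Z-score each observation against the series' own baseline.

    A score well above the rest flags a period whose event volume is far
    outside the historical norm -- a crude stand-in for the predictive
    models the article describes.
    """
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return [0.0] * len(counts)
    return [(c - mu) / sigma for c in counts]
```

For example, a week of quiet hourly login counts followed by one spike would give the spike a z-score several times higher than any other hour, which a monitoring pipeline could route to an analyst.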