Cyber Breaches and AI Deepfakes: How the 2024 US Elections Are Vulnerable to Misinformation
Chinese hackers allegedly breached U.S. telecoms tied to the Harris and Trump campaigns, highlighting election security gaps. AI-driven deepfakes and disinformation are also surging on social media, raising risks to democracy as Election Day nears.
Chinese Hackers Allegedly Breach U.S. Telecoms, Raising Election Security Concerns
Recent reports that Chinese state-sponsored hackers allegedly infiltrated key U.S. telecom networks connected to both Kamala Harris’s and Donald Trump’s campaign communications signal a new level of threat to democratic systems. A group identified as Salt Typhoon allegedly breached the systems of major telecommunications companies, including AT&T, Verizon, and Lumen, potentially exposing sensitive campaign communications and wiretap operations. The FBI and Cybersecurity and Infrastructure Security Agency (CISA) swiftly responded, stating,
“After the FBI identified specific malicious activity targeting the sector, the FBI and [CISA] immediately notified affected companies, rendered technical assistance, and rapidly shared information to assist other potential victims.”
As the probe intensifies, these breaches highlight the vulnerability of electoral infrastructure and the growing risks that foreign interference poses to democratic nations.
Michael Kaiser, president and CEO of Defending Digital Campaigns (DDC), emphasized the stakes involved, stating,
“Our personal devices are prime targets because they have the potential to reveal so much about us—including who we speak to, our travel and meeting plans, communications with key staffers and family members, and more.”
With Election Day fast approaching, candidates and their teams face an unprecedented onslaught of cyber threats that jeopardize not only their privacy but also the broader democratic process.
AI Deepfakes and Disinformation as New Tools of Election Interference
In addition to traditional hacking, the U.S. election season is contending with a deluge of artificial intelligence (AI)-generated disinformation. AI-powered deepfake videos, often portraying manipulated or entirely fabricated scenarios, have become powerful weapons in the arsenals of foreign entities.
Examples include an expletive-laden deepfake video of Joe Biden, a doctored image of Donald Trump being forcibly arrested, and a fabricated video of Kamala Harris casting doubt on her own competence—each designed to confuse and mislead the electorate. “These recent examples are highly representative of how deepfakes will be used in politics going forward,” said Lucas Hansen, co-founder of the nonprofit CivAI.
“While AI-powered disinformation is certainly a concern, the most likely applications will be manufactured images and videos intended to provoke anger and worsen partisan tension.”
Arizona’s Secretary of State Adrian Fontes echoed concerns about AI’s role in voter manipulation, noting that
“generative artificial intelligence and the ways that those might be used”
represent a significant challenge for election officials. Arizona, among other battleground states, has taken proactive steps to prepare for such scenarios, with officials conducting tabletop exercises that envision Election Day disruptions fueled by AI-generated deepfakes.
The threat goes beyond mere confusion. In a politically polarized environment, these AI-powered manipulations are purpose-built to inflame existing divides, potentially influencing voter turnout. As Hansen demonstrated to reporters, an AI chatbot could be easily programmed to disseminate false claims about polling locations or election times, subtly steering certain voter groups away from the polls. This flood of deceptive content, often launched through anonymous social media accounts, adds yet another layer to the foreign interference puzzle.
The Multiplying Effect of AI on Social Media Platforms
The reach of AI disinformation is dramatically amplified by social media platforms, where algorithms can prioritize content based on engagement. This creates an environment ripe for exploitation by foreign entities, who seek to weaponize these algorithms to maximize the spread of false information. Russia and North Korea are reportedly leveraging social media as key vehicles for their disinformation campaigns, particularly targeting issues that polarize U.S. voters, such as immigration, healthcare, and race relations.
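To make that amplification dynamic concrete, the sketch below models a hypothetical engagement-weighted feed: posts that attract more reactions and reshares rank higher and are shown to more users, which is the feedback loop coordinated campaigns try to exploit. The scoring weights, post fields, and ranking function are illustrative assumptions for this article, not any platform’s actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    reshares: int
    replies: int

def engagement_score(post: Post) -> float:
    """Toy engagement score: reshares are weighted most heavily because
    they push content to new audiences. Weights are illustrative only."""
    return 1.0 * post.likes + 4.0 * post.reshares + 2.0 * post.replies

def rank_feed(posts: list[Post], top_k: int = 10) -> list[Post]:
    """Order a candidate pool purely by engagement: high-engagement posts,
    genuine or manufactured, surface first."""
    return sorted(posts, key=engagement_score, reverse=True)[:top_k]

if __name__ == "__main__":
    feed = rank_feed([
        Post("local-news", likes=120, reshares=10, replies=15),
        # A coordinated campaign can inflate reshares with inauthentic
        # accounts, lifting a fabricated post above organic content.
        Post("fabricated-claim", likes=90, reshares=400, replies=60),
        Post("policy-explainer", likes=300, reshares=25, replies=40),
    ])
    for post in feed:
        print(post.post_id, round(engagement_score(post), 1))
```

Because a ranking like this sees only engagement counts, inflating reshares with bot accounts is enough to push fabricated content to the top of a feed, which is why content moderation and bot detection figure so prominently in the concerns that follow.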
The multiplier effect created by AI and social media algorithms is of growing concern. In one example, Elon Musk shared a deepfake video of Kamala Harris with his 192 million followers on X (formerly Twitter), in which Harris, via a fabricated voiceover, calls President Biden “senile” and expresses doubt about her ability to govern.
The video, devoid of any disclaimer, spread quickly, stoking anger and confusion. It was only after a backlash that Musk clarified the video was intended as satire. This incident demonstrates the extraordinary influence that high-profile individuals can have in spreading manipulated content, as well as the need for clearer guidelines to flag AI-altered media.
The role of AI in disinformation is a complex problem that requires active management by tech companies. However, with many social media platforms scaling back on content moderation, there are serious concerns that these AI engines will continue to spread misinformation at a massive scale, ultimately influencing voter behaviour and fueling distrust in the electoral system.
Outlook: The Persistent Shadow of Cyber Threats to Democracy
As the November 5 election approaches, the stakes could not be higher. This moment could either mark the beginning of a new era of pervasive AI-driven disinformation, fake news, and fragmented democratic processes, or the election's aftermath could catalyze an even greater wave of AI-fueled misinformation campaigns and social media bot activity than anything seen during the campaign itself.
Will this be the new norm—a generation of amplified falsehoods and manipulated realities threatening the core of democracy? Or will the results of this election push these digital threats to unprecedented levels, leaving Western democracies to grapple with the fallout? The world waits, bracing for an answer.