From Telegram's Data Sharing to AI-Driven Election Interference: Unveiling Cyber Threats on Social Platforms
Cybercriminals and state-sponsored actors exploit social media for espionage and disinformation. Telegram is under fire for sharing data with Russia’s FSB, prompting Ukraine to restrict it. OpenAI's Ben Nimmo fights AI-driven disinformation targeting U.S. and European elections.
CNC Cyber Pulse: Social Media Exploitation and AI-Driven Disinformation in Global Statecraft
In today's digital landscape, social media platforms have become pivotal arenas for both organised cybercriminal syndicates and state-sponsored espionage. Two recent developments underscore how these platforms are being leveraged to influence public opinion, disrupt democratic processes, and conduct covert operations. This analysis examines the multifaceted role of social media in facilitating cybercrime and statecraft, as well as the emerging impact of artificial intelligence in these domains.
Telegram Under Fire: Russian Access Confirmed, Ukraine Responds with Platform Restrictions
Previously, CNC reported on significant shifts within Telegram following legal pressures faced by the platform's leadership. On September 23, 2024, Telegram announced a policy change to comply with valid legal requests, agreeing to share user IP addresses and phone numbers to enhance moderation and cooperation with authorities. This shift includes deploying an AI-supported moderation team and launching a bot for reporting illegal content, marking a substantial change in the platform's approach to user privacy and content management.
However, this new stance highlights Telegram's complex history with data sharing, especially in relation to Russia. Reports confirm that since 2018, Telegram has provided Russia’s Federal Security Service (FSB) with access to user data—a level of cooperation denied to Western authorities. Ukraine’s National Coordination Centre for Cybersecurity recently restricted Telegram use in its defence sector, citing exploitation by Russian intelligence. Rob Joyce, former director of cybersecurity at the NSA, noted,
“The idea that he [Durov] could come and go while defying Russia is inconceivable,”
emphasising the geopolitical nuances of Telegram’s data-sharing practices.
The fallout has been swift in underground circles. Nearly every major forum, from Exploit to Cracked, has opened threads to discuss migration options, with many users advocating alternative platforms such as Jabber, Tox, Matrix, Signal, and Session.
[Image credit: YouTube, CNN, Ben Nimmo]
AI and Election Security: OpenAI’s Ben Nimmo Leads the Fight Against Foreign Disinformation
An Editorial Extract and Review by CNC on Ben Nimmo—"This Threat Hunter Chases U.S. Foes Exploiting AI to Sway the Election"
As the United States approaches the 2024 presidential election, the intersection of artificial intelligence and election security has become increasingly critical. Ben Nimmo, the principal threat investigator at OpenAI, is at the forefront of efforts to counter foreign disinformation campaigns that leverage AI technologies. According to a report issued by The Washington Post, Nimmo has discovered that nations such as Russia, China, and Iran are experimenting with tools like ChatGPT to generate targeted social media content aimed at influencing American political opinion.
In a significant June briefing with national security officials, Nimmo's findings were so impactful that the officials meticulously highlighted and annotated key sections of his report. This reaction underscores the growing urgency surrounding AI-driven disinformation and its potential impact on democratic processes. While Nimmo characterises the current attempts by foreign adversaries as "amateurish and bumbling," there is a palpable concern that these actors may soon refine their tactics and expand their operations to disseminate divisive rhetoric more effectively using AI.
A notable example from Nimmo's recent investigations involves an Iranian operation designed to increase polarisation within the United States. The campaign distributed long-form articles and social media posts on sensitive topics such as the Gaza conflict and U.S. policies toward Israel, aiming to manipulate public discourse and exacerbate societal divisions.
Nimmo's work has gained particular significance as other major tech companies reduce their efforts to combat disinformation. His contributions are viewed by colleagues and national security experts as essential resources, especially in the absence of broader industry initiatives. However, some peers express caution regarding potential corporate influences on the transparency of these disclosures. Darren Linvill, a professor at Clemson University, remarked that Nimmo
"has certain incentives to downplay the impact,"
suggesting that OpenAI's business interests might affect the extent of information shared.
Despite these concerns, OpenAI and Nimmo remain steadfast in their mission. Nimmo continues to focus on detecting and neutralising disinformation campaigns before they gain momentum, aiming to safeguard the integrity of the electoral process from foreign interference amplified by artificial intelligence. His efforts highlight the critical role of vigilant monitoring and proactive intervention in protecting democratic institutions in the age of AI-driven misinformation.