Navigating the Uncertainties of Advanced AI Development
AGI's path is unclear, unlike past engineering projects. OpenAI's leadership changes reveal internal debates. As AI development spreads, concerns rise over concentrated power, prompting questions about governance and oversight.
The pursuit of advanced artificial intelligence (AI), specifically Artificial General Intelligence (AGI), embodies a blend of abstract concepts and real-world applications. Unlike concrete engineering feats like the Apollo Program or the Hoover Dam, AGI's development path remains enigmatic, posing unique challenges to both developers and policymakers.
In the realm of AI, particularly with AGI, we face a unique challenge: it exists more as an abstract notion than a defined entity.
This vagueness contrasts starkly with historical engineering milestones, such as the Apollo Program, where objectives and capabilities were clear-cut.
Image: Saturn V Rocket Model Used In Apollo Program
The distance to the Moon and the thrust of the rocket were known quantities. With AGI, there is no definitive measure of our proximity to the goal, nor a clear understanding of whether OpenAI's language models can get us there.
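The contrast can be made concrete. An Apollo mission planner could compute, from first principles, whether a rocket stage met a quantifiable target; there is no analogous calculation for "distance to AGI". A minimal sketch using the Tsiolkovsky rocket equation, with illustrative numbers (not historical Saturn V figures):

```python
import math

def delta_v(exhaust_velocity_ms: float, wet_mass_kg: float, dry_mass_kg: float) -> float:
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m0 / m1)."""
    return exhaust_velocity_ms * math.log(wet_mass_kg / dry_mass_kg)

# Illustrative stage: 3.0 km/s exhaust velocity, 90% of mass burned as propellant.
dv = delta_v(3000.0, 100_000.0, 10_000.0)
print(f"delta-v: {dv:.0f} m/s")  # a figure that can be checked against a stated mission requirement

# There is no comparable function for AGI: no agreed unit, target, or
# measurement exists, which is the crux of the benchmarking problem.
```

The point of the sketch is not the numbers but the shape of the question: an engineering goal admits a closed-form test of progress, while "proximity to AGI" does not.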
Recent actions, like the White House's executive order on AI, reflect the confusion surrounding open-source AI models. Some perceive OpenAI as lobbying for regulatory restrictions on its competitors.
While concerns about AGI being simultaneously imminent and perilous might be genuine, they fuel a paradoxical race to both develop and regulate it.
This was evident at OpenAI, where differing factions – one advocating for cautious progress, the other for accelerated development – clashed over the organisation's direction.
Contrasting AGI with landmark engineering projects like the Hoover Dam, which epitomised American industrial prowess, underscores the enigmatic essence of AGI.
The Hoover Dam, conceived in 1922 and authorised in 1928, with construction beginning in 1931, had explicit, measurable objectives, such as flood control and irrigation across the seven Colorado River basin states. This comparison accentuates the elusive and abstract nature of AGI.
Image: Hoover Dam
What implications does this have for our grasp of AGI and its possible development path? Might AI progress as swiftly as the evolution from early aeroplanes to spacecraft, or might it chart a distinct course? Such uncertainties often turn the discourse on AI risks into a realm of analogy and philosophical contemplation.
Without clear benchmarks, how do we approach the unknowns of AI development?
Image: Taken by Mojahid Mottakin
The recent tumult at OpenAI, marked by leadership changes and internal debates about its direction, brings to the fore a critical question: how should AI be governed, and by whom?
This situation highlights the intricate dance between ethical oversight and commercial goals within the AI industry. As OpenAI grapples with these issues, its relationship with Microsoft, a major investor and partner, plays a pivotal role in determining the path AI technology will take, with far-reaching implications for society.
Simultaneously, this unrest within OpenAI has inadvertently spurred a rapid evolution in the AI field. Companies that relied on OpenAI's technologies are now exploring alternatives, leading to a diversification and acceleration in AI development.
This shift challenges the notion that a few pioneering technologies or brilliant minds can singularly dictate the trajectory of AI. Instead, it suggests a more decentralised and multifaceted future for AI innovation.
However, this scenario raises a significant concern: With the increasing influence of a handful of corporations and individuals in shaping AI's future, are we overlooking potential risks?
The concentration of power and decision-making in the hands of a few in the AI sector, particularly in influential companies like OpenAI, poses a question of caution. Is it prudent to allow such a nascent and powerful technology to be predominantly influenced by corporate sector interests? Are there alternative approaches to AI development and governance that might better serve the broader interests of society?