Instagram’s AI Age Verification Sparks Privacy Debate
Instagram has launched an AI-driven age verification tool in Australia ahead of the December 10 ban on under-16s using social media. The move aims to boost child safety but raises major privacy concerns, with experts warning of risks tied to surveillance, data misuse and unreliable accuracy.
As Australia prepares for its groundbreaking ban on social media for under-16s, Instagram has quietly rolled out a new tool to head off underage users: artificial intelligence. The company confirmed its AI-based age verification system is now active in Australia, a move greeted with a mix of optimism and unease.
This update comes ahead of the December 10 deadline for platforms to comply with the federal government’s new law, covered previously by Cyber News Centre. The legislation, the first of its kind worldwide, will expose companies like Meta (Instagram’s parent company), TikTok and Snapchat to fines of up to AUD $50 million if they fail to keep children under 16 off their platforms.
Instagram’s system, already in use in the United States and expanding to Canada and the United Kingdom, uses behavioural patterns and other signals to estimate a user’s age. Suspected teens are moved into stricter “Teen Accounts,” which restrict live-streaming, filter sensitive material and limit direct messages. Users can appeal if they are misclassified, but the system is designed to err on the side of caution.
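Meta has not published the model’s internals, but the gating logic described above can be sketched as a simple rule: estimate whether a user is under 18 from available signals and, when in doubt, apply the Teen Account restrictions by default. The sketch below is purely illustrative; every signal name, weight and threshold is invented, and the real system is far more complex.

```python
from dataclasses import dataclass

# Illustrative only: these signals and thresholds are invented for this
# sketch; Meta has not disclosed what its actual model uses.
@dataclass
class AgeSignals:
    stated_age: int               # age the user entered at sign-up
    teen_follow_share: float      # share of followed accounts flagged as teens
    reported_underage: bool       # other users reported the account as underage

# Restrictions the article describes for Teen Accounts.
TEEN_RESTRICTIONS = {
    "live_streaming": "restricted",
    "sensitive_content": "filtered",
    "direct_messages": "limited",
}

def estimate_is_teen(s: AgeSignals) -> bool:
    """Err on the side of caution: any strong teen signal wins."""
    if s.stated_age < 18:
        return True
    if s.reported_underage:
        return True
    return s.teen_follow_share > 0.5  # invented threshold

def account_settings(s: AgeSignals) -> dict:
    # Misclassified adults would appeal their way out of these defaults.
    return dict(TEEN_RESTRICTIONS) if estimate_is_teen(s) else {}
```

The design choice worth noting is the default direction: because false negatives (a missed teen) are costlier under the new law than false positives (an adult who must appeal), any plausible teen signal triggers the restrictions.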
To help ensure as many teens as possible are enrolled in Teen Account settings on @instagram, we’re using AI to detect teens, prompting parents to confirm their teens’ ages online, and giving parents information about the importance of teens providing accurate ages online.… pic.twitter.com/BYgRgA35KD
The X post shows Meta’s push to present age checks as a supportive measure for teens, while also encouraging parents to play a role in how young people use social media.
Meta’s regional policy director, Mia Garlick, said,
“We’ve spent many years and invested heavily to refine our AI technology to identify, in a privacy-preserving way, whether someone is under or over 18”.
Even so, Meta has argued that age checks should sit with app stores like Apple’s App Store and Google Play to create a more consistent solution across the industry.
The government has stressed it does not expect platforms to check every user. eSafety Commissioner Julie Inman Grant said,
“These social media platforms know an awful lot about us. If you have been on, for example, Facebook since 2009, then they know you are over 16. There is no need to verify. We don’t expect that every under-16 account is magically going to disappear on Dec. 10. What we will be looking at is systemic failures to apply the technologies, policies and processes.”
Still, experts are wary of the risks. The National Institute of Standards and Technology (NIST) has cautioned that AI models can create privacy hazards, such as re-identifying individuals from supposedly anonymised data or leaking information from model training. That training data, if exposed, could be a rich target for attackers.
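The re-identification risk NIST describes is easy to illustrate: even with names stripped, a handful of quasi-identifiers (postcode, birth year, gender) can link an “anonymised” record back to a named one in a public dataset. The sketch below uses entirely synthetic, invented records to show the linkage mechanism.

```python
# Synthetic data: all records and field names are invented for illustration.
anonymised = [  # names removed, but quasi-identifiers retained
    {"postcode": "6000", "birth_year": 2009, "gender": "F"},
    {"postcode": "6000", "birth_year": 1984, "gender": "M"},
]
public = [  # e.g. a scraped list of public profiles
    {"name": "A. Example", "postcode": "6000", "birth_year": 2009, "gender": "F"},
    {"name": "B. Example", "postcode": "4000", "birth_year": 1984, "gender": "M"},
]

QUASI = ("postcode", "birth_year", "gender")

def reidentify(anon_row: dict, public_rows: list[dict]) -> list[str]:
    """Return names of public records matching on every quasi-identifier."""
    key = tuple(anon_row[q] for q in QUASI)
    return [p["name"] for p in public_rows
            if tuple(p[q] for q in QUASI) == key]
```

Here the first “anonymous” record matches exactly one named profile, which is why stripping names alone is not considered anonymisation.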
The accuracy of age estimation also remains shaky. It can rely on behavioural analysis or even facial recognition, both of which are open to manipulation: deepfakes and spoofing methods could bypass such safeguards, undermining confidence in the system and lulling users into a false sense of safety.
Cybersecurity researchers share that scepticism. Pieter Arntz, a malware intelligence researcher and former Microsoft MVP in consumer security, questions whether age verification genuinely protects children or merely creates another privacy risk.
Joel R. McConvey, writing for Biometric Update, argues that AI-based age inference is “bound to raise new concerns about data privacy, censorship and surveillance”. Simply monitoring behaviour to guess someone’s age amounts to surveillance, and the resulting data could be used for purposes beyond age checks.
As Australia moves into this new regulatory space, the tension between child safety and personal privacy will only grow. Protecting kids online is a goal few would dispute, but the methods chosen could reshape how much of our digital lives are monitored, stored and scrutinised. What happens in Australia will set the tone for how far governments and tech giants are prepared to go in the name of safety.
Get the stories that matter to you. Subscribe to Cyber News Centre and update your preferences to follow our Daily 4min Cyber Update, Innovative AI Startups, The AI Diplomat series, or the main Cyber News Centre newsletter — featuring in-depth analysis on major cyber incidents, tech breakthroughs, global policy, and AI developments.
Sign up for Cyber News Centre
Where cybersecurity meets innovation, the CNC team delivers AI and tech breakthroughs for our digital future. We analyze incidents, data, and insights to keep you informed, secure, and ahead.