
This speech was delivered by eSafety Commissioner Julie Inman Grant at the National Press Club in Canberra, ACT.
Watch it via the National Press Club YouTube channel.
Opening remarks
It’s great to be here in Canberra at the Press Club once again.
The last time I stood at this podium was seven years ago when I talked about a very different online world to the one I’ll be canvassing today.
While 2018 doesn’t seem all that long ago, in tech terms, it’s not just a lifecycle, it’s a lifetime.
A rapidly evolving online world
I think you’d all agree that social media has evolved massively from its early days, reflecting trends towards short-form video, ephemeral media, livestreaming, and feeds curated by opaque algorithms. This technological convergence is blurring previously distinct lines.
And, shortly I’ll discuss how eSafety plans to implement Australia’s social media minimum age bill. But first I’d like to touch on some of the other remarkable changes we’ve seen the online world undergo, driven by rapid advances in technology, seismic shifts in user behaviour, and of course, the exponential rise of artificial intelligence.
Just as AI has brought us much promise, it has also created much peril. And these harms aren’t just hypothetical - they are taking hold right now.
In February, eSafety put out its first Online Safety Advisory because we were so concerned with how rapidly children as young as 10 were being captivated by AI companions - in some instances, spending up to five hours per day conversing with sexualised chatbots.
Schools reported to us these children had been directed by their AI companions to engage in explicit and harmful sexual acts.
Further, hardly a week goes by without a deepfake image-based abuse crisis in one of Australia’s schools. Back in 2018, it would have taken hundreds of images, massive computing power and high levels of technical expertise to create a credible deepfake pornographic video.
Today, the most common scenario involves harvesting a few images from social media and plugging those into a free nudifying app on a smartphone. And while the cost to the perpetrator may be free, the cost to the victim-survivor is lingering and incalculable.
And herein lies the perpetual challenge of an online safety regulator – trying simultaneously to fix the tech transgressions of the past and remediate the harms of today, while keeping a watchful gaze towards the threats of the future.
A more powerful, more dangerous online world
There is little doubt the online world of today is far more powerful, more personalised, and more deeply embedded in our everyday lives than ever before.
It’s also immeasurably more complex and arguably much wilder. The ethos of moving fast and breaking things has been ratcheted up in the age of AI, heightening the risks and raising new ethical, regulatory, and societal questions – as well as adding a layer of uncertainty about what even the near future might hold.
But behind all these changes, some things remain the same.
Very few of these platforms and technologies were created with children in mind, or with safety as a primary goal. Today, safety by design is not the norm, it is the exception.
While the tech industry continues to focus on driving greater engagement and profit, user safety is being demoted, deprecated or dumped altogether.
Australia leading the way
So, while the tech industry regresses, we must continue to move forward.
And Australia is doing just that. The Albanese Government’s Social Media Minimum Age law is the first of its kind to pass anywhere in the world, with overwhelming support across the Parliament and the states.
Numerous other countries are now hotly debating these issues, seeking effective age assurance interventions to correct this imbalance.
And, I can assure you, they are all beating down our door to find out just how Australia plans to take this bold regulatory action forward.
Mental health imperative
The relationship between social media and children’s mental health is one of the most important conversations of our time.
It naturally generates much debate and emotion. Therefore, it is important we ground these discussions in evidence and prioritise the best interests of the child from the start.
And, even more importantly, that we engage young Australians in these discussions throughout the policymaking and implementation process.
There is no question social media offers benefits and opportunities, including connection and belonging - and these are important digital rights we want to preserve.
But we all know there is a darker side, including algorithmic manipulation, predatory design features such as streaks, constant notifications and endless scroll to encourage compulsive usage, as well as exposure to increasingly graphic and violent online content.
Water safety for the digital age
These are indeed treacherous waters for children to navigate, especially while their maturity and critical reasoning skills are still developing. This is where we can learn so much from the tried and tested lessons of water safety that Australia has pioneered.
From backyard pools to the beach, Australia’s water safety culture is a global success story – a mixture of regulation, education and community participation that reduces risk and supports parents in keeping their children happily and safely frolicking in the sea.
Picture any major beach in Australia and it will likely include the familiar sight of red and yellow flags fluttering in the breeze, children splashing in the waves, and lifeguards standing watch.
Parents keep a watchful eye too but are quietly confident in the knowledge their kids will be ok - not because the ocean is safe, but because we’ve learned how to live beside it.
We teach our kids to swim, and we give them the skills to recognise and master the ocean’s dangerous and often unseen currents.
We do everything we can to minimise the risks of drowning but we understand the sea represents such a powerful and vast force that we can never hope to totally eliminate the risks.
We have to take the same approach online. This cannot simply be about restriction; it must also be about phased safety preparation, coupled with multiple forms of oversight.
We cannot totally fence off this vast digital ocean, but we can equip young people with essential survival skills to not just keep their heads above the waves, but to thrive and stay safer.
Building digital resilience
Teaching digital and algorithmic literacy is the closest thing we have to online swimming lessons - and eSafety has a vast repository of resources and programs to facilitate this learning today.
By helping children to think critically - to spot social engineering and deepfakes, and to heed those warning signs – we are setting our kids up for more online independence, resilience and enjoyment in their later teen years.
Delaying, not denying – what the law really means
Australia’s Social Media Age Restriction Act is one way we are planting those flags in the digital sand.
It’s also putting accountability precisely where it belongs - on the platforms themselves.
But calling it a ban misunderstands its core purpose and the opportunity it presents.
We are not building a great Australian internet firewall, but we are seeking to protect under-16s from those unseen yet powerful forces in the form of harmful and deceptive design features that currently drive their engagement online.
For that reason, it may be more accurate to frame this as a social media delay - giving children a reprieve from the persuasive pull of platforms engineered to keep them digitally entranced - and entrenched.
A forced delay was precisely the idea behind the 36 Months campaign, a concerted national effort to prioritise the mental health of Australian children by pushing for the minimum age of access to social media to be raised from 13 to 16.
Importantly, this new regulation will give us all precious time to build up our children’s online swimming skills, so that when they are ready, they’ll possess the confidence to swim against the tide of the harmful content flooding their social media feeds.
New research on children’s online harms
The potential risks to children of early exposure to social media are becoming clearer and I have no doubt there are parents in this audience today who could share stories of how it has affected their own children and families.
That is why today, I’m presenting some of our latest research for the first time which reveals just how pervasive online harms have become for Australian children.
We surveyed more than 2,600 children aged 10 to 15 to understand the types of online harms they face and where these experiences are happening. Unsurprisingly, social media use in this age group is nearly ubiquitous, with 96% of children reporting they had used at least one social media platform.
Alarmingly, around 7 in 10 kids said they had encountered content associated with harm, including exposure to misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting disordered eating.
Children told us that 75% of this content was most recently encountered on social media. YouTube was the most frequently cited platform, with almost 4 in 10 children reporting exposure to content associated with harm there.
This also comes as the New York Times reported earlier this month that YouTube surreptitiously rolled back its content moderation processes to keep more harmful content on its platform, even when the content violates the company’s own policies.
This really underscores the challenge of evaluating a platform’s relative safety at a single point in time, particularly as we see platform after platform winding back their trust and safety teams and weakening policies designed to minimise harm, making these platforms ever-more perilous for our children.
Perhaps the most troubling finding was that 1 in 7 children we surveyed reported experiencing online grooming-like behaviour from adults or other children at least 4 years older. This included asking inappropriate questions or requesting they share nude images.
Just over 60% of children most recently experienced grooming-like behaviour on social media, which highlights the intrinsic hazards of co-mingled platforms designed for adults but also inhabited by children.
Cyberbullying and sexual extortion
Cyberbullying remains a persistent threat to young people but isn’t the sole domain of social media - while 36% of kids most recently experienced online abuse from their peers there, another 36% experienced online bullying on messaging apps and 26% through online gaming platforms. This demonstrates that this all-too-human behaviour can migrate to wherever kids are online.
What our research doesn’t show – but our investigative insights and reports from the public do - is how the tenor, tone and visceral impact of cyberbullying affecting children has changed and intensified.
We have started issuing “end user notices” to Australians as young as 14 for hurling unrelenting rape and death threats at their female peers. Caustic language, like the acronym KYS – shorthand for “kill yourself” – is becoming more commonplace.
We can all imagine the worst-case scenario when an already vulnerable child is targeted by a peer who doesn’t fully comprehend the power and impact of throwing those digital stones.
Sexual extortion is reaching crisis proportions with eSafety experiencing a 1,300% increase in reports from young adults and teens over the past three years.
And, our investigators have recently uncovered a worrying trend: a 60% surge over the past 18 months in reports of child sexual extortion targeting 13-15 year olds.
But we’ve also seen an increase in reports from 16-17 year old boys. This again demonstrates the importance of building the resilience of this upper range of teens, so that when they do come “back online” to social media, their defences are hardened to anticipate and meet these looming online threats.
Addressing generative AI harms
As I mentioned before, the rise of powerful, cheap and accessible AI models without built-in guardrails or age restrictions is a further hazard faced by our children today.
Emotional attachment to AI companions is built in by design, using anthropomorphism to generate human-like responses and engineered sycophancy to provide constant affirmation and the feeling of deep connection.
Lessons from overseas have highlighted tragic cases where these chatbots engaged in quasi-romantic relationships with teens that ended in suicide.
In the Character.AI wrongful death suit in the US, lawyers for the company effectively argued that the free speech outputs of chatbots should be protected over the safety of children, clearly as a means of shielding the company from liability.
Thankfully, the judge in this case rejected this argument – just as we should reject AI companions being released into the Australian wild without proper safeguards.
As noted earlier, the rise of so-called “declothing apps” or services that use generative AI to create pornography or ‘nudify’ images without effective controls is tremendous cause for concern.
There is no positive use case for these kinds of apps – and they are starting to wreak systematic damage on teenagers across Australia, mostly girls.
eSafety has been actively engaging with educators, police, and the app makers and app stores themselves, and will be releasing deepfake incident management plans for schools this week as these harmful practices become more frequent and normalised.
What is important to underscore is that when either real or synthetic image-based abuse is reported to us, eSafety has a 98% success rate in getting this content down – and our investigators act quickly.
Our mandatory Phase 1 standards – which require the tech industry to do more to tackle the highest-harm online content like child sexual abuse material – will take effect this week, and will help us force the purveyors and profiteers of these AI-powered nudifying models to prevent them being misused against children.
And our second phase of codes, which I will talk about shortly, will put in place protections for children from harmful material like pornography and will force providers of these AI chatbots to protect children from sexualised content.
Supporting parents and shifting the burden
Now, I know that was a lot to take in and I’m sure these trends are of deep concern to all of us. They also bring into sharp focus what we are attempting to do here and why we are attempting to do it.
In essence, we are seeking to create some friction in a system to protect children where previously there has been close to none. And in doing so, we can also provide some much-needed support for parents and carers struggling with these issues.
As the Prime Minister has often said, this legislation is not just about social media finally demonstrating social responsibility, it is about creating a normative change for parents.
It’s a constant challenge for parents having to juggle the urge to deny access to services they fear are harmful with the anxiety of leaving their kids socially excluded.
I can speak from lived experience with three teenagers that this sometimes feels like very effective “reverse peer pressure!”
While the social media delay won’t solve everything, it will create some friction in the system, and this will further reinforce some of the other measures I’m also talking about today.
Holding platforms accountable
Ultimately, this new world-leading legislation seeks to shift the burden of reducing harm away from parents and back onto the companies who own and run these platforms and profit from Australian children.
We are treating Big Tech like the extractive industry it has become. Australia is legitimately asking companies to provide the lifejackets and safety guardrails that we expect from almost every other consumer-facing industry.
eSafety is charged with implementing and evaluating this new law and at the halfway point we have made significant progress, which puts us well on track to meet the deadline of the 10th of December.
The stakes are high and we know the eyes of the world are upon us.
But I think it’s fair to say that, as eSafety approaches its 10th birthday, Australia has already led the world in its commitment to online safety, and we are showing our world-leading credentials once again.
But, I want to be clear, this legislation does not give us a mandate to cut the Coral Sea Cable or deplatform social media on app stores, nor should we create expectations that every child’s social media account will magically disappear overnight.
Our implementation of this legislation is not designed to cut kids off from their digital lifelines or inhibit their ability to connect, communicate, create and explore.
Far from it.
In that vein, I should also be clear that there will be no penalties for those underage children who gain access to an age-restricted social media platform, or for their parents or carers who may enable this earlier access.
The responsibility lies, as it should, with the platforms themselves, and there are heavy penalties of up to $49.5 million per breach for companies who fail to take reasonable steps to prevent underage users from holding accounts on their services.
Privacy, safety and rights working together
While there is still a lot that needs to be considered, a key principle as we approach implementation is the recognition that children have important digital rights – the right to participation, the right to dignity, the right to be free from online violence and, of course, the right to privacy.
The Privacy Commissioner, Carly Kind, who is with us today, has a very important role here to monitor and enforce compliance with the privacy provisions set out in this legislation as well as those set out under the Privacy Act.
This is an incredibly important part of the regulatory puzzle and demonstrates that protecting privacy and safety need not be mutually exclusive. So, we will continue to work together in lockstep on both regulatory guidance and implementation.
Regulatory milestones and Ministerial action
There is no question that this is one of the most complex and novel pieces of legislation eSafety has ever implemented.
Key milestones have now been reached, with more coming just over the horizon, bringing us closer to the minimum age obligation taking effect.
One of the pivotal steps will involve the Minister for Communications, Anika Wells, making rules on which platforms are included or excluded from the minimum age obligation.
While she couldn’t be here with us today, we have already seen that the Minister will bring great energy and insights to online safety policy going forward.
As many of you here would know, last week Minister Wells sought my independent safety advice on the draft rules. This was published yesterday for the purposes of full transparency.
It will now be up to the Minister to make the rules and have them tabled in Parliament to be considered through the usual parliamentary scrutiny process.
You would have also seen the release of the preliminary findings of the government’s age assurance trial.
The trial involves 53 participants and has been looking at the efficacy of a broad range of age assurance technologies and their suitability in the Australian context.
The trial has demonstrated an incredibly positive use case for AI – almost all of the technologies tested employ it. Age estimation tools harness AI the most, but age inference and successive validation tools also use algorithmic analysis.
The key preliminary finding is that age assurance can be achieved in Australia in a manner that is private, robust and effective.
eSafety will start consulting on our regulatory framework this week and will use this varied feedback to further hone our regulatory guidance.
Ensuring compliance and preventing circumvention
Under the legislation, captured service providers will have to satisfy me that they have implemented reasonable steps to prevent under 16s from having accounts on their service.
We will seek to ensure our guidelines are clear, effective, proportionate and fit for purpose. Over the next few months, we will be talking to over 150 stakeholders including industry, academics, advocates, rights groups, parents and most importantly, young people themselves.
These reasonable steps are likely to include the kinds of privacy-preserving age assurance measures platforms need to have in place, or the level of proactive detection required by services to identify underage users.
We expect that this will require a range of methods and multi-signal age inference tools built into the captured platforms – this will not involve technology mandates but will likely require a waterfall of effective techniques and tools.
eSafety will need to be satisfied that platforms will identify the users under 16 currently on their services, and that companies are clear about how they will systematically prevent under-16s from joining their services through effective approaches to age assurance at sign-up.
Many of these platforms are already using proprietary tools or trialling third-party solutions to do just this today.
And, despite some claims to the contrary, the technology exists right now for these platforms to identify under 16s on their services without having to age verify everyone on their platform.
From our consultation with industry thus far, we know at least one major platform has identified under 16s without having to age verify its entire user base.
We expect both the technology and cascading age assurance techniques will develop and become more widespread as we move toward the commencement date. Of course, using the least invasive and most privacy-preserving approaches will be key.
The platforms covered under the scheme will likely need to satisfy me that they are creating intuitive and discoverable user reporting functionality for when underage users are missed, and that they are actively preventing expected circumvention techniques, including via VPNs. Various teenage work-around attempts are inevitable, but the tools also exist for companies to help pre-empt these end-runs.
Finally, these companies will be compelled to measure and report on the efficacy of their efforts, so that we can gather further evidence and evaluate the success of these interventions.
Evidence and independent oversight
To that end, eSafety is dedicated to incorporating independent evidence and robust evaluation into our implementation, separate from the broader legislated independent review. This will not only ensure continuous improvement but also enable us to build an evidence base and provide a blueprint for other nations that may want to learn from our approach.
eSafety conducted a merit-based process to form an independent Academic Advisory Group to bring rigour and objectivity to helping us evaluate and gather vital evidence. More than half of the advisory group members are Australian, with the remainder drawn from academic centres of excellence and expertise across the globe.
I had the pleasure of convening the inaugural meeting of the Advisory Group earlier this month, and I believe its outputs will stand us in good stead to objectively measure both the benefits and unintended consequences of the implementation.
Codes, standards and the broader tech ecosystem
As important as social media age limits will be in helping delay children’s exposure to harmful design features, we are all well aware that they won’t be a silver bullet.
That is why eSafety continues to take a holistic approach to protecting, supporting and empowering Australian children online.
We remain committed to working with teachers, parents, carers and young people through our Youth Council, to not only ensure they are well-informed about risks, but also well-equipped to thrive online.
And this means building upon our current digital learning arsenal at esafety.gov.au by developing further co-designed digital literacy and resilience resources.
And of course, eSafety’s reporting schemes will still be there to provide assistance for Australians if things do go wrong online, whether through cyberbullying or deepfake image-based abuse.
Today, I am also announcing that through the Online Safety Act’s codes and standards framework, we will be moving to register three industry-prepared codes designed to limit children’s access to high-impact harmful material like pornography, violent content, and themes of suicide, self-harm and disordered eating.
As I mentioned earlier, through these codes, companies agree to apply safety measures up and down the technology stack – including age assurance protections. These provisions will serve as a bulwark and operate in concert with the new social media age limits, distributing more responsibility and accountability across eight sectors of the tech industry.
I have informed industry I plan to register the codes covering enterprise hosting services, internet carriage services such as telcos and other access providers, and search engines. I have concluded that each of these codes provides appropriate community safeguards.
I have sought additional safety commitments from industry on the remaining codes, including those dealing with app stores, device manufacturers, social media services and messaging and the broader categories of Relevant Electronic Services and Designated Internet Services.
It’s critical to ensure a layered safety approach which also places responsibility and accountability at critical chokepoints in the tech stack, including the app stores and at the device level – the physical gateways to the internet where kids sign up and first declare their ages.
I also asked industry to make changes across some of the codes to strengthen protections around AI companions and chatbots. I want to ensure these provide vital and robust protections.
Industry confirmed last week they would seek to make some of these changes, and I plan to make my final determination by the end of next month. If I am not satisfied these industry codes meet appropriate community safeguards, I will move to developing mandatory standards.
These will complement the codes and standards around illegal content and, importantly, all codes and standards now carry the same heavy penalties for breaches as the social media bill.
Conclusion: Placing the flags in the sand
What gives me great encouragement is the deep community support that exists across Australia for stronger online protections for children - both from harmful content but also from features deliberately designed to make social media addictive.
Australia is proud to be taking a national approach to setting age restrictions for social media services. And while we may be among the first, I’m confident we won’t be the last. We’re working closely with our international counterparts that share our goals.
In true Aussie spirit, we’re having a go - not because it’s easy, but because it matters. It’s a bold move, yes, but every big change begins with someone willing to take that first step.
Australia is building a culture of online safety, using multiple interventions - just as we have done so successfully on our beaches.
Because the internet, like the ocean, will continue to be a fixture in our lives. And our children, whether we like it or not, are already dipping their toes in the surf.
And as we approach commencement, there is a tremendous opportunity for parents to start having conversations now with their kids about the new rules.
We are publishing information on our website about how to start some of these more complex discussions with our children, to prepare them for change.
This will include resources for parents about understanding VPNs and other circumvention techniques as well as guides to show kids how they can download their archives and profile content before shutting down their accounts.
I believe that this can serve as a signal to all parents that they can take tangible action with their kids to “start the chat and delete the apps” - and to reinforce that the government is backing them.
So, let’s make sure the flags are up, the lifeguards are on duty, and the rips are clearly marked - because when it comes to keeping our kids safer online, we all have a role to play.
Thank you very much.