
The 'Converge' blog series forms part of eSafety’s Tech Trends and Challenges program. Each blog in this series will consider the convergence of a technology and harm, building knowledge and contributing to critical conversations around emerging technologies and evolving harms.
This edition explores the intersection of generative AI and child sexual exploitation and abuse. It considers the prevalence and impact of this harm, and the role of eSafety, industry and the broader digital ecosystem in reducing it.
In this Converge blog:
- A convergence of innovation and exploitation
- The unseen threat: AI-generated CSEA is already here
- A double-edged sword in the fight against CSEA
- A growing crisis: Challenges in policing AI-generated abuse
- Proactive approach to combating AI-generated CSEA
- Toward a solution: Safety by Design and industry responsibility
- Ensuring a safer future for children in the AI era
- The time to act is now
- Resources
- Notes
A convergence of innovation and exploitation
Our world is a place where technology can help bring imagination to life – lifelike images, realistic voices, entire digital environments – all by tapping a device in the palm of our hand.
This is the promise of generative artificial intelligence (AI), a rapidly advancing technology that’s already reshaping all aspects of society. But what happens when these same powerful tools fall into the wrong hands?
Behind the potential of generative AI lies a disturbing reality: it is also being used by perpetrators to create child sexual exploitation and abuse (CSEA) material, weaponising the technology in ways most of us could never have imagined.
With AI’s ability to generate hyper-realistic content, perpetrators can now produce convincing synthetic images of abuse – making the task even harder for the ecosystem of stakeholders fighting this new wave of digital harm, including regulators, law enforcement and child safety advocates.
While responsibility must primarily sit with those who choose to perpetrate abuse, we cannot ignore how technology is weaponised. The tech industry must take responsibility for addressing the weaponisation of its products and platforms.
Our August 2023 position statement on generative AI examines the evolving landscape of this technology, looks at examples of its use and misuse, and offers answers to questions about online safety risks and opportunities.
As generative AI tools continue to become more widely available, we must confront the urgent question: how do we make sure the future of AI innovation protects children from harm and empowers them to engage safely online?
The unseen threat: AI-generated CSEA is already here
To understand the gravity of the situation, we must first recognise that the misuse of AI for generating CSEA material is not a hypothetical risk – it is a present and growing threat. It also has significant ramifications and impacts for victim-survivors in both online and offline environments.
In 2024, the National Center for Missing & Exploited Children reported an alarming 1,325% increase in reports involving AI-generated CSEA material – from 4,700 reports in 2023 to 67,000 reports in 2024.i eSafety investigators have also noted a 218% increase in reports of AI-generated CSEA material from 2023 to 2024 alone.ii
This convergence of technology and abuse highlights a dangerous misconception: that AI-generated CSEA material isn’t harmful because no ‘real’ child was involved.iii Yet the harm is real. Once created, this content can exist forever in the digital world, constantly resurfacing and causing psychological trauma to victims who may never even be aware of its existence until it’s too late.
For survivors of abuse, seeing synthetic versions of their suffering can reignite trauma and deepen their victimisation.iv Synthetic versions of CSEA also contribute to a culture that diminishes the seriousness of CSEA and its harmful impacts.
A double-edged sword in the fight against CSEA
However, AI is not only a source of risk – there are also opportunities to use it to detect and remove harmful material. For example, AI technology is already used to identify new CSEA material, supporting platforms to remove it and aiding law enforcement by triaging suspected CSEA material for human review.
‘My Pictures Matter’ is an example of a crowdsourcing initiative to create an ethically sourced dataset of ‘safe’ childhood photos for machine learning research to counter CSEA. To qualify as ‘safe’, an image must have been collected with full consent and must not contain child nudity, illegal activity, or depictions of violence or abuse.
Crowdsourced images will be used to train algorithms to recognise ‘safe’ images of children. Research will also consider how these technologies can be applied to AI that assesses whether digital files contain ‘unsafe’ imagery of children. The initiative is led by the AiLECS Lab (AI for Law Enforcement and Community Safety), a collaboration between the Australian Federal Police and Monash University.
Machine learning models can flag harmful content faster than human investigators, and they offer the potential for early intervention – for example, by prompting users with educational nudges when they attempt harmful searches.v
While we should recognise these advances, we must also confront the uncomfortable reality: the same technology that can protect children is being weaponised by perpetrators to exploit them.
Perpetrators are using AI to create and distribute true-to-life images of child abuse, sometimes based on images of victim-survivors, and sometimes using completely synthetic models of children.vi
A growing crisis: Challenges in policing AI-generated abuse
AI’s ability to create detailed and authentic-looking images means even innocuous photos of children can be manipulated to generate explicit material. Generative AI tools that can be used to produce explicit content – such as ‘nudify’ apps that digitally remove clothing – are proliferating online.vii They are widely available, quick and easy to use, and many are free. The accessibility of such tools has also made it easier for children to create inappropriate or harmful material depicting their peers.viii
Synthetic images of children are being used for extortion and exploitation,ix creating complex challenges for regulators, law enforcement and child protection agencies in determining whether an image is AI-generated, AI-modified or authentic. This distinction is critical: if an image is authentic, it may depict a child in need of rescue.
According to the Internet Watch Foundation, some AI-generated CSEA material is now ‘visually indistinguishable’ from authentic child abuse content.x This can lead to resources being misdirected, as investigators work to identify and protect children who do not exist, while real children in our communities suffer abuse or remain in harm’s way.
Research by the Stanford Cyber Policy Center also found concerning anecdotal evidence that some perpetrators intentionally add flaws to non-synthetic CSEA material, in the hope that law enforcement will believe it is AI-generated and purely synthetic, and misdirect their efforts accordingly.xi
Proactive approach to combating AI-generated CSEA
eSafety stays ahead of these emerging issues through our Tech Trends and Challenges work, which closely monitors the rapidly evolving landscape of AI technologies. By anticipating risks and challenges, this proactive approach helps guide our broader regulatory work.
The Online Safety Act 2021 (OSA) empowers eSafety to investigate and remove harmful material, including AI-generated CSEA, under defined categories of illegal and restricted content.
The OSA also gives eSafety the authority to issue legal transparency notices under the Basic Online Safety Expectations (BOSE, or the Expectations). The Expectations set out a range of foundational steps that service providers are expected to take to ensure safety for their users, including ensuring the safe use of certain features of a service – such as generative AI – as well as encrypted services, anonymous accounts, and recommender systems.xii
To date, eSafety has published five transparency reports. In some cases, these reports have revealed that service providers are not doing enough.
The OSA also provides for industry codes and standards, which set out mandatory requirements for key services, including those in the generative AI ecosystem. There are six such codes and two industry standards in operation.xiii Providers should be taking steps to comply. Where necessary, eSafety will use the full range of its enforcement powers to ensure compliance.
Toward a solution: Safety by Design and industry responsibility
The sheer scale and complexity of this issue demands a holistic, collaborative approach.
This means all industry actors in the digital ecosystem have an important role to play. In addition to AI companies playing their part, search engine providers should de-index platforms explicitly created to generate CSEA material so they do not appear in search results, and app stores should remove apps explicitly designed to ‘nudify’ images.
Social media and messaging platforms should prohibit, detect, report, and remove AI-generated CSEA material on their services, and enforce their terms of service to prevent the advertising of ‘nudify’ services.
Technology companies must take a proactive role in embedding safety into the very foundation of their AI systems. eSafety’s Safety by Design initiative – which puts user protection at the heart of all stages of the product and service lifecycle – offers a clear pathway forward. By integrating child safety measures at every stage of the AI lifecycle, we can significantly reduce the risk of abuse.
Critical steps companies can take include:
- Responsible data curation: Companies should make sure training datasets are free of CSEA material. A study by the Stanford Internet Observatory revealed that some AI platforms had used a dataset containing known CSEA content, underscoring the need for rigorous data vetting (a simplified sketch of hash-based screening follows this list). Content relating to children must also be separated from adult sexual content in training data to limit the ability of models to generate CSEA material.
- Content transparency mechanisms, including labelling and source watermarking: Embedding visible or invisible markers into AI-assisted, AI-enhanced or AI-generated content can help law enforcement differentiate between synthetic and real CSEA material, enabling more effective resource allocation (see the second sketch after this list).
- Strict and enforceable usage policies: Platforms must prohibit the generation of CSEA content and enforce these rules. This includes providing clear user reporting mechanisms and, where possible, implementing AI tools that detect and prevent the creation of such harmful material.
- Transparency and accountability: AI developers should work closely with regulators, sharing insights into how their models operate and what safeguards are in place to prevent misuse. Users and researchers should also have access to this information.
- Reviewing and assessing products regularly: Addressing the risk of CSEA on a platform needs to be an ongoing and evolving process – not a one-off, check-box review.
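To make the data curation step more concrete, the minimal Python sketch below screens a folder of training images against a list of known-bad file hashes before ingestion. It is an illustration only: the file paths and the exact-match SHA-256 approach are assumptions for the example, and production vetting pipelines typically rely on perceptual hashing and vetted industry hash lists (which tolerate re-encoding and resizing) rather than simple cryptographic digests.

```python
import hashlib
from pathlib import Path

def load_blocklist(blocklist_path: str) -> set[str]:
    """Load known-bad SHA-256 digests, one hex string per line (hypothetical file format)."""
    lines = Path(blocklist_path).read_text().splitlines()
    return {line.strip().lower() for line in lines if line.strip()}

def screen_dataset(image_dir: str, blocklist: set[str]) -> list[Path]:
    """Return only the files whose SHA-256 digests are not on the blocklist."""
    kept = []
    for path in sorted(Path(image_dir).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest not in blocklist:
            kept.append(path)
    return kept

# Example usage (hypothetical paths):
# safe_files = screen_dataset("training_images", load_blocklist("known_bad_hashes.txt"))
```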
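Similarly, the content-labelling step can be sketched in miniature. The example below uses the Pillow imaging library to attach a simple, machine-readable provenance label to a generated PNG and read it back; the key names are hypothetical. In practice, plain metadata like this is easily stripped, so providers are moving toward signed provenance standards such as C2PA and robust watermarks that survive cropping and re-encoding.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_output(src_path: str, dst_path: str, generator_name: str) -> None:
    """Embed a simple provenance label in PNG text chunks (hypothetical key names)."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", generator_name)
    image.save(dst_path, pnginfo=metadata)

def read_label(path: str) -> dict:
    """Return any text-chunk metadata found in a PNG file."""
    with Image.open(path) as image:
        return dict(getattr(image, "text", {}) or {})

# Example usage (hypothetical file names):
# label_output("output.png", "output_labelled.png", "example-model-v1")
# print(read_label("output_labelled.png"))
```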
These steps are also relevant to obligations under the industry codes and standards made under the OSA, with the first, second, third and fifth steps closely aligned with specific obligations that apply to high-impact generative AI services under the ‘Designated Internet Services’ Standard. They are also highly relevant to what is expected of services under the Basic Online Safety Expectations.
Two industry standards drafted by eSafety came into effect on 22 December 2024, covering unlawful and seriously harmful material such as CSEA. The standards place a range of obligations on services to detect and remove CSEA material, with specific obligations on generative AI services that pose a risk of generating this material – such as nudify services – as well as on the model distribution platforms that enable access to the underlying models.
eSafety can assess and investigate a service provider’s compliance with the relevant standards and is empowered with a range of enforcement options in cases of non-compliance. eSafety has said enforcement will begin after June 2025, following an implementation phase for industry.
Regulatory guidance for the industry codes and standards can be found here: Regulatory schemes | eSafety Commissioner.
Ensuring a safer future for children in the AI era
As generative AI continues to evolve, so too will the methods used by bad actors to exploit it. While regulators strive to keep pace with these developments, we cannot afford to be complacent. Protecting and empowering children and young people online must remain at the forefront of discussions about AI ethics, development, and deployment.
Looking ahead, we must expect more sophisticated forms of exploitation – including combinations of generated audio and video – to become more common. But with foresight and concerted action, we can shape a safer digital world – one where innovation and responsibility go hand in hand.
To achieve this, we need a united effort from across the digital ecosystem: the companies that develop and deploy AI, regulators, policymakers, academia, law enforcement, educators, parents and carers. Equally, we must elevate the voices of children and young people in these discussions and embed their lived experiences in policy.
Promoting a strengths-based approach is critical to this work. This must be underpinned by education to support the development of critical digital literacy, including AI literacy.
The time to act is now
The rise of generative AI presents both an extraordinary opportunity and an unprecedented risk. If left unchecked, the misuse of this technology will continue to have devastating consequences for children.
However, with collaboration, transparency and accountability, and by embedding safety into every layer of AI’s development, we can harness its potential to create a safer digital future.
Whether you’re at the forefront of AI innovation, enforcing policy, or safeguarding children, the message is clear: we must not wait for the next crisis to spur us into action.
Resources
Report incidents of CSEA material located online to eSafety via our report form. Persons acting inappropriately with a child online, or seeking a child for sex, can be reported to the Australian Centre to Counter Child Exploitation. If you believe a child is in immediate danger or at risk, call 000 or your local police station.
If you are experiencing online harm or abuse, whether or not generative AI is involved, you can make a report to eSafety.
You can also speak to a mental health professional from an expert counselling and support service.
Notes
i. National Center for Missing & Exploited Children (NCMEC), CyberTipline Report 2024, 2025, https://www.missingkids.org/gethelpnow/cybertipline/cybertiplinedata, page 11.
ii. eSafety Investigations recorded a 218% increase in reports of AI-generated CSEA material from the 2023 calendar year to the 2024 calendar year.
iii. Thorn and All Tech Is Human, Safety by Design for Generative AI: Preventing Child Sexual Abuse, July 2024, https://info.thorn.org/hubfs/thorn-safety-by-design-for-generative-AI.pdf, page 5.
iv. Grossman S, Pfefferkorn R, Liu S, Stanford Cyber Policy Center, AI-generated child sexual abuse material: Insights from educators, platforms, law enforcement, legislators, and victims, May 2025, https://doi.org/10.25740/mn692xc5736, page 15.
v. Scanlan J, Prichard J, Hall L, Watters P, Wortley R, reThink Chatbot Evaluation, 2024, https://figshare.utas.edu.au/articles/report/reThink_Chatbot_Evaluation/25320859.
vi. Internet Watch Foundation, How AI is Being Abused to Create Child Sexual Abuse Imagery, October 2023, https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/, page 23.
vii. Koltai K, Behind a Secretive Global Network of Non-Consensual Deepfake Pornography, Bellingcat, 23 February 2024, https://www.bellingcat.com/news/2024/02/23/behind-a-secretive-global-network-of-non-consensual-deepfake-pornography/.
viii. Center for Democracy & Technology, Report – In Deep Trouble: Surfacing Tech-Powered Sexual Harassment in K-12 Schools, 2024, https://cdt.org/insights/report-in-deep-trouble-surfacing-tech-powered-sexual-harassment-in-k-12-schools/.
ix. Raffile P, Goldenberg A, McCann C and Finkelstein J, A Digital Pandemic: Uncovering the Role of ‘Yahoo Boys’ in the Surge of Social Media-Enabled Financial Sextortion Targeting Minors, January 2024, https://networkcontagion.us/reports/yahoo-boys/, page 19; National Center for Missing & Exploited Children, CyberTipline Report 2023, 2024, https://www.missingkids.org/cybertiplinedata, page 5.
x. Internet Watch Foundation, How AI is Being Abused to Create Child Sexual Abuse Imagery, October 2023, https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/, page 6.
xi. Grossman S, Pfefferkorn R, Liu S, Stanford Cyber Policy Center, AI-generated child sexual abuse material: Insights from educators, platforms, law enforcement, legislators, and victims, May 2025, https://doi.org/10.25740/mn692xc5736, page 38.
xii. Other foundational steps that service providers are expected to take include minimising the provision of unlawful and harmful material and activity, such as child sexual abuse, grooming, and sexual extortion.
xiii. The industry codes apply to social media services, app distribution services, hosting services, internet carriage services, equipment providers, and search engine services. Two industry standards for designated internet services and relevant electronic services came into effect on 22 December 2024.