Deepfake damage in schools: How AI-generated abuse is disrupting students, families and school communities

Understanding the emerging risks of deepfake technology – and how schools, parents, and young people should respond.

A growing concern for Australian schools

Deepfake technology – which lets users manipulate images and video via artificial intelligence – is no longer a future concern. It’s a current crisis affecting school communities across Australia. These tools, especially ‘nudify’ apps, can be exploited to generate non-consensual synthetic explicit images, including of children. They are increasingly being used by young people, often with devastating impact.

While the technology itself is free, fast, and easy to use, the harm it causes is deeply personal. 

Students have found their images used in fake nude photos or videos. Others have received AI-generated explicit content depicting their peers. Entire school communities have been thrown into turmoil – with families, educators, and students unsure how to respond.

In this advisory, we explain how these harms are happening, what actions schools and families can take, and where those affected can turn for help.

How does deepfake abuse happen?

‘Nudify’ apps and AI tools

Many deepfakes targeting young people are created with AI apps that appear to remove clothing from photos – turning everyday images into explicit material. These images can appear shockingly realistic, especially when viewed on small screens and shared quickly.

A person can be targeted even if they’ve never shared an explicit image. The image could be a regular selfie, a school photo, or a social media post. When misused, these apps can turn ordinary images into tools of humiliation, coercion, or blackmail.

Nudify apps are widely accessible, free or low cost, and often require no technical skill. They are increasingly marketed to younger users via social platforms and forums. This makes them not only dangerous but also deceptively easy to misuse – turning impulsive or careless decisions into serious harm.

Fake images and videos    

Once created, deepfake images and videos can be shared as supposedly ‘harmless’ fun, as a form of bullying, or as part of deliberate image-based abuse.

Even when people know the content is fake, the emotional and psychological harm is very real – especially for teens who may feel exposed or powerless.

Some students are tricked or threatened using these fakes. Others are sent AI-generated content of their peers, causing distress, confusion, and gossip. Still others experiment without understanding the serious legal and personal consequences.

Who is affected?

Targets
Any student or school staff member can be targeted, regardless of what they post online or how they behave. All it takes is one image, the wrong app, and someone who recklessly or knowingly causes harm.

Targets often experience intense emotional fallout: humiliation, fear, anger, confusion. Some are scared to speak up, fearing they won’t be believed or will be blamed for what happened. They often also feel ashamed, even though they’ve done nothing wrong.

In some cases, the mere threat of a deepfake – even if no fake exists – is enough to cause distress or manipulate behaviour.

Bystanders
Young people may also witness this behaviour – in group chats, on social media, or via private messages. They may feel unsure whether to report it, what to say, or how to support their friends. They may also worry they’ll be next.

Creators
Some young people use these tools as a prank or experiment without fully understanding the impact. They may not realise that creating or sharing fake nudes, even synthetic ones, can be a serious criminal offence in some states and territories.

Why don’t young people speak up?

There are many reasons why a student might stay silent:

  • Not knowing where to turn for help: being unaware that eSafety can intervene.
  • Fear of public shame: being targeted in group chats or school gossip.
  • Disbelief: worrying no one will accept the image is fake.
  • Blame: thinking parents and carers will be angry or remove their devices.
  • Hopelessness: believing nothing will change, or that speaking up could make things worse.
  • Fear of overreaction: worrying adults will go to police, school authorities or even media before they are ready.

Others worry they’ll be told to get off social media entirely, which can feel like a punishment rather than protection.

The result? Many suffer in silence – just when they need support the most.

How to respond: advice for parents and carers

Start early and stay open

Talk regularly about the harms of deepfakes, and explain that creating them may be a crime. Keep your tone supportive, not judgemental. If something ever happens, your child will be more likely to come to you.

Use supportive language

If your child is affected – as a target, bystander, or creator – your first words matter. Stay calm.

Try language such as ‘I’m glad you told me’ and ‘Let’s figure out what to do together.’

If your child is a target

  • Help them collect evidence – screenshots, links, usernames (without saving or sharing explicit content).
  • Do not view, collect, print, share, or store explicit material. Instead, make a written description and note where it is located.
  • Support them to report the incident – to the platform, the school, local police or eSafety.
  • Check on their wellbeing and ask if they’d like professional support.
  • Reassure them: they are not alone and help is available.

If your child receives a deepfake

  • Praise them for not sharing it.
  • Talk about empathy and digital responsibility.
  • Reinforce that speaking up was the right thing to do.

If your child created or shared a deepfake

  • Stay calm and listen.
  • Explain the serious emotional and legal consequences.
  • Encourage accountability – deleting the content, apologising, or reporting it so platforms can remove any copies.
  • Talk about respect, consent, and digital values.
  • Set clear expectations for future behaviour – and follow through consistently.

Advice for schools

Set expectations early

Build on existing digital literacy, respectful relationships, and consent education by including conversations about deepfakes and emerging technologies. Make sure this includes clear guidance on the use of AI tools and image-based abuse.

Talk to students about how consent applies in online spaces – and how technology can be used to fake or manipulate images. Help them understand that even when content is fabricated, it can still cause real harm.

Have a response plan

Use eSafety’s new guide for responding to deepfakes, part of our Toolkit for Schools, to help manage incidents. This resource is designed to work alongside your existing school and education sector policies and procedures. 

Prioritise the wellbeing of affected young people and staff above all other considerations.

Establish clear steps for responding to deepfake incidents. Make sure all staff know how to support students, record what has happened, and work with families, police, and eSafety.

Ensure your school’s wellbeing and leadership teams are prepared and confident in handling these situations with care, consistency, and sensitivity.

Support student wellbeing

Recognise that these incidents can be traumatic. Make sure students who are affected have access to safe reporting channels and wellbeing support.

Engage the school community

Communicate with parents and carers about deepfake risks and school policies. Use newsletters, assemblies, or parent forums to raise awareness and reinforce prevention.

Encourage students to be ‘upstanders’ 

Help them know how to report abuse, support peers, and model respectful behaviour online.

What does the law say?

Understanding the law can help when you’re educating young people or supporting someone who has experienced image-based abuse.

However, legal action is only one part of the solution; it cannot address everything on its own. 

In some states, it is now a criminal offence to create or share AI-generated explicit content without consent, even if the material is synthetic.

For example, in South Australia, students as young as 16 can be prosecuted for creating or sharing humiliating or degrading deepfakes.

Where to report

  • To your school – if the incident involves other students.
  • To local police – especially if a crime may have occurred.
  • To eSafety – using the Report Abuse portal. Include the police event number if available.
  • Never share or save explicit content – instead, make a written record of links (URLs), account names, and descriptions for evidence.

Prevention is protection

The best protection is early education, open communication, and a strong support network. Young people need to know what deepfakes are, why they are harmful, and where to turn if something goes wrong.

As parents, educators, and community members, we can’t stop the technology – but we can give young people the skills and knowledge to navigate it safely, ethically, and with confidence.