Periodic notice Report 2: A snapshot of eSafety's findings
This page covers the main findings from eSafety’s second periodic report about child sexual exploitation and abuse material and activity on online services.
This report covers the period from 1 January 2025 to 30 June 2025. It summarises information provided in response to periodic reporting notices given to Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp.
About this reporting series
These reports are produced to support transparency about compliance with the Australian Government’s Basic Online Safety Expectations (‘the Expectations’ or ‘BOSE’), which are intended to help keep Australians safe while using online services.
Among other things, the Expectations require service providers to take reasonable steps to minimise child sexual exploitation and abuse material and activity (CSEA) on their services.
A key step is detecting CSEA – either before it is uploaded or shared on a service, or immediately after it is provided on the service. Our regulatory guidance has further details.
On 22 July 2024 eSafety gave notices to eight service providers requiring them to report every 6 months for a two-year period on their compliance with the Expectations, focusing on child sexual exploitation and abuse material and activity. This includes grooming and the sexual extortion of children. Questions were also asked about the sexual extortion of adults, for context.
The service providers were Apple, Discord, Google, Meta, Microsoft, Skype, Snap and WhatsApp.
The findings from the first reporting period (15 July 2024 to 15 December 2024) provided a baseline for comparing later reports.
This second report covers the period from 1 January 2025 to 30 June 2025.
Note: As Skype (Consumer) stopped operating on 5 May 2025, Report 2 is the final report in this four-part series to include information relating to Skype.
Key findings from Report 2
eSafety has observed a number of online safety improvements across all service providers compared with their responses for the first reporting period.
This shows that providers are taking some steps to improve the safety of their services, and investing time and resources into improving or expanding the tools they use to detect CSEA.
Despite these improvements, eSafety considers that there are still significant safety gaps on these services. eSafety remains particularly concerned by the lack of action by industry to:
- proactively detect new CSEA images and videos
- stop live online CSEA from occurring in video calls
- take steps against the sexual extortion of children and adults.
eSafety calls on online service providers to address these safety gaps and take steps to stop harmful CSEA material and activity on their services.
For more detailed analysis, see our interactive transparency summary.
Ongoing safety gaps: more work to be done to proactively detect CSEA
Some providers are still not proactively detecting sexual extortion of adults and children on their services
Sexual extortion is a form of blackmail where someone threatens to share a nude or sexual image or video (which could be real or artificially fabricated) unless the person targeted gives in to their demands, usually for money or additional intimate material.
Sexual extortion of someone under the age of 18 is a form of CSEA activity.
Various tools, including ones designed for language analysis, are available to services to detect sexual extortion and stop this illegal activity. However, not all services were using these tools, and not all tools were calibrated to protect children as well as adults.
Apple did not use language analysis technology to detect sexual extortion on any of its services. Instead, it relied on nudity detection tools to warn users and encourage user reporting.
Of concern to eSafety, Discord was no longer using language analysis technology in direct messages to detect sexual extortion of children, after previously trialling these tools in 2024. Instead, it took measures to identify harmful communities rather than specific instances of sexual extortion.
Google did not use language analysis technology to detect sexual extortion on Google Chat, Google Meet or Google Messages (Google only used tools on YouTube).
Microsoft did not use language analysis technology to detect sexual extortion on Teams (Microsoft only used tools on Xbox).
Skype did not use language analysis technology to detect sexual extortion.
Snap did not use language analysis technology on Chats unless material was reported to them (Snap did use tools on other surfaces).
Some providers are still not proactively detecting new CSEA images and videos on their services
Tools can be deployed on services to detect the sharing of CSEA material when it is first created or shared, and before it has been verified and included in a database of ‘known’ images and videos. These tools help stop the spread of CSEA and alert providers to users who are engaging in this illegal activity.
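As a purely illustrative sketch of this kind of tooling (not any provider’s actual system; the function name, threshold and routing labels below are assumptions), new-material detection typically pairs an automated classifier with human review before confirmed material is added to the ‘known’ databases described later in this report:

```python
# Illustrative sketch only - not any provider's actual system.
# 'score_image' stands in for a proprietary machine-learning classifier;
# the threshold and routing labels are assumptions for illustration.

REVIEW_THRESHOLD = 0.8  # assumed cut-off for sending content to human review

def score_image(image_bytes: bytes) -> float:
    """Placeholder for a classifier that estimates the likelihood that an
    image is previously unseen (not yet 'known') abuse material."""
    return 0.0  # a real model would return a learned risk score

def triage_new_upload(image_bytes: bytes) -> str:
    """Route a newly uploaded image based on its classifier score."""
    if score_image(image_bytes) >= REVIEW_THRESHOLD:
        return "escalate_to_human_review"  # reviewers confirm and report
    return "allow"
```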
While most providers were using tools to detect new CSEA on their services, some were not.
Apple did not use tools to detect new CSEA on its services. Although Apple’s Communication Safety tool detected nudity on iMessage and FaceTime video messages, it did not proactively detect and report CSEA images or videos to Apple. Instead, Apple relied on user reports of CSEA material made after an end-user received a ‘sensitive content’ warning, which were then assessed by Apple.
Google did not use tools to detect new CSEA on Google Meet, Google Chat, Google Messages or Gmail. Google did use tools on Gemini, Google Drive (for accounts flagged as suspicious only) and YouTube.
Microsoft did not use tools to detect new CSEA material on OneDrive, Outlook or Teams (Microsoft did use tools on Xbox).
Skype used an internal proprietary program to identify new CSEA, but only on high-risk live video calls.
Snap did not use tools to detect new CSEA in Private Stories and Chats, unless that material was reported to Snap (Snap did use tools on other surfaces).
Some providers are still not proactively detecting live online CSEA in video calls on their services
Live online CSEA is the transmission or receipt of live acts of sexual exploitation or abuse of children via webcam or video to people anywhere in the world, whether or not in exchange for payment. This can occur on video calling services as well as livestreaming services.
Research and analysis by the Australian Institute of Criminology in 2023 identified online video calling services, as well as livestreaming services, as a central vector for live online CSEA. Research by the International Justice Mission in 2023 highlighted that live online CSEA typically occurs via video calls, which allows offenders to conceal their abuse.
All providers offering services where users can transmit live video, whether through one-to-many broadcasting or one-to-one video calling, should address the potential for that service to be used for live online CSEA.
Some providers were taking steps to detect live online CSEA, largely within public livestreams, using a number of different tools and technologies. Meta used tools to detect live online CSEA on Facebook and Instagram Live, and Google used tools on YouTube.
However, these providers did not use or develop such tools on the video calling services they operated, including Messenger (Meta) and Google Meet (Google).
Apple (FaceTime), Discord, Microsoft (Teams), Snap (Snapchat) and WhatsApp also did not use or develop any tools to proactively detect live online CSEA in video calls on their services.
While end-to-end encryption may make proactive detection more challenging, the lack of CSEA detection in video calls remains a serious safety gap. eSafety strongly encourages industry and the broader tech community to work together to develop technologies that can detect live online CSEA in video calls.
How some providers were detecting potential live online CSEA in video calls
Skype
In the second reporting period, Skype implemented an internal proprietary program to detect CSEA in video calls involving high-risk users in certain regions, including Australia. If CSEA imagery was detected during a call, the call’s video capabilities would be disabled, preventing the users from further distributing CSEA. Skype’s tool was used from February 2025 until Skype stopped operating in May 2025.
While this was a welcome innovation, eSafety had been calling on Skype to implement tools to detect live online CSEA since 2022.
Google
YouTube used a combination of machine learning language analysis, video classifiers, and human moderators.
Meta
Facebook and Instagram Live used a combination of tools including Meta’s own internal proprietary classifiers.
Improvements in safety practices
eSafety has observed a number of online safety improvements from providers since the first report.
More tools used to detect known CSEA images and videos
‘Known’ CSEA images and videos are those that have been previously assessed and confirmed to be CSEA. There are a variety of tools available to identify matches of these known images and videos. These tools play a vital role in preventing the ongoing re-victimisation of children and adults whose images and videos otherwise circulate endlessly online.
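As a simplified, hypothetical illustration of how matching against a database of verified hashes works: production systems use perceptual hashes (such as PhotoDNA or PDQ) so that resized or re-encoded copies still match, whereas the sketch below uses an exact cryptographic hash purely to keep the example short.

```python
# Simplified sketch of hash matching against a list of verified hashes.
# Real deployments use perceptual hashing (e.g. PhotoDNA, PDQ) so that
# edited or re-encoded copies still match; sha256 here is a stand-in.
import hashlib

KNOWN_HASHES: set[str] = set()  # would be supplied by NCMEC, IWF or similar

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's hash appears in the verified list."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES
```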
Microsoft
Microsoft reported that it expanded its use of hash matching tools for known CSEA images in Outlook email attachments to cover emails sent worldwide. In its previous report, Microsoft said it was only using hash matching on outbound email attachments sent from North America (which could reach Australian end-users).
Microsoft also expanded its use of hash matching tools in OneDrive to images and videos that were stored in OneDrive (from this reporting period) as well as shared in OneDrive. In its previous report, Microsoft was only using hash matching on images and videos shared in OneDrive.
Discord
Discord reported that it started using hash matching to detect known CSEA videos, although only in a limited form. Previously, Discord was only detecting CSEA videos in GIF form.
Detecting nudity in end-to-end-encrypted environments, enabling user reports of CSEA material
User reporting can prompt service providers to remove CSEA in a timely manner and report it to appropriate authorities. It is a critical safety intervention for all services, but especially for those that have end-to-end encryption where there are additional challenges in deploying proactive detection tools.
Apple
Apple’s Communication Safety feature was enabled by default for accounts of users who declared themselves to be under the age of 13, with plans to expand this to users under the age of 18 in the near future. Other users could opt in to the safety measure. The feature detected nudity in images and videos on iMessage and FaceTime video messages.
After receiving a Communication Safety warning of potential nudity, users could report content to a trusted adult or, in iMessage, directly to Apple. Similar functionality was available through the Sensitive Content Warning feature, which was designed for adult users. Both tools used on-device machine learning to analyse photos and videos. Material flagged as containing nudity was blurred before a user could view it. These classifiers operated alongside end-to-end encryption.
During the report period, Apple continued research and development work to expand these features to more of its services, including iCloud shared albums and FaceTime video calls.
Google
Google fully launched its Sensitive Content Warnings feature. The feature was on by default for users under 18 years (who could opt out) and optional for other users. It blurred incoming images that might contain nudity before they could be viewed, then prompted users with resources and options. When an image containing nudity was about to be shared or forwarded, it also reminded users of the risks of sending nude imagery.
Faster response times to remove CSEA material
The longer that CSEA is available on a service, the more likely it is to be accessed, seen or shared by multiple users, amplifying the impact on survivors and wider users.
The time taken to reach an outcome after CSEA is flagged is an important indicator of how effectively CSEA distribution is disrupted, so that immediate and ongoing harms to the victim-survivor are minimised.
Snapchat
The median time taken for Snap’s human moderators to reach an outcome after receiving a user report about potential CSEA material on Snapchat fell from 1 hour and 30 minutes to 11 minutes. The time that CSEA material remained available on Snapchat also fell from 3 hours and 21 minutes to 18 minutes.
Expanded industry information sharing to reduce CSEA across the sector
Consultation and cooperation across industry helps minimise CSEA material and activity online.
Programs such as the Technology Coalition’s Lantern promote cross-platform cooperation by facilitating the sharing of signals about activity and accounts that violate providers’ policies against CSEA. For example, signals can be email addresses, usernames, CSEA image hashes, or keywords used to groom children or to buy and sell child sexual abuse material (CSAM).
Similarly, the Take It Down program is a global hash-matching service operated by the US-based National Center for Missing and Exploited Children (NCMEC), which helps remove and prevent the distribution of online nude, partially nude, or sexually explicit photos and videos of children (or that were taken before a person turned 18).
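To illustrate the kind of information such programs exchange, the record below is a hypothetical schema only; neither Lantern’s nor Take It Down’s actual data formats are described in providers’ responses.

```python
# Hypothetical record for a cross-platform signal - illustrative only,
# not the Lantern programme's actual schema.
from dataclasses import dataclass
from enum import Enum

class SignalType(Enum):
    EMAIL_ADDRESS = "email_address"
    USERNAME = "username"
    IMAGE_HASH = "image_hash"
    KEYWORD = "keyword"

@dataclass
class SharedSignal:
    signal_type: SignalType
    value: str            # e.g. a hash digest or a username
    source_platform: str  # the provider that observed the violation
    policy_violated: str  # e.g. "CSEA - grooming"
```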
Google
Google joined Take It Down and began ingesting signals from Lantern.
Meta
Meta added more sources (NCMEC and Thorn) to its lists of terms and language indicators used to detect sexual extortion. These language lists provide the most current terms likely to be used when sexual extortion is taking place, so that a provider’s tools can detect it as it occurs.
Safety research and development
During the report period, most providers undertook research and development to create or deploy new tools, or evaluate and improve existing tools on their services.
Apple
Deploying Communication Safety on more services
Apple continued research and development of its Communication Safety feature (and Sensitive Content Warning) with the aim of expanding the deployment of those features to more of its services.
Discord
Improving the detection of sexual extortion communities
Discord invested in expanding its internally developed server structure model to detect communities engaged in sexual extortion and other harmful behaviour (including live online CSEA).
Preventing users from clicking into off-platform CSEA material
Discord expanded its image scanning to cover embedded images from shared URLs. That is, if a message contained a URL linking to an image, Discord made that image visible so a user did not need to visit the URL directly. These embedded images were proactively scanned, allowing Discord to identify and remove CSEA that was shared via URLs but not hosted on Discord.
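In general terms, this kind of scanning involves extracting links from a message, fetching the linked image and running it through the same checks applied to direct uploads. The sketch below is a generic, assumption-based illustration, not Discord’s implementation, and the scan_image helper is a placeholder.

```python
# Generic sketch of scanning images linked in messages - assumptions only.
import re
import urllib.request

URL_PATTERN = re.compile(r"https?://\S+")

def scan_image(image_bytes: bytes) -> bool:
    """Placeholder for hash matching / classification of a fetched image."""
    return False  # a real pipeline would flag policy-violating images

def scan_linked_images(message_text: str) -> list[str]:
    """Return the URLs in a message whose linked images are flagged."""
    flagged = []
    for url in URL_PATTERN.findall(message_text):
        with urllib.request.urlopen(url) as response:  # fetch the embedded image
            if scan_image(response.read()):
                flagged.append(url)
    return flagged
```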
However, Discord still did not use lists of known CSEA URLs to block these links from being posted on its service, despite joining the Internet Watch Foundation in 2023.
Google
Improving proactive detection of CSEA material
Google worked to develop hash matching functionality for video uploads attached directly to emails in Gmail and to messages in Google Chat.
Google also undertook research on the use of machine learning classifiers to detect new CSEA images and videos on Google Drive. The objective was to broaden the use of machine learning classifiers across Google services, where feasible.
Microsoft
Expanding the use of Community Sift
Microsoft researched how its tool, Community Sift, could best be implemented on Teams to detect grooming, with a focus on user messaging in Teams Communities. Community Sift is a content moderation platform that uses artificial intelligence to classify, filter and escalate user-generated content in real time.
Blocking URLs linking to known CSEA material
Microsoft undertook work to detect and block URLs known to host CSEA on Outlook and Teams, using URL lists managed by the Internet Watch Foundation.
Improving Trust and Safety capabilities
Microsoft implemented artificial intelligence (AI) tools on Xbox, which enabled proactive detection and mitigation of a wide range of harmful content (including CSEA) on the Xbox service. The AI tools worked to improve Xbox trust and safety workflows and increase the efficiency of removal of harmful content. Microsoft stated that since their launch, the AI tools had increased the speed at which harmful content was removed by 88%.
Snap
Improving the detection of known CSEA images
Snap worked on developing technology to enable better detection of known CSEA images uploaded to Snapchat after being edited using Snapchat’s creative tools.
Improving the detection of sexual extortion in communities
Snap developed a new, language-agnostic machine-learning model to detect signals of sexual extortion and similar activity in communities (not messages). Language-agnostic tools are not limited to a set number of languages; instead they rely on behavioural patterns and other non-linguistic signals.
About transparency reports
Under section 49(2) of the Online Safety Act 2021 (the Act), eSafety can give periodic notices requiring service providers to report at regular intervals on their compliance with the Expectations.
Under section 56(2) of the Act, eSafety can give non-periodic notices requiring service providers to report on their compliance with the Expectations.
Under section 20 of the Online Safety (Basic Online Safety Expectations) Determination, eSafety can request certain information from a service provider by written notice and a provider is expected to comply with the request within 30 days after the notice of request is given.
Published reports by eSafety
Read more responses to transparency notices on:
Last updated: 04/02/2026