
eSafety Strategy 2022-25

The eSafety Strategy outlines how we will prioritise our activities to help Australians of all ages enjoy safer and more positive experiences online through to 2025.

It explains:

  • our vision, mission and values
  • the role of eSafety
  • our enforcement powers under the Online Safety Act
  • our strategic priorities
  • our regulatory posture
  • our strategic goals and how we plan to achieve them.

The eSafety Strategy is given context by our future-focused strategic outlook, which outlines our likely operating environment over the next 5 to 10 years. Our success as a regulator and educator depends on our ability to identify key trends and issues, along with the potential impacts of emerging technologies. This understanding is essential to our ability to prevent and reduce the impact of online harms.


Strategic outlook

This strategic outlook provides a long-term view of how eSafety intends to counter online harms within our operating environment.

It includes insights into the trends, issues and technological developments likely to impact online safety and technology regulation over the 5 to 10 years from 2022.

We have looked at the operating environment through six lenses:

  • technology features
  • international developments in policies and regulation
  • evolving harms
  • harm prevention initiatives
  • new and emerging technology
  • opportunities.

There is no question that rapid technological developments and evolving ‘Web 3.0’ decentralisation mean that eSafety needs to remain nimble and proactive to harness potential opportunities and avert looming threats.

As such, this strategic outlook will be reviewed and updated regularly, to ensure we continue to anticipate issues and achieve the goals set out in the eSafety Strategy.

Technology features

Platform or service design features intended for one purpose may have significant but unintended consequences once deployed, contributing to online harms. eSafety will consider how existing and new technologies may cause mental and physical harms. 

For example, in the context of child sexual exploitation and abuse, design vulnerabilities are generating a significant increase in these devastating crimes. In particular, livestreaming features, which may be end-to-end encrypted, are being exploited to facilitate on-demand child sexual abuse. 

More commonly, a range of mainstream platforms and services allow adults and children to co-mingle, without age or identity verification. This enables grooming by sexual predators, sexual extortion and other forms of social engineering used to manipulate children. 

Through our regulatory reporting schemes, eSafety is seeing how this causes serious harm. We are therefore monitoring and assessing platform and service design features for these unintended consequences.

eSafety’s position is that platforms and services should be designed with safety in mind at the outset and not as an afterthought or after the harms have occurred. In addition to end-to-end encryption and livestreaming features, we are focusing our attention on algorithms, artificial intelligence, anonymity, and identity shielding, as well as looking further into the future at the risks of virtual and augmented reality (VR/AR).

eSafety will also assess how the use of cryptocurrency and blockchain technology may make child sexual exploitation and abuse more difficult to investigate.

Algorithms and artificial intelligence

Artificial intelligence (AI), machine learning and algorithms can have tremendous benefits for businesses and for individuals. For example, online services can use algorithms in advanced content moderation systems and detection tools to reduce online harms, by helping to identify and filter out seriously harmful content at scale.
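As a simplified illustration of how such detection tools can flag known harmful material at scale, the sketch below checks an upload against a list of file hashes. The hash list, function names and exact-match approach are illustrative assumptions only; production systems typically use perceptual hashing (such as PhotoDNA) so that resized or re-encoded copies still match.

```python
import hashlib

# Hypothetical set of hashes of known harmful files, as might be
# sourced from an industry hash-sharing database (illustrative only).
KNOWN_HARMFUL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def file_sha256(data: bytes) -> str:
    """Return the SHA-256 hex digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()

def should_block(data: bytes) -> bool:
    """Flag an upload whose exact hash matches the known-harmful list.

    Exact hashing is shown only to keep the sketch self-contained;
    real deployments rely on perceptual hashes and ML classifiers.
    """
    return file_sha256(data) in KNOWN_HARMFUL_HASHES

if __name__ == "__main__":
    print(should_block(b"test"))  # True: this sample hashes to the listed digest
```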

However, algorithms can also create or contribute to a variety of harms. This can result from intentional efforts to make services ‘sticky’ so people stay engaged on a specific platform, or it may be an unintended consequence of human input – whether through unconscious bias or a curious click and subsequent algorithm-led recommendations leading a person to increasingly extreme and harmful content.

The Online Safety Act 2021, through the Basic Online Safety Expectations determination, gives eSafety an avenue to drive greater algorithmic transparency and accountability. It enables us to require reporting from providers about how their algorithms may be contributing to or reducing the impact of online harms. 

Algorithmic transparency and regulation through technical audits are also being considered internationally, and eSafety is working with specialist agencies to assess our own technical capability for building this investigative and regulatory capacity.

eSafety aims to remain at the vanguard of online harms regulation. We will continue to keep a watching brief on these developments and involve ourselves in conversations across borders and across disciplines to promote a harms minimisation approach to the application of algorithms and AI. 

We will also work closely with our Australian counterparts, including the other members of the Digital Platform Regulators forum – where we join forces with the Australian Communications and Media Authority, the Australian Competition and Consumer Commission and the Office of the Australian Information Commissioner – to enhance our regulatory capabilities, including in relation to the assessment of existing and emerging technology. 

Anonymity and identity shielding

Anonymity and identity shielding allow an internet user to hide or disguise their identifying information online. While this safeguards privacy and can protect people from violence or abuse, it can also make it difficult to hold perpetrators accountable for harm they cause online.

Many people see anonymous communication as a cornerstone of freedom of expression online. The fact that an internet user is not immediately identifiable to others is not the problem. The challenge is that real or perceived anonymity may contribute to a person’s willingness and ability to abuse others – and to do so without being stopped or punished.

eSafety recognises the need for a balanced approach to enable people to access the benefits from a degree of anonymity while reducing the risk of them abusing others. We would like to see platforms and services do more to deter abuse, empower victims to prevent and deal with abuse, and make sure users who perpetrate serious or ongoing abuse are held responsible for their actions. 

The Act sets out the Australian Government’s expectations for how online services keep users safe. One focus area is preventing the misuse of anonymous accounts at the systemic level, through the Basic Online Safety Expectations.

Our investigative schemes show that anonymity and identity shielding are commonly used to evade consequences for abusive behaviour, and the Act has provided eSafety with associated information-gathering powers. We plan to use this regulatory tool to inform our investigations and hold perpetrators to account, while also making sure companies meet their responsibilities when their policies are violated.

Cryptocurrency and child sexual exploitation and abuse

Cryptocurrencies have existed in their modern form since 2009. The total market capitalisation of global cryptocurrency peaked at about USD 3 trillion in November 2021. As of July 2022, it stands at USD 866 billion.  

Cryptocurrency has become the preferred payment option for the trade of online child sexual abuse material, as credit card facilities have been almost entirely blocked from allowing such transactions. 

Cryptocurrency may seem to be anonymous and untraceable. However, the takedown of two commercial child sexual exploitation material sites – Welcome to Video and Dark Scandals – demonstrates how examining currency addresses, analysing the blockchain and identifying indicative payment patterns (such as value and time of day) can allow law enforcement agencies to identify individuals engaged in illegal practices.
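To make the pattern-based analysis described above concrete, the sketch below flags transactions whose value and time of day fit an indicative profile. The thresholds, addresses and heuristics are invented for the example; real investigations combine many signals across the whole blockchain, together with exchange records.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Transaction:
    address: str         # pseudonymous blockchain address
    value_btc: float     # transferred amount
    timestamp: datetime  # block time of the transfer

def is_indicative(tx: Transaction) -> bool:
    """Flag transactions matching a hypothetical indicative profile.

    Illustrative heuristics only: small payment values combined with
    late-night activity, standing in for the value and time-of-day
    patterns mentioned above.
    """
    small_value = 0.001 <= tx.value_btc <= 0.05
    late_night = tx.timestamp.hour < 5
    return small_value and late_night

txs = [
    Transaction("bc1q-example-a", 0.01, datetime(2022, 7, 1, 2, 30)),
    Transaction("bc1q-example-b", 1.50, datetime(2022, 7, 1, 14, 0)),
]
flagged = [tx for tx in txs if is_indicative(tx)]
print(f"{len(flagged)} of {len(txs)} transactions flagged for review")
```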

Through our future-focused tech trends work and engagement with partners, eSafety will continue to review and assess cryptocurrency and how people use it to fund illegal content and impact online safety.

Blockchain and child sexual exploitation and abuse

Blockchain is an ‘immutable’ digital ledger that allows people to record transactions and track assets without the data ever being altered or deleted. The technology emerged with Bitcoin in 2009 and has often been touted as having the potential to revolutionise industries and be as disruptive as the formation of the internet. However, more than 10 years later, the practical use of blockchains is limited and concentrated mainly in the financial services sector.
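The ‘immutable’ property can be illustrated with a minimal hash chain: each block commits to the hash of the previous block, so altering any earlier entry invalidates everything recorded after it. This is a simplified sketch, not the format of any specific blockchain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the chain's tail."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    """Check every block still points at the true hash of its predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "first entry")
append_block(chain, "second entry")
print(verify(chain))           # True: chain is intact
chain[0]["data"] = "tampered"  # editing history breaks every later link
print(verify(chain))           # False: tampering is detectable
```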

As well as recording digital events such as cryptocurrency transfers, blockchains permanently record data uploaded to the ledger, such as short messages and pictures. This presents the potential for illegal content, such as child sexual exploitation material, to be stored in the blockchain. Since it’s difficult by design to remove content once it’s uploaded, preliminary examinations of the issue and potential solutions have focused on measures to counter content insertion. This includes applying fees to insertions of large transactions, which are likely to contain image or video uploads.   
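One way to read the fee-based countermeasure mentioned above: if fees scale with transaction size, large payloads (the ones likely to carry images or video) become expensive to insert. The fee schedule below is entirely hypothetical and exists only to show the shape of such a deterrent.

```python
BASE_FEE = 0.0001               # hypothetical flat fee per transaction
PER_BYTE_FEE = 0.0000005        # hypothetical fee per byte of payload
LARGE_PAYLOAD_BYTES = 10_000    # threshold above which data likely holds media
LARGE_PAYLOAD_MULTIPLIER = 50   # punitive multiplier for bulky insertions

def insertion_fee(payload_bytes: int) -> float:
    """Fee that rises steeply for payloads big enough to hold media."""
    fee = BASE_FEE + payload_bytes * PER_BYTE_FEE
    if payload_bytes > LARGE_PAYLOAD_BYTES:
        fee *= LARGE_PAYLOAD_MULTIPLIER
    return fee

print(insertion_fee(200))       # short text message: negligible fee
print(insertion_fee(500_000))   # image-sized payload: punitive fee
```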

eSafety will consider how the permanent recording of digital events may require proportionate regulatory responses. 

International developments in policies and regulation

As Australia’s online safety regulator, eSafety focuses on protecting Australians from online harms. As the internet has no borders and the majority of our regulatory targets are domiciled overseas, we need to understand the international landscape and get involved in global discussions. 

While eSafety has been working in the online safety space since 2015, elsewhere there is now growing global recognition that self-regulation by online platforms has failed to protect individuals and communities from online harms. 

In response, more governments are legislating to set the parameters within which tech companies operate. Europe, Canada, Singapore, and the UK are progressing legislation to regulate digital platforms’ systems and processes and to transfer responsibility for user safety back to the tech sector. This includes requirements to undertake risk assessments, develop mitigations and change practices to operate safer services, or face large civil penalties. 

Online safety – specifically child online protection – is now a main feature in many UN and multilateral forums. The focus is on building capacity and supporting developing countries to introduce legislation, develop national strategies and implement prevention programs. 

This pivot towards online safety regulation has led to increased demand on eSafety to provide guidance, help build international capability and make sure regulatory approaches are consistent.

Over the next 5 to 10 years, we expect to see regulatory regimes introduced or updated across Australia, G7 nations and throughout the Indo-Pacific region. 

We welcome progress among like-minded countries, but progress comes with challenges. Countries are already grappling with how to keep ahead of tech trends and challenges, such as encryption, immersive technologies, quantum computing, cryptocurrency and decentralisation. 

Tech regulation has implications for a range of digital rights, including freedom of expression, privacy, safety, dignity, equality, and anti-discrimination. It is further complicated by authoritarian regimes increasingly using online safety regulation to pursue arbitrary or unlawful politically motivated content censorship, often under the pretext of national security. Democratic governments have a responsibility to show how to govern tech platforms in a manner that both minimises harm and prioritises and reinforces core democratic principles and human rights. However, no one country can set the precedent on its own.

Securing coordination across jurisdictions to avoid fragmentation of online safety legislation and governance arrangements will remain a priority. So too will increased international collaboration, where regulators can work side by side, coordinating and sharing the regulatory challenge where technologies impact our citizens across borders.

While this will be challenging, it could lead to greater enforcement opportunities, where aligned regulators can – subject to relevant laws – share intelligence and insights and work together to address systemic failings in systems and processes.

We also expect to see international standards emerge over this period, with regional and multilateral concerns influencing their development. 

eSafety will work with international partners to make sure the regulation of online platforms is based on democratic principles and human rights, and promotes cooperation, collaboration, and harmonisation across jurisdictions.  

In doing so, eSafety can capitalise on the global shift towards platform regulation and position itself as a global leader in online safety policy and practice.

Evolving harms

Perpetrators of family and domestic abuse increasingly use technology to coerce, control and harass their current or former partners. This is known as technology-facilitated abuse, or tech abuse. Often it is an extension of domestic and family violence (DFV) that targets women and children. 

The most common forms of tech abuse are monitoring and stalking, psychological abuse, and physical threats. Examples include:

  • tracking of movement through mobile phone features and applications such as location sharing
  • non-consensual monitoring of emails, social media and mobile communications
  • humiliation or punishment through the sharing of intimate images online
  • direct messages of abuse or threats of violence or humiliation.

This type of abuse causes real harm, impacting a person’s mental and physical health, relationships, and everyday activities. 

In response, in 2016 eSafety developed the world’s only specialised government program supporting women experiencing tech abuse. In the future, we will continue to draw on the latest evidence, strategies and techniques to support people experiencing this form of abuse.

eSafety will also continue to deliver exceptional training programs to the domestic and family violence sector, as well as to allied health professionals, law enforcement agencies and the court system.

Harm prevention initiatives

The internet was created by adults for adults, but it is young people who inhabit the online world most fully and now shoulder the primary burden of online risks. We need to flip the burden of responsibility so that large tech companies are expected to lead on online safety and embed safety features into the design and development of their products.

Age assurance processes are an important first step, informing services that a prospective or current user is likely to be a child. This allows the services to take reasonable steps to reduce a range of risks and harms and create a safer online environment for that child. 

Age assurance is an umbrella term for measures which determine the age or age-range of a user. At one end of the spectrum there are age estimation measures which assess age with a relatively low level of certainty, such as asking a person to declare their date of birth. At the other end, there is a range of more robust age verification measures which determine age to a much higher level of confidence.
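To make this spectrum concrete, the sketch below maps some age-assurance methods to a rough confidence level and checks whether a method meets a service’s required level. The method names and tiers are illustrative assumptions, not eSafety’s taxonomy or any mandated scheme.

```python
from enum import Enum

class Confidence(Enum):
    LOW = 1     # age estimation, e.g. a self-declared date of birth
    MEDIUM = 2  # e.g. facial age estimation within a tolerance band
    HIGH = 3    # age verification against an authoritative record

# Hypothetical mapping of assurance methods to confidence levels.
METHOD_CONFIDENCE = {
    "self_declaration": Confidence.LOW,
    "facial_age_estimation": Confidence.MEDIUM,
    "document_check": Confidence.HIGH,
}

def meets_requirement(method: str, required: Confidence) -> bool:
    """Check whether a method provides at least the required confidence."""
    return METHOD_CONFIDENCE[method].value >= required.value

# A higher-risk service might demand HIGH confidence before granting access.
print(meets_requirement("self_declaration", Confidence.HIGH))  # False
print(meets_requirement("document_check", Confidence.HIGH))    # True
```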

eSafety has started to explore the range of these measures, at the Australian Government’s request, by developing a roadmap for a mandatory age verification regime for online pornography. 

Age assurance measures and subsequent reasonable steps for keeping children safe – such as requiring safety and privacy settings by default – are also key elements of the Basic Online Safety Expectations.
 

New and emerging technology

Immersive technologies and the metaverse

Immersive technologies allow humans to experience and interact with digital content in three dimensions, in a way that looks, sounds and feels almost real – often described as ‘hyper-realistic’. These technologies include augmented reality (AR), virtual reality (VR), mixed reality (MR) and haptics, which stimulate your sense of touch. Although haptic suits are priced out of reach for most of today’s consumers, they promise a full-body sensory experience.

Some immersive technologies work by blending the virtual and actual worlds. Others create a highly interactive sensory experience, engaging you through touch, sound, and visual content. By providing hyper-realistic experiences, immersive technologies could increase the impact of negative interactions, and lead to a rise in online assaults and other forms of sexualised abuse. 

eSafety is working with industry, users and advocacy groups to identify risks, raise awareness, embed protections, and provide complaint pathways to make sure everyone can enjoy the benefits of immersive technologies.

Decentralisation in a Web 3.0 world

There is growing interest among the tech community in moving to a more decentralised internet. This ‘DWeb’ or ‘Web 3.0’ shift would seek to distribute responsibility for data, along with decision-making about how it can be used, away from the concentration of large technology companies that currently serve as ‘gatekeepers’ to the internet. Instead, control would be shared among communities of users, each with their own rules of governance defining the rights and obligations of its members.

While a more decentralised internet could allow users to better protect their information and control their online experiences, it could also make it more difficult to hold users (or the entities behind them) responsible for harmful content and activity, given the absence of centralised servers and authority. 

To be socially responsible, decentralised services must be designed to protect safety – with pathways to deal with harm – as well as to build in privacy and security.

eSafety is working to make sure safety considerations feature in discussions about decentralisation, so those working on decentralised technologies assess the safety risks, inform users about those risks, and take reasonable steps to reduce or eliminate those risks through Safety by Design.

Quantum computing

Quantum computers will harness the unique behaviour of quantum physics and apply it to computing, introducing the potential to perform data operations millions of times faster than other computers. This has the potential to bring tremendous scientific and social benefits to our everyday lives.

The broad adoption of quantum computing is some time off – a fully functioning quantum computer does not yet exist – but government agencies and industry groups are also aware of the potential applications and risks of the technology. A crucial focus area is addressing quantum computing’s ability to break the cryptographic algorithms which secure today’s online activity. We will need new algorithms to make sure people’s data is safe and to meet private and public sector objectives.     

Quantum computing may have the most potential in improving artificial intelligence, which relies on processing huge and complex datasets. It raises the potential for AI to make decisions from data it hasn’t seen before, providing a level of speed and specificity in solving complex problems that is beyond the capability of current computers. This presents opportunities in areas such as fraud detection and could be applied to solutions focused on identifying harmful online content and behaviours.

While some years away, eSafety will need to understand how the use of quantum computing may or may not present online safety issues. 

Opportunities

Encouraging safety tech initiatives

Safety tech providers create solutions to keep users safer online. These include machine learning and artificial intelligence technologies, classifiers, filters, detection monitors and age assurance tools.  

In May 2021, the UK Government published a report exploring the emerging safety tech sector, finding: 

  • there are at least 100 dedicated safety tech businesses in the UK 
  • these businesses employ 2,200 full-time equivalent staff 
  • total annual revenues are estimated at £314 million and could exceed £1 billion by the mid-2020s. 

A similar report in the US by Paladin Capital Group published in January 2022 found that more than $1 billion in external investment has been raised towards safety tech. The report noted the sector’s ability to counter threats from hostile governments or groups that create conflict and public distrust through online harassment, misinformation and deception. 

Australia’s safety tech industry is comparatively small. However, there are opportunities to foster domestic innovation in the sector, further cementing Australia as the global leader in online safety while creating jobs and enabling a modern digital economy to drive our future prosperity.   

We want to help build this capability and make sure Australian tech companies demonstrate best practice by treating Safety by Design as a core consideration.

Challenging business models

There has long been concern about the business model of capturing and monetising personal information to drive massive revenues for some big technology businesses. This is often referred to as ‘surveillance capitalism’.

There is a growing movement in Australia and around the world to safeguard consumers from invasive targeted advertising and particularly to better protect children by limiting the collection of their personal data and making sure online services are safer and ‘age appropriate’.  

Beyond the Facebook-Cambridge Analytica scandal and the US Federal Trade Commission Consent Decree against YouTube for persistent tracking of children across multiple channels without proper parental consent, concerns have more recently been raised about TikTok’s collection and sharing of personal data, including sensitive biometric information.

eSafety is concerned that the capture of this sensitive data can expose children and other vulnerable communities to social, psychological and even physical harm if it is not properly collected, secured and protected. This is an area where privacy and safety harms clearly intersect.

Another major concern is the drive by companies in the ‘attention economy’ to enable conflict or serve up more extreme or harmful material on their platforms to create and maintain user interest. In these scenarios, safety protections could be considered as a way to create ‘friction’ that inhibits virality, even at some cost to user retention.

Engaging graphical user interfaces and features designed to entice young users to ‘stay connected’ are a related concern, particularly around self-regulation, mental health and balanced technology use.

Companies that make it easy to set up multiple accounts or fake or imposter accounts – or fail to tackle spam and bots so they can maintain their active user numbers – can also create more pathways for online abuse.

It is important that regulators remain vigilant about the safety risks these models present.

We need to hone our regulatory tools to make sure the capture of increasingly sensitive data such as biometrics, emotional sentiment and behavioural cues does not lead to even more serious online harms.
