Empowering users to stay safe online

Prioritising users’ best interests in the products and services you build can help support and strengthen user safety.

Service design, including safety and reporting features, can influence how users behave. To encourage positive online behaviours, it is important to empower users with clear information and easy-to-use tools. These features should consider accessibility requirements to protect those who are more vulnerable to online harm. 

This section outlines how you can embed users’ rights and safety into your online products and services.

About community guidelines

Community guidelines (or standards) define expectations and common codes of conduct for users of a platform or service. They should communicate what is and is not allowed on the service, both in terms of material and activity. They are also important for explaining why action may be taken when a user violates the guidelines. 

Having community guidelines helps to encourage a positive online culture. Ideally, community guidelines should be easy to understand and regularly updated to reflect changes happening online and offline worldwide.

It is important for users that companies commit to implementing these guidelines fairly and consistently.

Community guidelines are often linked to a platform’s ‘terms of use’, so providers should clearly explain how they are related.  

Tips for creating community guidelines


  1. Define online harms

    A good starting point is the World Economic Forum’s typology of online harms. These harms are categorised as:

    • threats to personal and community safety
    • harm to health and wellbeing
    • hate and discrimination
    • violation of dignity
    • invasion of privacy
    • deception and manipulation.

    Learn more about online harms and how users can be at risk of abuse.


  2. Set community rules

    These should meet the specific regulatory and legal requirements of the business model. They should also reflect the service’s value proposition, as well as the needs and expectations of users and wider society. As outlined in the Australian Government’s Basic Online Safety Expectations regulatory guidance, companies should consider the range of harms and risks that can happen on their platform, including illegal or harmful material and activity.

    This should include: 

    • prohibited user behaviours (such as sharing harmful content)
    • rules on how users treat others (such as bullying or harassment)
    • misuse of the platform itself (such as tools to avoid detection – learn more on our page ‘How platforms can be misused for online abuse’).

    Keep users regularly updated about community rules through nudges, texts, prompts, video content and in-app content.


  3. Create social contracts

    Social contracts help build support among users for a service’s community guidelines. Instead of just ticking a box to accept a service’s terms of use, users are encouraged to understand their role in keeping the community safe, as well as the service’s obligations. 

    By having users agree to treat others in the community with respect and to reject racism, discrimination and hateful language, users share responsibility with the service for creating positive experiences for everyone.


  4. Provide information about how to deal with online abuse

    You can do this by creating a dedicated safety section for users. Explore eSafety’s advice around dealing with cyberbullying, adult cyber abuse and image-based abuse as a starting point.


  5. Set clear outcomes

    Detail the consequences for breaching community standards, for example, content removal, account suspensions or account removal.


  6. Set clear appeals processes

    Allow users to appeal decisions. Provide clear information about the appeals process and expected timeframes for responses.


  7. Consider accessibility

    Terms of use, policies, procedures (including making reports and appeals) and standards of conduct should be clear for all users. This is also a requirement under the Australian Government’s Basic Online Safety Expectations.


  8. Improve features and functionality based on feedback

    Community guidelines should encourage ongoing consultation and feedback from the community, including users and experts.  


  9. Provide details for support services

    This can include direct links to a safety or help centre, privacy policy and other platform policies and features. This may also include information on seeking help from local law enforcement or support services. 

    Consider making this information as localised as possible and appropriate for the needs of different users, including children, young people, parents and carers, schools and educators.

Case study: Using the language of the game

The Thriving in Game Group’s (previously the Fair Play Alliance) framework for creating and maintaining community guidelines for online games and platforms highlights two innovative examples of the gaming community incorporating the language of the game into their community standards:

  • Sea of Thieves for Xbox One refers players to its ‘Pirate Code’.
  • League of Legends by Riot Games showcases a ‘Summoner’s Code’ as part of its onboarding process.

More examples of community guidelines

Many companies have easy-to-find community guidelines, often published on both their website and app.

How to motivate online community moderators

Services can encourage users to become community moderators by rewarding positive behaviours and building user-friendly safety features into product design.

These experiences can help build digital skills and a sense of community, particularly for young people.

  • Reward users who uphold community standards with special permissions, unique roles or access to features such as limited-time camera effects.
  • Create community moderator programs to give additional status to people or organisations that assist with removing inappropriate content.
  • Recognise passionate and positive users by introducing contributor programs that can offer additional benefits or perks, such as exclusive events and access to new features and priority support.
  • Use point systems so users can vote on each other’s comments and posts – having more votes or points suggests a user’s content is of a higher quality.
  • Develop expert and leadership programs by selecting community members to support other users, promoting positive behaviour.

  • Include a pledge during sign-up to promote inclusion and encourage a sense of shared responsibility.
  • Promote activities that spread awareness of community issues such as anti-bullying campaigns or initiatives to celebrate friendship.
  • Enable group chat owners to use moderation strategies and techniques that encourage positive online behaviours.
  • Create a more positive and less competitive online environment by choosing not to display the number of likes on a post.
     

  • Offer upgrades or game experience to reward good behaviour or positive play (such as promoting the user through levels, activating bonus features or adding extra points).
  • Award community badges such as a friendship badge for users with, for example, 20+ friends.
  • Allow community moderators to define their own rewards to show appreciation to members.
  • Let content creators use channel-specific points systems to define special rewards for their audience.
  • Enable server administrators and moderators to reward users with additional permissions.
  • Allow users to reward other users with tokens or items they have purchased or earned or with custom badges (like kindness badges).

Real-time prompts for potential violations

Platforms are now using AI-facilitated and human-moderated prompts in response to content that may be in breach of community standards – before it is even posted. 

Language-based prompts can flag potentially abusive or harmful comments. Content-based prompts can detect and warn users against uploading, saving or sharing harmful, illegal or sensitive information (such as nudity, personal information or other problematic material).

These interventions help users practise kindness and deter them from posting offensive or hurtful content or comments.

For example:

  • Tinder’s ‘Are You Sure?’ prompt is aimed at reducing harassing in-app messages. It warns the user that their message may be offensive, asking them to pause and consider the message before sending it.
  • TikTok and Instagram both incorporate a prompt feature when a potentially harmful comment is detected. The prompts remind users about their community guidelines and ask them to consider editing their comment before it is posted.
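
The sketch below illustrates the general pre-posting prompt pattern described above; it is not any platform’s actual implementation. The `scoreToxicity` function, its placeholder word list and the 0.5 threshold are hypothetical stand-ins for a real classifier or human-review step.

```typescript
// Minimal sketch of a pre-posting prompt: if a classifier flags a draft
// comment, the user is asked to reconsider before it is published.
// scoreToxicity is a hypothetical stand-in for an AI model or human review.

type PromptDecision =
  | { action: "publish" }
  | { action: "prompt"; message: string };

// Hypothetical classifier: returns a score between 0 (benign) and 1 (abusive).
async function scoreToxicity(text: string): Promise<number> {
  const flaggedTerms = ["idiot", "loser"]; // placeholder word list only
  const hits = flaggedTerms.filter((term) => text.toLowerCase().includes(term));
  return Math.min(1, hits.length * 0.6);
}

// Decide whether to publish immediately or show a reconsideration prompt.
async function reviewBeforePosting(draft: string): Promise<PromptDecision> {
  const score = await scoreToxicity(draft);
  if (score >= 0.5) {
    return {
      action: "prompt",
      message:
        "This comment may breach our community guidelines. " +
        "Would you like to edit it before posting?",
    };
  }
  return { action: "publish" };
}

// Example usage
reviewBeforePosting("you are such a loser").then((decision) => console.log(decision));
```

In practice this decision logic would sit server-side alongside moderation tooling, with thresholds tuned against the service’s own policies and tolerance for false positives.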

Default settings

The first time someone uses a platform or signs up to a service, it will operate based on the default settings. Users may not know that they can change these settings to better protect their safety.

Designers and software developers can choose to prioritise safety by setting the highest possible safety level as the default. They can also create different default settings based on the age of the user. 

Setting defaults to the highest possible safety level is a safer option for all users. This means having settings that share profile information or location data switched to private mode or switched off. It would also limit access to device hardware, such as cameras and microphones, and ensure that photos, friend lists and chat functions were only accessible to approved contacts. 
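
A minimal sketch of this approach is below, assuming illustrative setting names and a single under-16 age band; real services would base their defaults on their own risk assessments and regulatory obligations.

```typescript
// Illustrative safety settings, defaulting to the most restrictive values.
interface SafetySettings {
  profileVisibility: "private" | "contacts" | "public";
  locationSharing: boolean;
  cameraAccess: boolean;
  microphoneAccess: boolean;
  directMessagesFrom: "no_one" | "approved_contacts" | "everyone";
}

// Highest-safety baseline applied to every new account.
const SAFEST_DEFAULTS: SafetySettings = {
  profileVisibility: "private",
  locationSharing: false,
  cameraAccess: false,
  microphoneAccess: false,
  directMessagesFrom: "approved_contacts",
};

// Hypothetical age band: a real service would set its bands and values based
// on its own risk assessment and regulatory obligations.
function defaultsForAge(age: number): SafetySettings {
  if (age < 16) {
    // Children and young people get the most restrictive configuration.
    return { ...SAFEST_DEFAULTS, directMessagesFrom: "no_one" };
  }
  // Adults still start from the safest baseline and can adjust settings later.
  return { ...SAFEST_DEFAULTS };
}

console.log(defaultsForAge(13)); // most restrictive defaults
console.log(defaultsForAge(30)); // safest baseline, adjustable by the user
```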

When a user changes their settings 

Explanations and prompts should be provided in plain language to all users when they attempt to change their settings, so that the implications of any changes being made are fully understood. If software updates are introduced, default settings should be reverted to the highest possible safety levels. 

Settings for children 

Different services, functions and features will pose different safety risks to children of varied ages and capabilities. Services should consider and identify potential risks early and take appropriate steps to make sure children can use the functions and features of a service safely. 

If services are likely to be accessed by children, the privacy and safety settings should default to the most restrictive level. This is a regulatory requirement under the Australian Government’s Basic Online Safety Expectations. Options should also be provided to allow children and young people to change settings for a single post or interaction, rather than be permanent. Settings should then revert back to the safest option by default. 
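
One way to support a single-interaction change that reverts automatically is sketched below. The setting and field names are hypothetical; the key point is that the one-off choice is never written back to the stored default.

```typescript
// Sketch of a per-interaction override: a young person can relax a setting for
// a single post, after which the account stays on the safest default because
// the one-off choice is never saved back to their profile.

type Audience = "private" | "approved_contacts";

const DEFAULT_AUDIENCE: Audience = "private"; // safest default for children

interface PostRequest {
  content: string;
  audienceForThisPost?: Audience; // optional one-off override
}

function publishPost(request: PostRequest): { audienceUsed: Audience } {
  // Use the override for this post only; the stored default is never changed,
  // so later posts automatically revert to the safest option.
  const audienceUsed = request.audienceForThisPost ?? DEFAULT_AUDIENCE;
  console.log(`Publishing "${request.content}" to: ${audienceUsed}`);
  return { audienceUsed };
}

publishPost({ content: "School fete photos", audienceForThisPost: "approved_contacts" });
publishPost({ content: "Next post" }); // back to the private default
```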

Providers can also consider the Institute of Electrical and Electronics Engineers (IEEE) Standard for an Age Appropriate Digital Services Empowerment Framework Based on the 5Rights Principles for Children.

There are also private-by-default requirements in Australia’s social media services industry code, and the relevant electronic services industry standard for children.

Social media changes are coming

From 10 December 2025, certain social media platforms won’t be allowed to let Australian children under 16 create or keep an account. Find out more at eSafety’s social media age restrictions hub.

Good practice for default settings 

A range of technical tools can be built into platforms or services to help users manage their online experience. This includes tools for parents and carers to manage their child’s online safety.

Technical tools that can help users manage their online experience and facilitate age-appropriate access to content include:

  • age assurance, which could include age verification
  • parental companion apps and/or controls (including to manage screen time)
  • filtering options for different content types
  • safety alerts or sensitive content warnings, including blurring sensitive or adult content by default for all users
  • de-listing or de-prioritising content or accounts. This can be done to varying degrees for different user groups, for example, applied to children’s accounts but not adult accounts
  • hiding different types of content
  • quarantining content for later review
  • opt-in and opt-out measures regarding the types of content that end-users are recommended or can receive
  • muting keywords (see the sketch after this list).
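
As an example of the last item above, the sketch below hides posts containing a user’s muted keywords from their feed without removing them from the service. The matching is deliberately simple and purely illustrative.

```typescript
// Minimal sketch of keyword muting: posts containing a user's muted keywords
// are hidden from that user's feed rather than removed from the service.

interface Post {
  id: string;
  text: string;
}

function isMuted(post: Post, mutedKeywords: string[]): boolean {
  const text = post.text.toLowerCase();
  // Simple substring match; a production filter would also handle word
  // boundaries, plurals, misspellings and obfuscation (for example "sp0iler").
  return mutedKeywords.some((keyword) => text.includes(keyword.toLowerCase()));
}

function filterFeed(posts: Post[], mutedKeywords: string[]): Post[] {
  return posts.filter((post) => !isMuted(post, mutedKeywords));
}

const feed: Post[] = [
  { id: "1", text: "Great match last night!" },
  { id: "2", text: "Huge spoiler for the finale..." },
];

console.log(filterFeed(feed, ["spoiler"])); // only post 1 remains
```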

Technical tools to allow users to manage who they are interacting with and how include:

  • blocking
  • muting
  • account or user-based filters
  • quarantining content from certain contacts
  • hiding content or accounts from specific accounts
  • deleting or removing accounts or users from contact lists
  • offline mode
  • ability to use different avatars or pseudonyms
  • ability to restrict who can direct message users.

Technical tools that allow users to manage their environment include giving users the ability to:

  • turn off specific features or functionality
  • customise privacy settings
  • create private teams, communities or groups
  • limit content to user-defined groups
  • choose which content to limit on their feed.

Services should also make sure that landing pages, or the first point of contact with a service, do not contain sensitive or adult content and that this material is placed behind an age gate.

Technical tools can also allow users to:

  • submit an age rating for content, as an uploader or a viewer
  • decide whether to view content based on its labelling.

Services can decide whether the uploader, the viewer or both can submit a rating for content shared on the platform. Platforms can customise how content ratings appear on their services, and whether age-rating user-generated content is encouraged at the point of upload.
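
The sketch below shows one possible way to combine uploader and viewer ratings, assuming PEGI-style age bands and a ‘most restrictive rating wins’ rule; both are illustrative choices rather than a required approach.

```typescript
// Sketch of user-submitted age ratings: the uploader and viewers can all rate
// a piece of content, and the most restrictive rating is the one that applies.

type AgeRating = 3 | 7 | 12 | 16 | 18; // illustrative PEGI-style bands

interface ContentRatings {
  uploaderRating?: AgeRating;
  viewerRatings: AgeRating[];
}

// Take the most restrictive (highest) submitted rating, falling back to a
// conservative default if nobody has rated the content yet.
function effectiveRating(ratings: ContentRatings, fallback: AgeRating = 18): AgeRating {
  const all = [
    ...(ratings.uploaderRating !== undefined ? [ratings.uploaderRating] : []),
    ...ratings.viewerRatings,
  ];
  return all.length > 0 ? (Math.max(...all) as AgeRating) : fallback;
}

function canView(userAge: number, ratings: ContentRatings): boolean {
  return userAge >= effectiveRating(ratings);
}

const ratings: ContentRatings = { uploaderRating: 7, viewerRatings: [12] };
console.log(effectiveRating(ratings)); // 12 – viewers rated it as older
console.log(canView(10, ratings));     // false
```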

While users should have the freedom to express themselves and participate fully in the opportunities that the online world offers, they should be required to follow positive social norms promoted within the environment. Multiple technical tools and features have been developed that can filter content – not necessarily for removal (except when the content is illegal), but to place it behind age-gates or warning pages so that users can access the content by exercising informed consent. 

Encouraging users to rate content, and equipping them with technical tools to help protect themselves, enhances both the user experience and the service.

Reporting mechanisms

Easy-to-use and accessible reporting mechanisms are an essential way of keeping users safe. By providing good reporting tools, service providers can take preventative steps to ensure their service is less likely to facilitate or encourage illegal and inappropriate behaviours.

Service providers have a responsibility to put in place infrastructure that supports internal and external triaging and clear escalation paths. 

They should provide reporting options covering all user safety concerns, along with readily accessible mechanisms for users to report concerns and violations at the point they occur. This should include prioritising reports for escalation and rapid response, such as reports about material or activity that presents a serious and immediate threat to life, health and safety. This is a requirement under section 13 of the Basic Online Safety Expectations.

By continually validating, refining and improving reporting and complaint functions, platforms and services can ensure reporting tools meet the evolving needs of their users.

Read more about enforcement mechanisms, including examples of detection tools, and how you can help protect your users from harmful content.

Advice for designing reporting functions

New and existing reporting measures should be tested across a diverse range of user groups, including ‘edge cases’, so you can ensure an inclusive online platform for everyone. 

Users can be put off from reporting when reporting functions: 

  • are difficult to find
  • are not built-in, intuitive to use, or available at the point of need
  • require personal information to be provided
  • require creating and logging into an account
  • use pre-determined reporting fields that are non-specific or ambiguous
  • do not capture their reporting needs or concerns
  • lead to social reprisal (for example, if their identity can be inferred through contextual information)
  • require re-confronting the material being reported (such as by requiring screenshots or reviews of the report).

Use the following advice as a starting point when designing or updating the reporting tools on your platform:

Users should be able to report accounts, content, activities and features. Having extensive options for what can be reported can improve both the insights into what online harms happen on the platform and how to respond to an incident. 

For example, include the option to report content (including messages) and contact that makes the user ‘feel uncomfortable’, whether or not it breaches specific policies.
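
The sketch below shows one possible shape for such a report: broad target options, an open-ended reason, free text, and no mandatory reporter identity. All field and value names are hypothetical.

```typescript
// Illustrative report structure: broad options for what is being reported,
// an open-ended reason, free text, and no mandatory reporter identity.

type ReportTarget = "account" | "content" | "activity" | "feature";

type ReportReason =
  | "harassment_or_bullying"
  | "hate_or_discrimination"
  | "image_based_abuse"
  | "illegal_content"
  | "makes_me_uncomfortable" // captures concerns not covered by specific policies
  | "other";

interface UserReport {
  target: ReportTarget;
  reason: ReportReason;
  description: string;      // free text so users can explain in their own words
  contentUrl?: string;      // optional – the reporter may not have direct access
  reporterContact?: string; // optional – reporting must not require an account
  urgent: boolean;          // flags threats to life, health or safety for escalation
  submittedAt: string;
}

function buildReport(details: Omit<UserReport, "submittedAt">): UserReport {
  return { ...details, submittedAt: new Date().toISOString() };
}

const report = buildReport({
  target: "content",
  reason: "makes_me_uncomfortable",
  description: "A group chat keeps sharing edited photos of me.",
  urgent: false,
});
console.log(report);
```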

Streamline reporting advice and functions to make it easier for the users to report. Think about using videos, images or screenshots of the reporting process to help users of different literacy levels to understand how to make a report. 

Children, young people and at-risk groups need to be able to report in a way that is simple and quick for them. Assess the need for mandatory personally identifiable information fields in content reporting forms.

Reporting functions should be highly visible, easy to locate and simple to navigate. These functions should have consistent navigation and tools across all devices and access channels (such as in-app, in-chat, in-video or on websites), so that the reporting process is seamless for all users. 

Use in-platform tools and reminders to prompt users that they can report, and include direct pathways to reporting. Many of Australia’s industry codes and standards include specifications on the location of reporting mechanisms.

Provide users with opportunities to communicate in free text, rather than limiting reporting to pre-determined response options alone.

Open-ended reporting can allow for more authentic reports, which both benefits the user and can provide feedback to inform the design of reporting functions. 

It can also enable users to report about material they know about but may not have direct access to (for example, an intimate image they know has been shared on a service, but they don’t know where and who has access to it).

Allow people to report without requiring them to create or log into an account. This is important where material or activity may be impacting a person who is not a user of the service – for example, cyberbullying material that is being shared about a person who does not have an account on that platform.

This is a requirement under section 13 of the Basic Online Safety Expectations.

Users may be reluctant to report if they fear judgement or retaliation within their social circle. Even when reporting is ‘anonymous’, others may still be able to guess who has made the report through contextual information, for example, the suspension of someone’s account.

Give all users access to clear and simple appeal processes and include opportunities to provide context in their submissions. 

It is essential to create dedicated internal channels for reviewing reports and considering appeals. Implement these from the very start and update regularly.

Ensure users receive updates and information about the reports they have made. Feedback loops should be continually monitored and evaluated so they are fit for purpose. 

Include an expected timeframe for when users can receive a response from the service about their complaint.

Include contact information for law enforcement, hotlines, regulatory bodies or other relevant authorities, as well as referrals to mental health and community services, for users before and after making a report. Wherever possible, these support services should also be localised.

Public information on safety policies, processes and tools

Online safety centres let services openly communicate their key safety and privacy policies, processes and features. Some larger companies with online safety centres include Meta, Google, and TikTok.

As a design feature, online safety centres provide important guidance for users seeking out advice and resources when they interact with each other online. This is why online safety centres should always be prominent and easy to find on your service. 

These centres empower users to manage their own online safety. They should also address the needs of different users, such as children, parents and at-risk groups and provide links to relevant support and law enforcement services.

To remain effective, safety centres must be regularly updated to ensure they address current online harms and abuse tactics. 

Giving users confidence 

Safety centres give users greater confidence in a platform’s ability to keep them safe. They also provide an opportunity to promote key features and policies, such as:

  • security measures like two-factor authentication or parental PINs
  • privacy features and settings
  • safety features and settings
  • community guidelines or terms of service
  • additional protection programs for at-risk users. 

Explaining how these features work and who is notified when certain actions (like blocking or unfriending) are taken helps to build transparency and trust. This also emphasises safety as a shared responsibility among users, moderators and industry experts.

Demonstrating leadership in online safety

Safety centres can also promote a platform’s leadership in online safety, whether through individual activities or as part of a global alliance. This might involve:

  • publishing reports showing how an organisation or alliance has reduced online harm or dealt with a complex incident
  • providing links to industry organisations or initiatives that promote online safety.

Read more about the impact of global alliances and how you can join one that is most relevant for your organisation.

Addressing the needs of different users 

Publishing accessible and interactive safety content allows platforms to communicate with different audiences – including children, parents and carers and at-risk groups – to make safety a cornerstone of their user experience. Find more information in the Improving accessibility section on this page.

Provide community guidelines that explicitly cover your service’s expectations. The Australian Government’s Basic Online Safety Expectations also outline expectations relating to policies and terms of service.

  • Provide step-by-step advice on how to report harmful and illegal content, conduct and contact on your service.
  • Use videos, images or screenshots to help users of different literacy levels understand these processes.
  • Ensure your reporting and complaints process states the expected timeframe for a response from the service.
  • Provide timely feedback to users about the status or outcome of their report.

The Australian Government’s Basic Online Safety Expectations outlines expectations relating to reports and complaints, and there are a range of compliance measures around user reporting in the industry codes and standards. 

Read more about the importance of using human-centred design as part of your reporting process, as well as practical tips you can implement on your platform.

  • Outline the process to appeal your service’s decision in relation to:
    • complaints
    • blocking
    • removal of content
    • access restriction (permanent, indefinite or temporary)
    • account deletion
    • decisions not to remove content.
  • Use videos, images or screenshots to help users of different literacy levels understand these processes.
  • Ensure your appeals process states the expected timeframe for a response from the service.
  • Provide timely feedback to users about the status or outcome of their appeal.

Standard safety settings and features

Outline the safety features that are set as default when signing up to a service or using it for the first time, including whether these are set according to the age of the user.

Explain if automated systems, artificial intelligence, natural language processing or machine learning are used to flag or respond to safety issues. Also specify the harms those tools apply to and the extent to which those tools are deployed on the service (for example, only in public parts of the service).

Opting in and out of safety features and tools

Outline any opt-in safety tools or features, including:

  • what they are designed to protect against
  • how to turn them on and off.
     

Use videos, images or screenshots to help users of different literacy levels understand these processes.

Provide links to external support services and safety partners in a dedicated section of the platform, or in a safety centre. To make this information effective: 

  • break support services down by regional location
  • ensure links to relevant support services are provided for children and young people and other at-risk groups
  • provide contact details to assist local law enforcement agencies with their enquiries, especially at the point of reporting.

Advice and support should also be provided:

  • throughout the in-app experience
  • at the point of need, such as when search terms or activity associated with harm is identified while using the app
  • at the point of reporting.

Improving accessibility

Accessible design means making content and resources more inclusive for a broad range of users, particularly those at greater risk of online harm due to socio-economic factors or belonging to at-risk groups. Platforms should prioritise products, programs and resources to support, protect and empower these users.

At-risk groups include:

  • children and young people
  • older people
  • women
  • people with disability
  • Indigenous peoples (for example, Aboriginal and Torres Strait Islander peoples)
  • people from culturally and linguistically diverse communities
  • LGBTIQ+ people. 

Publishing accessible and interactive safety content helps platforms to engage diverse audiences – including children, young people, parents, carers and at-risk groups – to make safety central to their user experience.

Tips for inclusive safety content:

  • Use inclusive and plain language to meet the needs of different at-risk groups – such as Indigenous peoples, LGBTIQ+ people and people from culturally and linguistically diverse communities.
  • Label everything clearly and use visuals to assist with understanding.
  • Integrate accessibility tools and assistive technologies (for example, captions on videos and alt text on images).
  • For parents and carers, provide information and tips on safety features and how to talk with their children and young people about online safety. Design this content for shared learning and encourage open conversations.
  • For children and young people, make information appealing and developmentally appropriate (for example, using cartoons, videos or gamified content).

Accessible design improves equal access to information for users with a broad range of capabilities and literacy levels. It also takes account of the higher level of online abuse faced by people from at-risk groups and ensures they have access to inclusive online safety information and reporting functions. 

You should build in accessibility from the start of your user experience and user interface design, informed by diverse user research and testing. It is less effective to try to add accessibility features as an afterthought. You should also get to know your users and respond to their preferences and needs through continual feedback.

There are a variety of tools and resources that can help platforms and services integrate accessibility into their design from the outset.

Important accessibility features to consider:

  • alt text, captions and audio descriptions
  • appropriate colour contrasts between the foreground and background elements on a page (see the contrast check sketch after this list)
  • compatibility with assistive technologies such as text-to-speech screen reading software
  • simple, inclusive language.
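
As an example of checking colour contrast, the sketch below implements the WCAG relative-luminance and contrast-ratio formulas and compares the result against the 4.5:1 minimum for normal-size text at level AA (3:1 for large text).

```typescript
// Contrast check based on the WCAG relative-luminance and contrast-ratio
// formulas. Level AA requires a ratio of at least 4.5:1 for normal-size text
// and 3:1 for large text.

type RGB = [number, number, number]; // 0–255 per channel

function relativeLuminance([r, g, b]: RGB): number {
  const [rs, gs, bs] = [r, g, b].map((channel) => {
    const c = channel / 255;
    // Linearise the sRGB value as specified by WCAG.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * rs + 0.7152 * gs + 0.0722 * bs;
}

function contrastRatio(foreground: RGB, background: RGB): number {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

function meetsAA(foreground: RGB, background: RGB, largeText = false): boolean {
  return contrastRatio(foreground, background) >= (largeText ? 3 : 4.5);
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // "21.0" – black on white
console.log(meetsAA([153, 153, 153], [255, 255, 255]));            // false – mid-grey on white is about 2.8:1
```

This is the same calculation used by the contrast-checking tools listed below.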

Standards to follow, such as WCAG (Web Content Accessibility Guidelines)

The World Wide Web Consortium (W3C) develops international standards for the web, including guidelines for accessibility. Its Web Content Accessibility Guidelines (WCAG) cover a wide range of recommendations for making web content more accessible across all devices.

Following the guidelines will make content more accessible for people with disability, including people with:

  • vision, hearing, mobility or speech impairments
  • photosensitivity
  • intellectual and developmental disabilities. 

The guidelines will not address every user need for people with these disabilities, but they provide a single shared standard for content authors, developers and designers around the world. Following the guidelines will also make your content more accessible to users in general.

Helpful tools and resources include:

  • AXE (tool) – a digital accessibility testing toolkit that uses WCAG as a measure; includes monitoring, auditing and development tools.
  • AChecker (tool) – an evaluation tool for web accessibility that tests against WCAG, Section 508, the Stanca Act and other accessibility standards.
  • Coblis – Color Blindness Simulator (tool) – simulates how an image looks to viewers with different types of colour blindness.
  • Contrast Checker (tool) – compares the foreground and background colours of a website and assesses them for accessibility against WCAG.
  • The World Wide Web Consortium’s Web Content Accessibility Guidelines (WCAG) – a wide range of recommendations for making web content more accessible across all devices.
  • IBM Text to Speech (API) – converts written text to natural-sounding speech.
  • Natural Language (API) – uses machine learning to analyse text and extract key insights; can be used alongside a speech-to-text API to analyse audio.
  • One Click Accessibility (API) – adds accessibility toggles and features to WordPress sites with minimal setup.

Case study: Electronic Arts’ Positive Play Charter

To build a safer and healthier online community, Electronic Arts developed the Positive Play Charter, an updated set of community guidelines presented in an easily digestible format. The charter distils a range of complicated rules into clear dos and don’ts, grouped into four broad categories:

  • ‘Treat others as they would like to be treated’
  • ‘Keep Things Fair’
  • ‘Share Clean Content’
  • ‘Follow Local Laws’.

Case study: Pan European Game Information (PEGI)

Pan European Game Information uses icons to provide both content descriptions and age classifications for games.

  • Age labels display the minimum age suitable for a piece of content. The labels are also graded into the global visual language of traffic light colours: 3 and 7 are green, 12 and 16 are orange and 18 is red. The label also includes the URL of the Pan European Game Information website, which gives more information on how age classifications are decided.
  • Content descriptor icons indicate if the game contains violence, bad language, fear, gambling, sex, drugs, discrimination, and in-game purchases.

Case study: Twitch’s Core UI Ultraviolet

Twitch created Core UI Ultraviolet, a more accessible, modern and unified design system. This addressed both visual and structural accessibility issues by:

  • improving the information architecture for easier navigation for users, including those who use screen readers, allowing more intuitive access to safety features
  • implementing WCAG AA contrast ratios in both light and dark themes to ensure readability for people with vision impairments (you can use the WebAIM contrast checker to test compliance)
  • updating font size, element spacing and other variables for improved clarity.

Twitch reinforced its commitment to accessibility by releasing an accessibility statement, publicly promising to reduce accessibility errors, provide accessibility documentation, establish a group for employees with disability and invite feedback from creators with disability.

Learn more about how transparency and consultation can support accessibility.
