Creating transparency frameworks
Transparency and accountability are key to a strong safety approach. They reassure users that you are meeting your stated safety goals and help educate and empower users to act when safety concerns arise.
This section explains the steps you can take to be more open and transparent. It gives advice on reporting measures and provides examples of transparency reporting. It also lists considerations for carrying out consultation.
Transparency reporting
Regularly sharing updates on safety measures builds trust and increases awareness and understanding.
Regulators across the globe also increasingly require online service providers to publicly report on their safety practices and processes. Under Australia’s Online Safety Act, providers can be required to report on how they are meeting the Government’s Basic Online Safety Expectations. This requirement is designed to lift providers’ safety standards and strengthen transparency and accountability. However, you should improve transparency proactively, rather than wait for regulators to mandate it.
Australia’s industry codes and standards also require you to publish certain safety policies and practices, and they put in place annual reporting requirements for some services (either proactively, or at eSafety’s request).
To build safety into a platform holistically, it’s important to assess the full range of potential online harms that may occur and embed protections accordingly.
Transparency reporting helps communicate these efforts to an organisation’s stakeholders.
Transparency reporting on the enforcement of community standards and the moderation of online harms must cover both content and activity. It should include meaningful analysis of metrics such as the following (a simple sketch of how these metrics might be recorded appears after the list):
- abuse data and reports
- the effectiveness of detection, moderation and remediation efforts
- the extent to which community standards and terms of service are upheld and continuously improved through enforcement.
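For illustration only, the metrics above could be captured as a simple per-service record. This is a minimal sketch in Python; the field names, policy areas and figures are hypothetical assumptions, not a required schema.

```python
from dataclasses import dataclass, field

@dataclass
class EnforcementMetrics:
    """Illustrative transparency metrics for one service and one reporting period."""
    service_name: str                      # report each service separately
    reporting_period: str                  # e.g. "2025-Q1"
    user_reports_received: int = 0         # abuse data and reports
    content_actioned: int = 0              # items removed, restricted or labelled
    proactive_detection_rate: float = 0.0  # share of actioned items found before any user report
    appeals_received: int = 0
    appeals_upheld: int = 0                # enforcement decisions reversed on appeal
    actions_by_policy_area: dict[str, int] = field(default_factory=dict)

# Hypothetical example record (all figures invented for illustration).
example = EnforcementMetrics(
    service_name="ExampleChat",
    reporting_period="2025-Q1",
    user_reports_received=12_400,
    content_actioned=9_150,
    proactive_detection_rate=0.72,
    appeals_received=830,
    appeals_upheld=140,
    actions_by_policy_area={"hate_speech": 2_300, "bullying": 1_900, "spam": 4_950},
)
```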
Importantly, transparency information should not be combined across multiple services provided by the same company. Each service may have distinct policies, tools and processes.
To find out more about different types of online safety information you can provide to your users, see Empowering users to stay safe online.
Identifying and defining online harms
Make sure that users can report online harms so you can identify and measure the types of harms on your service.
You can use the World Economic Forum’s Typology of Online Harms as a starting point (a simple sketch of how these categories might be used to tag user reports follows the list):
- Threats to personal and community safety
- Harm to health and wellbeing
- Hate and discrimination
- Violation of dignity
- Invasion of privacy
- Deception and manipulation.
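As a purely illustrative sketch, these categories could be represented as an enumeration that user reports are tagged with at triage, so the volume of each harm type can be counted and reported. The tagging step and the sample reports below are hypothetical.

```python
from enum import Enum
from collections import Counter

class HarmCategory(Enum):
    """High-level harm categories, following the WEF Typology of Online Harms."""
    PERSONAL_AND_COMMUNITY_SAFETY = "Threats to personal and community safety"
    HEALTH_AND_WELLBEING = "Harm to health and wellbeing"
    HATE_AND_DISCRIMINATION = "Hate and discrimination"
    VIOLATION_OF_DIGNITY = "Violation of dignity"
    INVASION_OF_PRIVACY = "Invasion of privacy"
    DECEPTION_AND_MANIPULATION = "Deception and manipulation"

# Hypothetical user reports, each tagged with a category at triage.
incoming_reports = [
    HarmCategory.HATE_AND_DISCRIMINATION,
    HarmCategory.INVASION_OF_PRIVACY,
    HarmCategory.HATE_AND_DISCRIMINATION,
]

# Counting reports per category gives the per-harm figures a transparency
# report can draw on.
report_counts = Counter(report.value for report in incoming_reports)
print(report_counts)
```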
Find more information about online harms and how you can create a safer service in our page, How online platforms can be misused for abuse.
Using data, information and context
Consider how you can include data, information and context in your transparency reports about the actions your organisation takes, including how you deal with particular activities that happen on your platform.
The following prompts can help you include data, information and context when reporting on your organisation’s decisions or activities.
Illegal and harmful content and activity
- How prevalent is it on the platform?
- Which tools, methods and systems are used to detect, moderate and take action on different parts of the service?
- What are the rates of accuracy and effectiveness in identifying, moderating and taking action on different types of content and activity? You can include accuracy against both known and new content and activity, across different regions, languages and cultural contexts, as well as for repeat offending. (A simple sketch of this kind of breakdown appears after this list.)
- What actions, remedies and processes address illegal and harmful content and activity? You can include monitoring and restricting uploads, removals, warnings and suspensions, as well as restrictions placed on accounts, users, groups, items or types of content.
- What methods ensure illegal and harmful content is not promoted, recommended or amplified?
- How do you audit, review and evaluate your systems? What processes and actions are taken to identify and reduce risk and to address harmful content and activity? You can include reports from independent third parties.
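To illustrate the kind of breakdown the accuracy prompt above refers to, here is a minimal sketch that computes the share of correct detection decisions separately for each content type and language. The sample data and the simple accuracy measure are assumptions for illustration, not a prescribed methodology.

```python
from collections import defaultdict

# Hypothetical moderation outcomes: each record notes the content type, the
# language it appeared in, and whether the detection decision was later
# confirmed as correct (for example by human review or an appeal outcome).
moderation_outcomes = [
    {"content_type": "hate_speech", "language": "en", "decision_correct": True},
    {"content_type": "hate_speech", "language": "en", "decision_correct": False},
    {"content_type": "hate_speech", "language": "id", "decision_correct": True},
    {"content_type": "bullying",    "language": "en", "decision_correct": True},
]

def accuracy_by_group(outcomes, key):
    """Return the share of correct detection decisions for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for outcome in outcomes:
        group = outcome[key]
        total[group] += 1
        correct[group] += outcome["decision_correct"]
    return {group: correct[group] / total[group] for group in total}

print(accuracy_by_group(moderation_outcomes, "content_type"))
print(accuracy_by_group(moderation_outcomes, "language"))
```

Reporting these breakdowns separately, rather than a single overall accuracy figure, shows whether detection works equally well across regions, languages and cultural contexts.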
Organisational responses
- What type of incident response data can you use? Include the impact of the incident – both online and in the physical world – and the effectiveness of the response. Also include any changes made to policies, procedures or processes following an incident.
- What is the number and nature of government notices and requests and the subsequent responses from the organisation?
- How do you evaluate the impact of safety measures and analyse user sentiment about safety, including perceptions of safety? Include any specific impacts for at-risk or marginalised groups.
- Do you have data and information on complaints and appeals, and their conclusions or results? Include data on restored content or accounts and any systemic changes that are made as a result of appeal outcomes.
Investment, innovation and third-party engagements
- Do you have data and information on new safety investments, innovations or processes and their impact and effectiveness?
- Do you have data and information on consultations with external experts and users on the impact and effectiveness of actions, remedies and processes (or lack thereof)?
- How is your organisation cooperating with other entities, such as industry players or alliances, governments, global initiatives, advocacy and non-government organisations, law enforcement and civil society organisations?
- What does the leadership and governance of safety look like in the organisation? Include any changes or updates.
Generative artificial intelligence (AI)
- In relation to generative AI, include processes such as model cards, system cards and value alignment cards, which document the capabilities, limitations, intended uses and prohibited uses of a capability (a simple sketch of a model card structure follows).
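As a rough illustration only, a model card can be published as structured data. The fields below reflect common elements of model card formats (capabilities, limitations, intended and prohibited uses), but the exact structure and the example values are assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A minimal, illustrative model card for a generative AI capability."""
    model_name: str
    version: str
    description: str
    intended_uses: list[str] = field(default_factory=list)
    prohibited_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    safety_evaluations: list[str] = field(default_factory=list)  # summaries or links

# Hypothetical example entry (names and details invented for illustration).
card = ModelCard(
    model_name="ExampleTextModel",
    version="1.2",
    description="General-purpose text generation for drafting customer support replies.",
    intended_uses=["Drafting replies that a human reviews before sending"],
    prohibited_uses=[
        "Generating content that sexualises children",
        "Impersonating real people without consent",
    ],
    known_limitations=["Reduced accuracy in low-resource languages"],
    safety_evaluations=["Internal red-team review, early 2025 (hypothetical)"],
)
```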
Educational measures
- What methods are used to educate users on terms of service and how to keep themselves safe online?
- Do you have data and information on the effectiveness of educational measures?
Special consideration for children and young people
Transparency reporting should detail age and user demographics (in line with privacy laws about data collection). It should also include information and statistics on moderation, escalation and enforcement practices as they have been applied to children, particularly in the context of:
- non-consensual sharing of intimate images
- child sexual abuse material
- child sexual exploitation, including sextortion
- cyberbullying
- self-harm and eating disorders
- any other inappropriate content, such as content that has been flagged as sexualising a child and subsequently removed.
It should also include the risks associated with mixing different age groups in the same online spaces.
Find more information about moderation and enforcement tools in our page about how your organisation can deal with illegal and restricted online content.
More ways to enhance transparency efforts
- Use global human rights instruments and standards.
- Consult with external experts and advisors to encourage meaningful discussions.
- Seek independent evaluation using real data and information to uncover details such as mistakes and biases.
- Account for a diverse range of information sources.
- Create a global dialogue with users in a way that considers local nuances and values.
- Ensure data sets provide meaningful insights on any impacts to users.
- Incorporate expert human review.
- Understand and report on a broader cross-section of harms – and the intersection of harms – including all seven categories outlined in Safety by Design. See How online platforms can be misused for abuse.
External oversight and advice
To enhance user trust and awareness of online safety, you should engage openly with a wide user base, including experts and key stakeholders. This consultation should focus on how safety standards are developed, interpreted and applied, and whether they are effective and appropriate.
Consultation with independent experts, external stakeholders and your community of users fosters transparency and accountability. It also allows for expert and community input on best practices in user safety throughout the development, implementation and evaluation of safety standards. This encourages continuous innovation and improvement.
You should also create forums for people outside of your organisation to raise concerns about human rights, ethics or safety, such as through dedicated reporting channels.
Consultation can take place in different ways, including direct engagement, an advisory council or an oversight board. How you implement an independent review and evaluation process depends on your platform or service offering and the areas you wish to build trust and transparency.
Safety advisory councils
Many companies have established safety advisory councils that are made up of external experts. Advisory councils may give advice on topics including:
- drafting new policies and policy updates
- developing products and features to improve safety and moderation
- promoting healthy streaming and work-life balance habits
- protecting the interests of marginalised groups
- identifying emerging trends that could impact user experiences.
Example: Meta Oversight Board
Meta’s Oversight Board is an independent group that makes final decisions about content on Facebook, Instagram and Threads. It provides judgement on individual cases and issues recommendations on the company’s content policies.
The Board is made up of global experts with experience in international human rights, journalism, online safety, digital rights and freedom of expression. It reviews cases where users or Meta challenge content decisions and has the discretion to choose which ones to examine. Once selected, the Board has 90 days to decide.
The Board can also provide advice to Meta regarding its content policies. Meta is not required to adopt the board’s advice, but it has committed to providing a public response to each recommendation.
The Board operates under a public charter and by-laws and is funded by an independent trust. Its decisions are based on Meta’s policies, with consideration of international human rights law.
The Board’s decisions and rationale are published in detail on its website, increasing transparency. For example, the Board overturned Meta’s original decision to leave up content targeting Indigenous Australians. It ruled that the content breached Meta’s hate speech policy. As a result, Meta reversed its decision and removed the posts.
Consulting with stakeholders
Open engagement with a wide range of users, including experts and key stakeholders, can help you develop, interpret and apply safety standards. Transparent consultation practices help to keep organisations accountable to their users.
Consulting with children and young people
Children’s safety and rights should be a priority for all leadership teams, and an important consideration for developing, applying and evaluating a company’s safety policies and standards.
The UN General Comment on the Rights of the Child recognises that the best interests of children need to be considered when making decisions on the provision, regulation, design and management of the digital environment.
Consultation with children and young people is an important step in this process. It allows children and young people to provide input into the products or services they use.
You can also go further by co-creating with children and young people, which involves deep engagement during all stages of the design and decision-making process. This can help to ensure safety policies and standards genuinely reflect children and young people’s lived experiences and needs.
When planning consultations and co-creations with children and young people, it is important to include ethical considerations to avoid any harm through participation.
To help you get started, eSafety consulted with young people to develop a vision for online safety that lays out what young people want and how they expect the technology industry to help users navigate online environments freely and safely. eSafety also partnered with the Young and Resilient Research Centre at Western Sydney University on research that identified best practice in youth engagement and how young people can be supported to be safer online. The six best practice principles from the research are:
- diverse and inclusive
- youth-led and supportive
- action-oriented
- collaborative
- rewarding
- fun and engaging.
Learn about technical tools that can help manage children’s safety online.
Consulting with diverse groups
Consulting with a diverse range of users, including marginalised or minority groups, helps ensure that safety policies and standards meet the varied needs of users.
Leadership teams should prioritise the rights of all users, especially those at greater risk of online harm or who face barriers to protecting themselves from harm or accessing support. Their needs should be considered throughout the development, application and evaluation of safety policies and standards.
Ongoing, co-designed and culturally safe consultation with a wide range of stakeholders is essential. When planning consultations with diverse groups, consider the specific ethics of engagement for each group, to avoid causing any harm. This includes embedding cultural safety considerations within the consultation process, which is key to building respectful and trusting relationships.
‘Cultural safety’ in a co-design process involves various personal components (like people’s mindsets and behaviours) and organisational components such as equal representation. These need to be understood before the co-design process begins.
To help get you started, eSafety has published research on the experiences of young LGBTIQ+ people, young people with disability, and Aboriginal and Torres Strait Islander children.
Find more information about the importance of human-centred design in our page, How online platforms can be misused for abuse.
More advice and resources
Explore more advice and resources around transparency and accountability from a range of organisations, including:
- Access Now: Transparency Reporting Index
- Centre for Inclusive Design: The Benefit of Designing for Everyone
- Open Technology Institute: Transparency Reporting Toolkit: Content Takedown Reporting
- Santa Clara Principles on Transparency and Accountability Measures in Content Moderation
- TTC Labs: How to design with trust, transparency and control for young people
- Organisation for Economic Co-operation and Development (OECD): Transparency reporting on terrorist and violent extremist content online.
More Safety by Design foundations
Continue to Building an online safety culture to explore how leadership, employee training and global collaboration build a strong safety culture.
Last updated: 08/12/2025