Principles and background

At the heart of Safety by Design are three principles that provide platforms and services with guidance as they incorporate, assess and enhance user safety.

These principles outline realistic, actionable and achievable measures that providers of all sizes and stages of maturity can use to safeguard users from online risks and harms. They are built around a human-centric approach that places the safety and rights of users at its core, while also taking into account their needs and expectations. The principles elevate user safety as the third pillar in the developmental process for all online and digital technologies, sitting alongside privacy and security.

The principles also promote the technology industry’s strengths in innovation, encouraging new thinking and investment that supports product development which prioritises online safety.

The technology industry has a key role to play in ensuring these principles are adopted and their implementation is led from the top.

Safety by Design principles

1. Service provider responsibility

The burden of safety should never fall solely upon the user. Every attempt must be made to ensure that online harms are understood, assessed and addressed in the design and provision of online platforms and services.

This involves assessing the potential risks of online interactions upfront and taking active steps to engineer out potential misuse, reducing people’s exposure to harms.

To help ensure that known and anticipated harms have been evaluated in the design and provision of an online platform or service, the following steps should be taken:

  1. Nominate individuals or teams and make them accountable for user safety policy creation, evaluation, implementation and operations.
  2. Develop community guidelines, terms of service and moderation procedures that are fairly and consistently implemented.
  3. Put in place infrastructure that supports internal and external triaging, clear escalation pathways and reporting on all user safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point they occur.
  4. Ensure there are clear internal protocols for engaging with law enforcement, support services and illegal content hotlines.
  5. Put processes in place to detect, surface, flag and remove illegal and harmful behaviour, contact and content with the aim of preventing harms before they occur.
  6. Prepare documented risk management and impact assessments to assess and remediate any potential online harms that could be enabled or facilitated by the product or service.
  7. Implement social contracts at the point of registration. These outline the duties and responsibilities of the service, user and third parties for the safety of all users.
  8. Balance security by design, privacy by design and user safety considerations when securing the ongoing confidentiality, integrity and availability of personal data and information.
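Several of the steps above (internal and external triaging, clear escalation pathways, and user reporting at the point of occurrence) describe a workflow that can be sketched in code. The following is a hypothetical illustration only; the class names, severity levels and routing rules are assumptions, not part of the principles.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1        # routine policy question or minor violation
    HIGH = 2       # harmful content or behaviour needing review
    ILLEGAL = 3    # must be escalated to external protocols

@dataclass
class Report:
    reporter_id: str
    description: str
    severity: Severity

class TriageQueue:
    """Hypothetical sketch of internal triage with clear escalation pathways."""

    def __init__(self):
        self.internal: list[Report] = []
        self.escalated: list[Report] = []

    def submit(self, report: Report) -> str:
        # Suspected illegal content is routed to the external escalation
        # pathway (law enforcement, support services, hotlines); everything
        # else goes to the internal moderation queue.
        if report.severity is Severity.ILLEGAL:
            self.escalated.append(report)
            return "escalated"
        self.internal.append(report)
        return "queued"
```

The key design point the principle implies is that routing is decided by documented rules at the moment a report is received, rather than left to ad hoc judgement later.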


2. User empowerment and autonomy

The dignity of users is of central importance. Products and services should align with the best interests of users.

This principle speaks to the dignity of users, and the need to design features and functionality that preserve fundamental consumer and human rights. This means understanding that abuse can be intersectional, affecting a user in multiple ways for multiple reasons, and that technology can deepen societal inequalities. To combat this, platforms and services need to engage in meaningful consultation with diverse and at-risk groups, to ensure their features and functions are accessible to all.

To help ensure that features, functionality and an inclusive design approach give users a level of empowerment and autonomy that supports safe online interactions, the following steps should be taken:

  1. Provide technical measures and tools that adequately allow users to manage their own safety, and that are set to the most secure privacy and safety levels by default.
  2. Establish clear protocols and consequences for service violations that serve as meaningful deterrents and reflect the values and expectations of the users.
  3. Use technical features to mitigate risks and harms; these can be flagged to users at relevant points in the service, and can prompt and optimise safer interactions.
  4. Provide built-in support functions and feedback loops that inform users of the status of their reports and the outcomes of any action taken, and that offer an opportunity to appeal.
  5. Evaluate all design and function features to ensure that risk factors for all users – particularly for those with distinct characteristics and capabilities – have been mitigated before products or features are released to the public.
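The first step above, setting the most secure privacy and safety levels by default, is essentially a configuration pattern: every safety-relevant option starts at its most protective value, and users must explicitly opt in to anything less restrictive. A minimal, hypothetical sketch follows; the setting names and their values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SafetySettings:
    """Hypothetical user settings, defaulting to the most protective values."""
    profile_visibility: str = "private"          # not public by default
    direct_messages_from: str = "contacts_only"  # strangers cannot message
    sensitive_content_filter: bool = True        # filtering on by default
    location_sharing: bool = False               # no location data shared

    def relax(self, **overrides):
        """Apply explicit, user-initiated opt-outs only."""
        for key, value in overrides.items():
            if not hasattr(self, key):
                raise ValueError(f"Unknown setting: {key}")
            setattr(self, key, value)

# A new account gets the safest configuration without any action by the user.
settings = SafetySettings()
```

The design choice this models is that safety is the zero-effort path: a user who never opens the settings screen is still protected.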


3. Transparency and accountability

Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that platforms and services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns.

The publication of information relating to how companies are enforcing their own policies and data on the efficacy of safety features or innovations will allow accurate assessment of what is working. If interventions are improving safety outcomes for users or deterring online abuse, these innovations should be shared and more widely adopted.

To enhance user trust, awareness and understanding of the importance of user safety, platforms and services should:

  1. Embed user safety considerations, training and practices into the roles, functions and working practices of all individuals who work with, for, or on behalf of the product or service. 
  2. Ensure that user safety policies, terms and conditions, community guidelines and processes about user safety are accessible, easy to find, regularly updated and easy to understand. Users should be periodically reminded of these policies and proactively notified of changes or updates through targeted in-service communications.
  3. Carry out open engagement with a wide userbase, including experts and key stakeholders, on the development, interpretation and application of safety standards and their effectiveness or appropriateness.
  4. Publish an annual assessment of reported abuses on the service, alongside open publication of meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts, and the extent to which community guidelines and terms of service are upheld through enforcement.
  5. Commit to consistently innovate and invest in safety-enhancing technologies on an ongoing basis and collaborate and share with others safety-enhancing tools, best practices, processes and technologies.
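The annual assessment described in step 4 implies aggregating raw enforcement records into publishable metrics. The sketch below is a hypothetical illustration of that aggregation; the record fields (`category`, `action_taken`) and the output keys are assumptions, not a prescribed reporting format.

```python
from collections import Counter

def enforcement_summary(reports):
    """Aggregate abuse-report records into publishable enforcement metrics.

    Each record is a dict with a ``category`` string and an ``action_taken``
    boolean. Hypothetical sketch; real transparency reports would include
    more dimensions (timeliness, appeals, proactive detection, etc.).
    """
    total = len(reports)
    actioned = sum(1 for r in reports if r["action_taken"])
    by_category = Counter(r["category"] for r in reports)
    return {
        "total_reports": total,
        "actioned": actioned,
        "action_rate": actioned / total if total else 0.0,
        "by_category": dict(by_category),
    }
```

Publishing derived rates alongside raw counts is what lets outside observers assess whether moderation is actually keeping pace with reported abuse.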


Consultation and research

Research and consultation on the Safety by Design principles began in 2018. To position user safety as a fundamental design consideration, we engaged in in-depth consultation with large technology companies and early-stage or start-up companies.

Beyond industry, these principles are also designed to reflect the needs of other participants in the technology ecosystem, so a range of people and organisations was also consulted, including NGOs, advocates, parents and young people. This process of consultation informed the Safety by Design vision for young people.

Vision for young people

Alongside the development of the principles, young people were asked to prepare a vision statement. This lays out what they want in terms of online safety and how they expect the technology industry to help users navigate online environments freely and safely.

Their collective vision statement prioritises the following areas:

  1. Empowering users by giving them greater control of their own safety and experiences online.
  2. Providing clear rules and guidance that are easy to read and highly visible.
  3. Providing users with safety tools and features, namely ways to make reports and to block both people and content.
  4. Imposing sanctions and consequences for violating the rules of the site.
  5. Using scanning and filtering technology to ensure user safety is upheld on the site and users are not exposed to inappropriate or sensitive content.

More information

For more information, please read about Safety by Design initiatives.