Embedding safety in internal policies and procedures
Online harms should be identified and addressed at every stage of service design and delivery, including through regular reviews and updates.
This section explains how safety can be embedded in internal policies, processes and procedures that help to keep users safe.
Human rights standards and children’s rights
Respect for human rights is a central measure of corporate social responsibility. As you develop your services and draft your policies, you should consider:
- The Universal Declaration of Human Rights
- The International Covenant on Economic, Social and Cultural Rights
- UN conventions and resolutions, such as the Convention on the Rights of the Child, cybercrime instruments and the General Comment on children’s rights in relation to the digital environment.
A range of guidelines have also been developed to help companies make the principles relevant to their operations:
- UN Guiding Principles on Business and Human Rights
- Children’s Rights and Business Principles
- Child Online Safety Universal Declaration
- Online Gaming and Children’s Rights: Recommendations for the Online Gaming Industry
- ITU Guidelines for industry on Child Online Protection.
Children’s rights
Integrating children’s rights into core strategies and operations can enhance your reputation, improve risk management and strengthen existing corporate initiatives. This means both preventing harm and actively safeguarding children’s interests.
To do this, you must identify risks and opportunities that impact children and young people on online platforms and services and ensure these are addressed in internal policies and procedures.
A key resource to help classify risks to children and young people is the 4Cs classification:
- Content – the child engages with or is exposed to potentially harmful content.
- Contact – the child experiences or is targeted by potentially harmful contact.
- Conduct – the child witnesses, participates in or is a victim of potentially harmful conduct.
- Contract – the child is party to or exploited by potentially harmful contract or commercial interests.
Designing a safety review process
Embedding safety considerations in product design and development helps to prevent and reduce online harms. Integrate formal safety reviews – including consultation and testing – into the design process from the beginning, and throughout the product, platform or feature lifecycle. Reviews should include employees from teams across the organisation who are responsible for online safety, including the executive team.
If you are developing a generative artificial intelligence (AI) product, eSafety has tailored guidance in our Generative AI position statement. Minimum compliance measures may also apply under industry codes or standards.
Companies can embed scenario testing of a broad range of known, unusual and edge-case behaviours. This testing should also consider national, international and industry standards. The types of online harms, and the techniques and tactics abusers use, will play an important role in scenario testing and are covered extensively in our page How online platforms can be misused for abuse.
Children’s safety and rights should be a core focus in any safety reviews and risk and impact assessments. As children are still developing, they require special care and support. Reviews should balance protection with respect for their growing autonomy and participation. See the Safety by Design youth vision statement for more detail.
Routine training and education on formal safety reviews for relevant employees, including contractors, are essential to mitigate or prevent online harms before they occur.
Safety review processes should be ongoing and updated regularly to reflect constantly changing risks and technologies.
Some factors to consider when developing a safety review process include:
- scenario testing for known risks, harms and abuse techniques
- simulating adversarial conditions to improve defences
- new forms or techniques of abuse
- assessment of false positives or negatives (in moderation processes or reported abuse)
- user behaviours, needs and impacts for at-risk groups
- health and wellbeing impacts on employees, community moderators and users
- effectiveness of support services and links to resources
- escalation pathways and feedback loops to handle user safety concerns.
Suggested times to complete safety reviews are during:
- business case development
- planning and analysis
- development
- testing and refinement
- deployment/launch
- post-launch review of all features and functions
- platform maintenance (updates or refreshes).
Some industry codes and standards require services to undertake a risk assessment before making a material change that may affect the risk of class 1 material on the service.
Good practice for safety reviews
Good practice for safety reviews follows the testing lifecycle:
Scenario testing
Good safety review processes should include testing:
- specific edge cases, such as unusual user behaviour or incidents that require special handling
- across all channels, surfaces, features and tools
- with diverse teams, incorporating members from different genders, backgrounds, experiences and perspectives
- within specific regions and jurisdictions.
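Where scenario testing is automated, known abuse scenarios and edge cases can be captured as repeatable test cases so they run on every release. The sketch below is illustrative only: `check_message` is a placeholder for your own moderation entry point, and the keyword rule and scenarios are invented so the example runs, not a recommended test suite.

```python
# Minimal sketch of automated scenario testing with pytest. `check_message`
# is a stand-in for your own moderation entry point; the keyword rule and
# scenarios exist only so the example runs.
import pytest

HIGH_RISK_TERMS = {"harass", "dox", "threat"}  # placeholder term library


def check_message(channel: str, text: str) -> bool:
    """Stand-in moderation check: flags a message containing a high-risk term."""
    return any(term in text.lower() for term in HIGH_RISK_TERMS)


@pytest.mark.parametrize(
    "channel, text, expected_flag",
    [
        ("direct_message", "hello, long time no see", False),
        ("public_comment", "I will harass you until you leave", True),
        ("live_chat", "", False),               # edge case: empty content
        ("group_post", "time to dox them", True),
    ],
)
def test_known_abuse_scenarios(channel, text, expected_flag):
    assert check_message(channel, text) is expected_flag
```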
Analysis
Good safety review processes should include analysis of:
- patterns of behaviour and social network effects, focusing on abusive actors
- online signals such as metadata and traffic signals (for example notifications, reaction buttons, read receipts and active status)
- behavioural signals, including patterns of interaction such as search activity, group membership and activity, violation indicators (such as reports and connection activity or friend requests), content creation and sharing, and profiles and accounts
- behavioural signals for at-risk groups (such as engagement patterns, content preferences, use of reporting and content control features)
- interactions between/across different parts of the product, platform or service.
Environmental scanning
The environmental context your platform or service operates in constantly changes, so safety review processes require:
- continuous assessment of new forms or techniques of abuse occurring on the platform
- researching and analysing new forms or techniques of abuse on other platforms
- cross-industry information sharing, including on good practices and new safety innovations
- understanding the support requirements and resources for victims/survivors and at-risk and marginalised groups.
Assessment
Good safety review processes should assess the effectiveness, accuracy and impact of:
- automated and human moderation systems (such as removing, demoting, delaying and/or labelling of content, and suspension or banning of accounts)
- user safety controls and tools (such as muting and blocking)
- safety and civility campaigns (such as encouraging respectful behaviour and educating users on community guidelines)
- prevention interventions (such as behavioural nudges, warning labels, parental controls)
- reporting and appeal systems and processes
- disruption techniques (such as geoblocking, content throttling)
- detection tools
- automated responses
- feedback systems and processes
- the reduction of harms or risks, with a focus on at-risk or marginalised groups
- any user confusion or misunderstanding relating to how a product or feature functions
- the data selected for building AI models.
You should also assess health and wellbeing impacts and provide the necessary support for all stakeholders involved in managing harms and risks, including:
- internal and external employees involved with addressing harms and risks
- community managers, moderators and administrators.
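One way to assess the accuracy of the automated moderation and detection tools listed above is to compare their decisions against a human-reviewed sample and track false positives and false negatives over time. The following sketch is a minimal illustration of that comparison; the field names and sample data are assumptions, not a required format.

```python
# Illustrative sketch: measuring false positives and false negatives for
# automated moderation against a human-reviewed sample of decisions.
from dataclasses import dataclass


@dataclass
class Decision:
    auto_flagged: bool    # what the automated system decided
    human_flagged: bool   # what a human reviewer decided (treated as ground truth)


def accuracy_report(decisions: list[Decision]) -> dict[str, float]:
    tp = sum(d.auto_flagged and d.human_flagged for d in decisions)
    fp = sum(d.auto_flagged and not d.human_flagged for d in decisions)
    fn = sum(not d.auto_flagged and d.human_flagged for d in decisions)
    tn = sum(not d.auto_flagged and not d.human_flagged for d in decisions)
    return {
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
        "recall": tp / (tp + fn) if (tp + fn) else 0.0,
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }


# Example with a small reviewed sample (invented data)
sample = [
    Decision(auto_flagged=True, human_flagged=True),
    Decision(auto_flagged=True, human_flagged=False),
    Decision(auto_flagged=False, human_flagged=True),
    Decision(auto_flagged=False, human_flagged=False),
]
print(accuracy_report(sample))
```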
Testing
Prior to launch, your product, platform or service should be tested again. This may take the form of ‘beta testing’, where a select group of external users conduct real-world testing, or ‘dogfooding’, where employees use the product or service before it’s released to the public.
Testing your product should help you identify and mitigate pitfalls and emergent risks that you may not have considered in your safety scenario testing, analysis or assessment.
External expertise
To ensure objectivity and fresh perspectives, safety measures, such as policies and tools, should be reviewed by independent third-party auditors. Seeking out innovations and research can also improve safety review processes.
Safety standards and frameworks
Your safety review should be informed by national and international:
- regulatory frameworks, including requirements under the Online Safety Act
- technical standards, such as those from the International Organization for Standardization.
Evaluating user risk
To help you identify and manage risk on your platforms, it is important to analyse user attributes, behaviour, content interactions and the environmental context.
Further information can be found in our Regulatory Guidance.
User risk analysis
Establish a baseline risk profile for users, especially those with connected accounts across multiple platforms, by considering:
- basic profile information: username, display name and associated URLs
- demographic attributes: gender, age, religion, ethnicity, ability, sexual orientation and political affiliation.
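A baseline risk profile can be captured as a simple structured record so that later signals can be compared against it. The sketch below is a minimal example; the field names are assumptions rather than a required schema, and demographic attributes should only be recorded where lawful, necessary and proportionate.

```python
# Illustrative sketch: a baseline risk profile record for a user account.
# Field names are assumptions, not a required schema.
from dataclasses import dataclass, field


@dataclass
class BaselineRiskProfile:
    username: str
    display_name: str
    associated_urls: list[str] = field(default_factory=list)
    linked_accounts: list[str] = field(default_factory=list)  # connected accounts on other platforms
    age_band: str | None = None        # recorded only where lawful and necessary
    baseline_risk_score: float = 0.0   # starting point for later comparisons


profile = BaselineRiskProfile(
    username="example_user",
    display_name="Example User",
    associated_urls=["https://example.com/profile"],
)
print(profile.baseline_risk_score)
```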
Content interaction risk thresholds
Evaluate the types of content a user interacts with and assess them for potential risk based on context, frequency and sentiment. This includes:
- images
- text
- audio
- video.
Behaviour risk thresholds
Use detector presets that are linked to a library of keywords and phrases and monitored in real time. These flag high-risk behaviours in user interactions, including:
- sexual harassment/aggression
- profanity
- nudity
- doxing
- sharing prohibited content
- ideological extremism
- threats of violence
- impersonation
- insults
- harassment
- self-harm
- defamation.
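As a minimal illustration of how a detector preset linked to a keyword library might work, the sketch below maps a few of the behaviour categories above to placeholder terms and returns the categories a message matches. Production systems combine keyword matching with context, frequency and sentiment rather than relying on terms alone.

```python
# Minimal sketch of a keyword-based detector preset. The terms are
# placeholders; production systems combine keyword matching with context,
# frequency and sentiment rather than matching terms alone.
DETECTOR_PRESETS = {
    "harassment": ["nobody wants you", "get out of here"],
    "threats_of_violence": ["i will hurt", "watch your back"],
    "doxing": ["home address is", "posting your details"],
}


def flag_behaviours(message: str) -> list[str]:
    """Return the behaviour categories whose terms appear in the message."""
    text = message.lower()
    return [
        category
        for category, terms in DETECTOR_PRESETS.items()
        if any(term in text for term in terms)
    ]


print(flag_behaviours("Nobody wants you here. Watch your back."))
# ['harassment', 'threats_of_violence']
```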
Risk detection and alert notification
Apply analytical tools, including:
- account behaviour analytics, to track changes in user activity over time
- sentiment analysis, which evaluates tone and emotional content
- linguistic pattern analysis, to detect language patterns that may indicate escalating risk.
The risk detection process involves:
- flagging prohibited activities or interactions that have risk indicators
- quickly filtering data to identify where risk thresholds are met
- establishing a risk score
- linking risk-prone patterns to behavioural changes analysed over time
- establishing links to other profiles with similar risk scores.
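These steps can be combined into a simple scoring pass that weights each observed signal, compares the total against a threshold and raises an alert for human review. The weights, threshold and signal names in the sketch below are illustrative assumptions only and would need calibration against your own data.

```python
# Illustrative sketch: combining behavioural signals into a risk score and
# raising an alert when a threshold is crossed. Weights, threshold and
# signal names are assumptions and would need calibration in practice.
SIGNAL_WEIGHTS = {
    "flagged_keywords": 3.0,      # matches from detector presets
    "negative_sentiment": 1.5,    # output of sentiment analysis
    "activity_spike": 2.0,        # change from baseline account behaviour
    "prior_user_reports": 2.5,    # reports received about the account
}

ALERT_THRESHOLD = 5.0


def risk_score(signals: dict[str, int]) -> float:
    """Weighted sum of observed signal counts."""
    return sum(SIGNAL_WEIGHTS.get(name, 0.0) * count for name, count in signals.items())


def needs_review(signals: dict[str, int]) -> bool:
    return risk_score(signals) >= ALERT_THRESHOLD


observed = {"flagged_keywords": 1, "negative_sentiment": 2, "prior_user_reports": 0}
print(risk_score(observed), needs_review(observed))  # 6.0 True
```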
Environmental and behavioural thresholds
Beyond individual behaviour, consider external and behavioural factors, such as:
- group memberships
- vocabulary and language use
- social connections and interaction strength
- environmental risk profiles
- romantic relationships
- geospatial analysis, locations and movements
- page likes and follows
- interactions with forums
- influence and prominence in networks
- online signal analytics
- events interacted with
- network analysis.
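Several of these factors, such as social connections, interaction strength and influence in networks, can be approximated with basic network analysis. The sketch below ranks accounts by a weighted degree calculated from pairwise interaction counts; it is an illustration using invented data, not a full network-analysis pipeline.

```python
# Illustrative sketch: ranking accounts by interaction strength using a
# weighted degree measure. Data is invented; a real pipeline would use a
# graph library and richer measures such as centrality over time.
from collections import defaultdict

# (account_a, account_b, number_of_interactions)
interactions = [
    ("user_a", "user_b", 12),
    ("user_a", "user_c", 3),
    ("user_b", "user_d", 7),
]

weighted_degree = defaultdict(int)
for a, b, count in interactions:
    weighted_degree[a] += count
    weighted_degree[b] += count

# Accounts ordered by how strongly connected they are in the network
for account, degree in sorted(weighted_degree.items(), key=lambda kv: -kv[1]):
    print(account, degree)
```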
How to carry out risk and impact assessments
Work on risk and impact assessments early, using specified safety criteria to help your organisation reduce online risks and harms. Ensure safety, security and privacy design features are given equal importance in assessment and measurement.
Here are the steps you can take to carry out a safety risk and impact assessment:
- Map safety implications across the service, considering potential misuse scenarios and vulnerabilities.
- Gather information and evidence to understand risks and harms.
- Analyse and assess the risk of harm, accounting for diverse user groups.
- Develop strategies to prevent harms before they occur, reduce risk severity and create clear remediation pathways.
- Implement recommended actions into concrete design and operational changes.
- Document and update risk and impact registers to support transparency and continuous improvement.
- Monitor, evaluate and update to ensure your platform adapts to new threats and user needs.
- Report findings internally and externally to show your commitment to safety and compliance with regulatory or industry standards.
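A risk and impact register can start as a set of structured entries that score each risk by likelihood and severity, record the planned mitigation and set a review date. The fields and the 1 to 5 scales in the sketch below are illustrative assumptions, not a prescribed format.

```python
# Illustrative sketch: a risk register entry scored as likelihood x severity.
# The fields and 1-5 scales are assumptions, not a prescribed format.
from dataclasses import dataclass
from datetime import date


@dataclass
class RiskRegisterEntry:
    description: str
    likelihood: int     # 1 (rare) to 5 (almost certain)
    severity: int       # 1 (minor) to 5 (severe)
    mitigation: str
    owner: str
    next_review: date

    @property
    def rating(self) -> int:
        return self.likelihood * self.severity


entry = RiskRegisterEntry(
    description="Unwanted contact with children through unsolicited direct messages",
    likelihood=3,
    severity=5,
    mitigation="Restrict direct messages to child accounts from unknown adults by default",
    owner="Trust and safety team",
    next_review=date(2026, 6, 1),
)
print(entry.rating)  # 15
```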
Safety considerations and impact assessments
Before carrying out an impact assessment, clearly outline and understand all safety considerations. This leads to decisions and recommendations based on the best interests of users, particularly at-risk and marginalised groups.
Risks and harms
Ensure you understand the typology of harms that may occur and the tactics and techniques used by perpetrators. Learn about online harms and common misuses of online tools and features.
User impacts
Assess and document all impacts on user safety – positive, negative and differential. Impacts could include:
- short, medium and longer-term impacts
- actual and potential impacts
- differential levels of impact among at-risk, vulnerable and/or marginalised groups
- impacts on the target audiences or userbase, and those who may be indirectly impacted
- how distinct or interrelated the safety impacts are, considering whether effects and impacts may increase or compound
- whether the platform’s design, development or deployment caused, contributed to or is directly linked to the safety impacts.
Consequences
Assess the severity of consequences based on:
- the scope, scale and interrelatedness of impacts
- whether impacts can be remedied
- the consequences for individuals and society.
Who should participate in safety assessments
People across the organisation
Create teams with a diverse range of representatives from across your organisation to carry out safety risk and impact assessments. For these teams to succeed:
- share relevant information and intelligence regularly
- use up-to-date, robust and accurate data and evidence
- provide access to national and international online safety principles, frameworks, codes of practice, guidance and alliances.
External stakeholders
Engage a broad range of external stakeholders, in an inclusive and meaningful manner, including:
- at-risk, vulnerable and marginalised groups, or those who can represent them
- those whose safety has already been negatively impacted, including victims
- experts and specialists
- local and regional representatives.
As part of the external engagement process, pay attention to:
- accessibility – follow best practice to ensure that representative voices are heard
- holistic involvement – involve external stakeholders throughout the process
- capacity building – build awareness and knowledge for meaningful participation.
Internal operational guidelines
Internal operational guidelines provide an opportunity to include key online safety information for employees, contractors and the leadership team, detailing how to deal with specific incidents and the procedures that should be followed.
Internal operational guidelines for employees should outline:
- how the service or platform could be misused to harm, harass and abuse others
- company policies and processes for managing harmful activity.
Leadership teams require additional guidance that covers managing employees who:
- violate the company’s acceptable use policy, platform’s terms of service or community guidelines
- don’t correctly follow the operational guidelines
- misuse company resources such as software tools and data to facilitate online harm.
Ensure guidelines are:
- comprehensive and easy to understand
- embedded into daily working practices
- supported by regular training and updates.
Guidance and information
Make the following information available to all employees:
- Corporate policies and procedures such as accountable authority instructions, risk management guides and policies covering compliance and enforcement.
- The code of conduct and its relation to user safety.
- The platform’s terms of service or community standards and guidelines.
- The types of online harms and current techniques and tactics used by abusers.
- Mitigation measures such as the technical features, advice and support available.
- What actions are taken when violations occur.
Leaders and human resources teams may require additional details on:
- role-specific responsibilities and expectations in terms of managing user safety
- access levels to internal tools and user account information based on an employee’s role
- policies on the appropriate use of resources such as systems, hardware and data
- the types of internal safety breaches that may occur, including edge-case examples.
Processes and procedures
Ensure all staff have access to clear safety governance processes and procedures, including:
- safety review cycles and teams involved
- internal and external safety intelligence sharing procedures
- reporting, complaint, appeal and escalation processes.
Leaders and human resources staff should also have access to the policies and procedures for:
- enforcement of privacy, security and safety compliance
- identifying, moderating and reporting illegal or harmful activity carried out by employees or using corporate equipment
- handling employee non-compliance or incorrect/incomplete reporting
- dispute resolution, review and appeal processes for staff misconduct.
Raising awareness of the acceptable use policy
An acceptable use policy outlines the practices and constraints a user must agree to before they use or access a platform or service. Training and awareness of the policy are crucial and should start at the earliest possible stage for all employees, including contractors.
Promote the policy through:
- the hiring process and employment contracts
- induction and ongoing training
- team meetings
- intranet and email updates.
Support staff to understand the policy by:
- writing in accessible and clear language
- using case studies or examples
- providing practical videos or illustrations
- requiring written confirmation from staff
- tracking or monitoring online access to the policy
- including compliance with the policy in performance assessments.
Learn more about acceptable use policies in the module Dealing with illegal and restricted online content.
Last updated: 08/12/2025