How platforms can be misused for online abuse
Understanding how online tools and features can be misused for abuse can help you better design your online platform or service with the safety of your users in mind.
This section explains how different tools and features can be misused to cause harm on online platforms and services, as well as who is most at risk of experiencing online harm.
It also features advice on how to embed safety during the design of your online products, services and environments.
Common misuses of online tools and features
Devices, apps, platforms and services have functions that help users connect with others – however, these tools that keep us connected can also be misused for abuse.
That’s why taking a Safety by Design approach can help put user safety and rights at the centre of the design and development of online products and services.
Misuses of communication and in-app features
User-generated content: Abusive users can create posts, live streams, artificial intelligence (AI) generated content, avatars and more to harass, intimidate and humiliate victims – for example, by generating deepfake images or by pressuring people to participate in live streams and other virtual activities as a means of abuse.
Communication tools: Comments, direct messages and groups chats can become channels for abuse. Sending unwanted nude or sexual content is a form of sexual harassment that can occur via in-app messages, texts, email or device features such as AirDrop or Nearby Share.
Volumetric attacks: This can include dog-piling, pile-ons and brigading. These are high-volume attacks on a person or a group that can sometimes be coordinated and happen across platforms. Examples include mass commenting on a person’s post, or mass reporting of a person’s post or account in a malicious attempt to get it taken down without good cause. A simple detection sketch appears at the end of this section.
Algospeak: This involves changing language to avoid content moderation, usually by substituting emojis, code words and euphemisms for terms that would otherwise be flagged.
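One common countermeasure is to normalise text before matching it against moderation rules, so that simple character swaps and code words resolve back to the terms they disguise. The sketch below is a minimal illustration only – the substitution map, code words and blocklist are hypothetical placeholders, not a real ruleset.

```python
# Minimal sketch: normalising 'algospeak' before moderation matching.
# All mappings and the blocklist are illustrative placeholders; real
# systems rely on much larger, continuously updated rulesets.

SUBSTITUTIONS = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "$": "s", "@": "a"}

# Hypothetical code words mapped to the terms they stand in for.
CODE_WORDS = {"unalive": "kill", "seggs": "sex"}

BLOCKLIST = {"kill", "sex"}  # placeholder terms for illustration

def normalise(text: str) -> str:
    text = "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())
    return " ".join(CODE_WORDS.get(word, word) for word in text.split())

def violates_policy(text: str) -> bool:
    return any(word in BLOCKLIST for word in normalise(text).split())

print(violates_policy("un4live"))  # True: '4' -> 'a' yields 'unalive' -> 'kill'
```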
Learn more about the risks and benefits of different online tools and features.
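The volumetric attacks described above often show up as sudden bursts of reports or comments aimed at one person. A minimal sketch of burst detection follows, assuming a sliding time window; the threshold and window size are illustrative placeholders, and a real system would combine this signal with others before taking action.

```python
# Minimal sketch: flagging possible volumetric attacks (pile-ons, mass
# reporting) by counting events against a single target in a time window.
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 600  # 10-minute window (placeholder value)
THRESHOLD = 50        # events per window before human review (placeholder)

events = defaultdict(deque)  # target_id -> timestamps of recent events

def record_event(target_id: str, now: float | None = None) -> bool:
    """Record a report or comment against a target; return True if the
    volume looks like a coordinated pile-on and should be queued for review."""
    if now is None:
        now = time.time()
    window = events[target_id]
    window.append(now)
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()
    return len(window) >= THRESHOLD
```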
Methods to avoid detection using account types
Anonymous and pseudonymous accounts: Using a different name or avatar can help to protect personal information and allow free expression, but it can also shield people from accountability. Some create fake accounts specifically to target and abuse others without revealing their identity. Learn more about anonymous communication.
Account manipulation: Abusive users may create multiple accounts, temporary profiles or fake/imposter accounts – sometimes in the victim’s name – to cause harm. This makes it more difficult for the targeted user to report an account and for platforms to take action against account holders. After engaging in harmful behaviour, abusive users can quickly change their account, username or profile information, or ‘unmatch’ or ‘unfriend’ the person they’ve targeted to avoid detection.
Private and encrypted spaces: Harmful or abusive content and behaviour can be hidden in private groups or moved from public platforms to private channels, including those with end-to-end encryption (E2EE). Switching between platforms – especially those with weaker moderation – helps avoid detection.
Adapting to changing policies, technologies and detection and moderation systems: Abusive users are known to slightly adapt or tweak text, hashtags and other identifiers that have been picked up by detection and moderation systems, enabling the cycle of abuse to continue. A sketch of one fuzzy-matching countermeasure follows this list.
Recidivism: Even after they have been penalised, abusive users may continue with their harmful behaviour, exploiting loopholes and platform limitations to resume abuse.
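One way to blunt this tweak-and-repost cycle is fuzzy matching: comparing new identifiers against already-flagged ones and catching near misses. The sketch below uses plain Levenshtein edit distance; the flagged tag and distance threshold are hypothetical examples, and production systems typically pair this with human review.

```python
# Minimal sketch: catching slightly tweaked versions of already-flagged
# hashtags with edit distance. The flagged list is a hypothetical example.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

FLAGGED_TAGS = {"examplehatetag"}  # placeholder for tags already actioned

def near_flagged(tag: str, max_distance: int = 2) -> bool:
    tag = tag.lower().lstrip("#")
    return any(edit_distance(tag, flagged) <= max_distance for flagged in FLAGGED_TAGS)

print(near_flagged("#examp1ehatetag"))  # True: one substitution away
```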
Common forms of cyber attacks
Hacking: Often used by abusive users to gain unauthorised access to another user’s device. It can be a tactic to steal a user’s personal data, money or identity, and may exploit saved passwords or devices that were originally shared with permission. This can be done by users with low to high technical skill.
Phishing: Attempts to bait users into doing something like clicking a malicious link, opening an attachment containing a malware payload or entering their credentials into a disguised website. From there, the abusive user can view the person’s credentials or hold them hostage. Gaining this unauthorised access lets the abusive user collect and store sensitive information about the person they have targeted. Spear phishing is a form of phishing tailored to a specific person.
Covert audio or video recordings: Abusive users can record interactions without the other person knowing and without their consent. This could include taking screenshots or screen recordings.
Extracting personally identifiable information: Abusive users scrape identifiable details and information from an account – such as a full name, address and date of birth – and from the content of interactions online.
Obfuscation of an internet protocol (IP) address: Masking or concealing the location of a user in different ways, such as:
- using a different IP address to access another person’s network or device, which can also be used to track another person’s internet activity
- using unsecured public wi-fi hotspots for hacking
- copying public wi-fi networks by using the same name for malicious purposes
- using virtual private networks (VPNs) to create a private network from a public internet connection
- using proxy servers to conceal their identity while engaging in abusive behaviours online.
Remote surveillance: This includes spyware, stalkerware and low-tech approaches.
- Spyware allows users to remotely extract data from other users’ devices. This data can be text messages, stored media, internet activity and geographical locations. Such data can be extracted without the other user knowing it has happened.
- Stalkerware can monitor the physical location and activity of a user. This is a form of dual-use technology, or technology intended for legitimate purposes that can be misused for abusive behaviours. It can be seen in parent-child and employer-employee monitoring applications.
- Use of saved passwords on shared devices to access online accounts without the account owner knowing.
Spambots: These are computer programs that send spam automatically. Spambot activity can also include collecting contact information, creating fake accounts and using stolen accounts.
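Rate limiting is one basic friction against spambots and other high-volume abuse. The sketch below shows a per-account token bucket, assuming illustrative capacity and refill values; real defences layer rate limits with behavioural, network and account-age signals.

```python
# Minimal sketch: a per-account token bucket to slow spambots.
# Capacity and refill rate are illustrative placeholders.
import time

class TokenBucket:
    def __init__(self, capacity: float = 5.0, refill_per_sec: float = 0.1):
        self.capacity = capacity
        self.tokens = capacity       # start with a full bucket
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if this action is within the allowed rate."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket()
allowed = sum(bucket.allow() for _ in range(20))
print(f"{allowed} of 20 rapid-fire messages allowed")  # roughly the first 5 pass
```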
Find out how internal processes can detect and respond to abuse.
Negative online behaviours
The 2023 Online Safety Issues Survey found that 5.5% of respondents had done something negative online to others in the past 12 months. According to a research report from ANROWS, there are a variety of motivations that can prompt someone to carry out online harm. These include:
- anger
- to exert power, pressure and control
- retaliation/revenge
- social power/status
- entertainment
- to hurt, humiliate, frighten or annoy.
It is possible that people who cause online harm have also experienced online harm themselves. In some cases, it may not be clear who has been targeted and who has been abusive.
For example, eSafety has received complaints about peer group cyberbullying and adult cyber abuse where groups of people engage in chat groups or on hate pages where they are simultaneously perpetrators and victims of the same kinds of abuse.
eSafety’s research has also explored the motivations behind image-based abuse. It showed that perpetrators were typically motivated by asserting power and control – either to punish or embarrass the person targeted, or to gain social status at that person’s expense.
Learn about how community guidelines, tools and features can help prevent online harm.
Threats, manipulation and deception
Child grooming: When abusers build up trust with children under the age of 18 so they are more likely to respond to requests that can lead to sexual harm. In online grooming, adults may also pretend to be another child to deceive and gain access to young people for exploitation.
Image-based abuse (or ‘revenge porn’): Sharing, or threatening to share, an intimate image or video without the consent of the person shown. An ‘intimate’ image or video could show, or appear to show, a person who is nude or partly naked, doing a private activity (such as using the toilet or being sexual), or without clothing of religious or cultural significance that they would normally wear in public (such as a hijab or turban).
Introducing people to harmful content: Exposing someone to unwanted harmful or abusive content, which can take many forms – including age-inappropriate content, self-harm material, or terrorist and violent extremist material. This type of content can be shared in the process of grooming or manipulating someone as part of a strategy of exploitation, radicalisation or abuse. Exposure to harmful online content can normalise dangerous and unhealthy behaviour for the person being targeted.
Targeting friends and family: Threatening to abuse or reveal personal information to the friends or family of a victim.
Technology-facilitated abuse (or ‘tech-based abuse’): Harmful actions carried out online or through digital technology, in the context of domestic, family and sexual violence. These include harassment, making threats, stalking and coercive or controlling behaviour.
Tracking devices, applications and spyware: Abusive users use these as tools to monitor, stalk and control other people.
Explore how moderation and enforcement can address harmful behaviours.
Who is at risk of online harm?
A range of factors may increase a person’s risk of online harm, such as:
- low digital literacy
- lack of digital access
- mental or physical illness
- isolation
- cognitive development issues
- anti-social behaviour
- having previously been a target of online abuse, or having been the abusive user.
A person’s relationships – including their relationship with someone who causes them online harm – may also influence their risk. This includes whether they experience domestic, family or sexual violence (either as an adult or child), parental neglect or elder abuse, or live in out-of-home care.
Whatever is happening in the victim’s offline world can impact how they experience online harm.
The types of online harms
Online risks and harms are complex and multi-layered, and types of online harm are often not mutually exclusive. An incident may involve multiple types of online harm, and while these can be categorised, the categories often overlap.
The World Economic Forum’s typology of online harms outlines some of the key categories that can be used to identify online harms, including:
- threats to personal and community safety
- harm to health and wellbeing
- hate and discrimination
- violation of dignity
- invasion of privacy
- deception and manipulation
- threats to participation, free expression or democracy.
Users may come across illegal and restricted online content on your service. This may include images and videos showing the sexual abuse of children or acts of terrorism, through to content which should not be accessed by children, such as simulated sexual activity, detailed nudity or high impact violence.
Learn more about how you can take a proactive approach to preventing, detecting and moderating illegal and restricted online content.
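A common proactive measure for known illegal material is hash matching: comparing the hash of an uploaded file against lists of hashes of previously verified material supplied by trusted bodies. Production systems generally use perceptual hashes, which survive resizing and re-encoding; the sketch below uses SHA-256 purely to show the matching flow, with an empty placeholder hash list.

```python
# Minimal sketch of hash-list matching for known illegal content.
# SHA-256 is used only to illustrate the flow - real systems use
# perceptual hashes from trusted sources, since a cryptographic hash
# changes completely if a single pixel changes.
import hashlib

KNOWN_ILLEGAL_HASHES: set[str] = set()  # would be loaded from a trusted source

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> str:
    """Return a moderation decision for an uploaded file."""
    if sha256_of(data) in KNOWN_ILLEGAL_HASHES:
        return "block_and_report"  # e.g. block, preserve evidence, escalate
    return "allow_pending_other_checks"
```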
The impacts of online harms
Online harms can be extremely serious, and the effects of the experience can be severe and long-lasting. People who have experienced online harm may suffer negative impacts to their:
- personal safety (fear of psychological and/or physical violence)
- health and wellbeing (anxiety, aggression, depression, self-destructive behaviour, physical health problems, intimate relationship difficulties, re-victimisation, disassociation, loss of self-esteem and confidence, withdrawal from social activities, lack of trust, substance abuse, ongoing trauma, self-harm and suicide)
- emotional wellbeing and social life (annoyance, anger, humiliation, shame, guilt, self-blame, deception, social exclusion, betrayal and/or fear)
- financial security (ability to work and earn an income, increased financial vulnerability, and restricted access to or knowledge of personal finances).
eSafety’s online hate research found that 54% of people who personally experienced online hate reported a negative impact from their experience – most commonly mental or emotional stress, relationship problems or reputational damage.
eSafety’s research with victims of image-based abuse (non-consensual sharing of intimate images) showed that most victims reported negative impacts of their experience.
- Two-thirds of victims felt annoyed (65%) or angry (64%) with the perpetrator, while 55% felt humiliated and 40% depressed.
- Four in ten victims reported that their most recent experience of image-based abuse negatively impacted their self-esteem (42%) or mental health (41%), while one-third said it impacted their physical wellbeing (33%).
Understanding intersectionality and at-risk communities
Inequality and disrespect often underpin abuse. Discrimination such as sexism, racism, homophobia, religious discrimination, ableism and ageism can be amplified online and the impacts on the person experiencing the abuse can be long-lasting.
People and communities who are most at risk may also be affected by multiple and intersecting forms of discrimination and inequality.
eSafety’s research highlights that certain groups face specific and disproportionate levels of abuse in online spaces:
- Adults surveyed who identified as sexually diverse, Aboriginal and/or Torres Strait Islander, with disability, and/or as linguistically diverse were more likely both to see (41%) and to personally experience (24%) online hate.
- Adults from targeted groups are more likely than adults who don’t belong to targeted groups to experience online hate based on discrimination or bias related to at least one aspect of their identity.
- Aboriginal and/or Torres Strait Islander adults were more likely to be targeted based on their cultural identity (34% versus 12%) or ethnicity (26% versus 15%).
- Sexually diverse adults were more likely to be targeted based on their sexual orientation (58% versus 7%).
- Linguistically diverse adults were more likely to be targeted based on their race, nationality, religion or cultural identity.
Women also experience a disproportionate prevalence of gendered online hate compared to other demographics, being 1.6 times more likely to experience abuse on the basis of their gender.
Understanding intersectionality means you can take a human-centred design approach to your digital products, services and environments.
Human-centred design and online safety
Putting user safety at the centre of the design and development of your online products and services means you’re taking a Safety by Design approach.
When you combine this with human-centred design – where you empathise with the people you’re designing for – you can develop solutions that meet the needs of all users, including those who are most vulnerable or at risk from online harm.
Remember: human-centred design is an ongoing practice that will continue to evolve with user feedback.
Learn more about the importance of consulting with others and designing products with your users in mind.
One place where you can start using a human-centred design approach is by reviewing the reporting functions on your platform. See ‘Empowering users’ for advice on designing reporting functions.
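As a starting point, it can help to treat a user report as structured data with a visible lifecycle, so every report is acknowledged, categorised and tracked to an outcome. The sketch below is a minimal illustration; the field names, categories and statuses are hypothetical, not a prescribed schema.

```python
# Minimal sketch of a user report record with a simple lifecycle.
# Field names, categories and statuses are hypothetical examples.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AbuseReport:
    reporter_id: str
    reported_content_id: str
    category: str          # e.g. "harassment", "image_based_abuse"
    description: str = ""  # free-text context from the reporter
    status: str = "open"   # open -> under_review -> actioned / closed
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def acknowledge(self) -> str:
        """Confirmation shown to the reporter so reports don't vanish silently."""
        return f"Your report ({self.category}) has been received and queued for review."
```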
Operational guide: Advice to counter online child sexual exploitation
Following the Five Countries Ministerial meeting in July 2019, a working group of officials from Australia, Canada, New Zealand, the United Kingdom and the United States developed a set of Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse (CSEA). This was carried out in consultation with some technology companies.
These principles established a baseline framework for companies that provide online services to combat the proliferation of online child sexual exploitation. They cover the following themes:
- prevent new and known child sexual abuse material
- target online grooming and preparatory behaviour
- target live streaming
- prevent searches of child sexual abuse material
- adopt a specialised online safety approach for children
- consider victim/survivor-led mechanisms
- collaborate and respond to evolving threats.
Following the release of the voluntary principles, six companies developed a ‘by industry, for industry’ guide to assist tech companies in supporting the voluntary principles.
The guide provides an overview of operational, policy and other practices that may be relevant, with suggestions for practical action based on the collective experiences of the companies involved in both developing the principles and the guide.
eSafety is developing a specific guide to combating CSEA, coming soon. It will include practical resources to help organisations of all sizes place child safety and rights at the centre of their design and development processes.
Implementation guide: Advice to counter technology-facilitated gender-based violence
In domestic, family and sexual violence situations, technology-facilitated abuse (also known as ‘tech-based abuse’) is highly prevalent.
According to an eSafety literature scan, people who are at greater risk of experiencing technology-facilitated abuse as part of family, domestic and sexual violence include:
- women and girls
- Aboriginal and Torres Strait Islander women
- women from culturally and linguistically diverse backgrounds
- women with disability
- LGBTIQ+ people
- women in regional, rural and remote areas.
eSafety has developed a specific Safety by Design guide that examines how technology can be weaponised against women and girls and includes actionable steps companies can take to mitigate these risks.
It provides practical steps that industry can take to prevent and reduce harms inflicted on women through technology, such as hate speech and harassment, deepfake pornography and stalking.
More Safety by Design foundations
Continue to Empowering users to stay safe online to explore how tools and design features can lead to safer online experiences.
Last updated: 08/12/2025