Online abuse can come in many forms, and we have seen it all at eSafety in our six-year existence.
One question that crops up frequently is the difference between defamation posted online and serious online abuse.
And it’s not always clear-cut, or easy to digest for those without legal training. Below we talk about some of the differences, and how eSafety’s new legislation will work in the new year when it comes to addressing the online abuse of adults.
The new Online Safety Act
Reforms contained in the Online Safety Act give the eSafety Commissioner important new powers to help more Australians who are experiencing online harm.
One of the key elements of the new act is that from 23 January 2022, for the first time anywhere in the world, eSafety will begin operating an Adult Cyber Abuse Scheme. This will give Australian adults who are the victims of seriously harmful online abuse somewhere to turn when the platforms fail to act on reports about it. It expands eSafety's suite of powers, which currently include a Cyberbullying Scheme for young Australians under 18 and an Image-Based Abuse Scheme that deals with the non-consensual sharing of intimate images of an Australian of any age.
Adults who are subjected to serious online abuse should first report it to the platform. From 23 January 2022, if the platform does not act on the report, the abuse can be reported to eSafety. We will act as a safety net if the report meets the high threshold of 'seriously harmful' abuse, and issue a notice to the platform to get the content removed.
eSafety will also have the ability to issue significant penalties for failure to comply with a notice to remove adult cyber abuse material.
What constitutes ‘serious abuse’ in the new Adult Cyber Abuse Scheme?
The threshold is high, with two parts. The abuse must be intended to cause serious harm – like threats causing fear – AND be menacing, harassing or offensive in all the circumstances. Harm will generally be ‘serious’ when it endangers (or could endanger) a person’s life, or could have some form of lasting effect on a person.
Somebody finding something offensive or disagreeable is not enough; the content must also be intended to cause serious harm to that individual.
There is an expectation that adults have a higher level of resilience than children, and as such the threshold is much higher than that of eSafety's Cyberbullying Scheme for under 18s.
We recognise that the threshold is high and there may be instances where we can't take regulatory action. Every situation is unique and eSafety is committed to helping all Australians. Even if a matter does not meet the threshold, we will offer information and guidance to ensure that person feels supported and is aware of other options they might be able to take.
What is defamation?
Defamation is a civil action, determined by the courts. It is designed to balance the right to freedom of speech with protecting a person’s reputation against harm.
Defamation is determined based on its own body of law and thresholds, which are very different from the Adult Cyber Abuse Scheme thresholds outlined above.
In Australia, defamation law is determined by state and territory legislation. However, Model Defamation Provisions (MDP) were agreed in 2005 and each state and territory enacted legislation to implement the provisions, meaning there is a nationally consistent approach where possible. A review of the MDPs commenced in 2019 and is ongoing.
eSafety will continue to ensure its regulatory activities, including the Adult Cyber Abuse Scheme, coexist with defamation laws within Australia.
eSafety’s new legislation and defamation laws serve different purposes. Our legislation is designed to provide 'harms minimisation', by removing harmful content. Defamation laws are about compensation for damage caused to reputations.
The Adult Cyber Abuse Scheme is designed as a safety net to ensure the prompt removal of material which is intended to cause serious harm. Purely reputational matters are not part of the scheme.
eSafety’s scheme is not intended to arbitrate the truth or falsity of statements made on the internet or whether those statements are or are not defamatory. As it stands now, complex defamation cases in the courts can be drawn out for months of deliberation.
However, in some cases material posted which might be defamatory could ALSO meet the threshold of adult cyber abuse, if the intention to cause serious physical or psychological harm can be established and it is determined that the material was menacing, harassing or offensive.
In those circumstances, from January 2022, an Australian could come to eSafety to have the content removed, and could also elect to take defamation action.
eSafety’s additional new powers: obtaining information about anonymous accounts
Under the new Online Safety Act, from January 2022, eSafety will have stronger information-gathering powers to obtain identity information, including basic subscriber information (BSI), for anonymous accounts. The usefulness of this data will depend on the kinds of BSI the platform collects, which varies – and eSafety will use it as the basis for further investigation.
However, those powers need to be connected to the Act we administer. This means eSafety would need to be managing a complaint or undertaking an investigation into image-based abuse, cyberbullying, adult cyber abuse or illegal online content in order to use these powers. Anything falling outside of that (for example, defamation or social engineering) won’t qualify. Identity information collected through these new powers could not be shared with complainants to reveal who is behind a post, but could be used in any enforcement action we take against the person posting the content.
We will be able to use powers to obtain user information under the new Act if we believe the information is relevant to the operation of the Act, as well as more extensive investigative powers (such as obtaining documents) to support an investigation. The powers need to have a sufficient connection with the Act, which means that harms falling outside our regulatory remit will not qualify.
Our information-gathering and investigative powers do no more than open up additional lines of enquiry for us to pursue – and may ultimately lead to no useful information. It is not uncommon for abusers online to cover their tracks through multiple overlapping fake accounts, pseudonyms and other techniques that hide their identity.
Updated: 13 October 2021