Online Safety Act reforms

For too long, big technology companies the world over have been given a free pass, allowing them to operate in ways that benefit their bottom line rather than the safety and wellbeing of their users. 

There have been persistent failures by Big Tech in consistently and transparently enforcing their own policies and standards, often turning a blind eye to violent online threats, hate speech, racism, misogyny, misinformation, political polarisation and targeted abuse.

This failure has led to an internet more akin to a dystopian nightmare than the utopian ideal many of the founders of these platforms likely envisaged, as we watch the fragile fabric of our society start to fray.

The big platforms have known about these harms for decades, and with almost limitless financial and intellectual capital at their disposal, they have always possessed the means to tackle them.

But up until now they have mostly ignored their civic responsibility online. Sure, they tinker around the edges launching new whizbang features with much fanfare, but they’ve never really knuckled down and taken safety seriously.

Six years ago, Australia effectively drew a line in the sand and created the eSafety Commissioner, the first government online safety regulator in the world, with the goal of not only protecting Australian citizens from online harms, but also addressing the huge power imbalance that existed between these mostly US-owned and operated tech behemoths and ordinary Australians.

And now as countries like Canada, the UK, Ireland and even the US begin to take their first tentative steps towards online regulation, Australia is poised to again lead the world with new reforms to our Online Safety Act that will grant eSafety a suite of new powers to better protect all Australians.

This week, the Australian Senate will likely debate the merits of these new reforms, which I believe have never been more necessary. 

The Covid-19 pandemic has not only had profound effects on our physical lives, but also supercharged the threats we all face online. Last year, we saw marked increases in all forms of online abuse reported to us and these elevated levels have shown no signs of abating in the first half of this year.

We’re also seeing abuse surface in new places and in disconcerting new ways. The idea that it only happens on social media platforms is already an anachronism. 

This new legislation would grant us expanded powers to protect Australians on all platforms where harm occurs, including video gaming platforms, dating websites, and private messaging apps.

In fact, a third of our youth-based cyberbullying reports now come from these increasingly encrypted private messaging services, which have also become the primary vehicle for the sharing of child sexual abuse material and image-based abuse.

We have also seen huge increases in targeted, high-volume and increasingly vitriolic harassment directed at Australian adults.

For the first time anywhere in the world, eSafety will formally begin operating a new adult cyber abuse scheme to finally give Australian adults who are the victims of seriously harmful online abuse somewhere to turn when the platforms fail to act.

And there will be significant financial penalties for perpetrators, so trolls will no longer feel safe to perpetrate abuse and online hate with impunity.

But perhaps the most important victim-focused reform is a halving of the time tech companies have to take down harmful content, from 48 hours to 24.

We know most victims of online abuse don’t actually want someone prosecuted, or to appear as a witness in court. They simply want the content removed as quickly as possible, which goes a long way towards relieving their mental and emotional distress.

Interestingly, it’s this reform that the big tech companies have been quite vocal in opposing, claiming the shortened timeframe is not only unrealistic and unworkable, but will leave them with little choice but to overzealously block content.

As eSafety Commissioner, I’ve seen the platforms remove abusive content in as little as 12 minutes, so, in my view, with advances in artificial intelligence and investments in “follow the sun” content moderation, 24 hours really isn’t that much of a stretch.

The tech industry has also been lobbying policymakers for so-called “safe harbour” provisions that would effectively shelter them from the worst of this approaching regulatory storm. 

But if we grant them yet another free pass, I fear it will come at the cost of the future safety and wellbeing of all Australians online.