In 2014, the New Zealand and Australian television presenter Charlotte Dawson tragically took her own life. Dawson, who had been open and public about her mental health struggles, was mercilessly trolled on Twitter. Her stricken friends channelled their grief by starting a petition to the Commonwealth Government calling for laws against cyberbullying.
That petition bore fruit a year later. Paul Fletcher, then a parliamentary secretary, and now Minister for Communications, Cyber Safety and the Arts, pioneered the establishment of the world’s first online safety regulator—incorporating the first legislated take-down scheme for serious cyberbullying targeting a child.
As it celebrates its 5th birthday this week, eSafety’s story has been one of rapid growth, just like any other healthy 5-year-old.
Along with helping thousands of children have cyberbullying material removed from the internet, we have assisted thousands of Australians to have nude images or intimate videos that were shared online without their consent taken down. And we have been able to respond to more than 40,000 reports of online child sexual abuse material by having it removed from the internet through international, collaborative efforts.
You can find a handy summary of highlights from our first five years here.
Meanwhile a potent new form of online harm has emerged that led the Government to further boost our powers. During his terror attack on two mosques in Christchurch last year, the perpetrator live-streamed his atrocities on Facebook, as well as posting a vile manifesto online. In response, the Government passed legislation so that eSafety can now require websites to remove certain kinds of terrorist content, and even, in any repeat of an online crisis such as occurred last March, block access to sites that host such damaging material.
The Christchurch example shows the ever-evolving nature of internet harm and how profoundly the risks have changed during eSafety’s first 5 years. In internet terms, 5 years is the equivalent of a human generation. This is why I often compare our efforts in countering online safety threats to a game of whack-a-mole.
In 2015, we simply wouldn’t have imagined a terror attack could be posted live on Facebook, much less that one could be posted live on a gaming site, as occurred only a few months after Christchurch when another terrorist posted live video of his attack on a synagogue in Halle, Germany, on the gaming platform Twitch.
Moreover, 5 years ago iPhones with high-definition cameras and advanced streaming capability weren’t routinely being placed in the hands of pre-schoolers, leaving them vulnerable to exposure to confronting content and contact on seemingly benign, fun platforms such as Roblox, TikTok and Snapchat.
And now add to this roiling threat landscape a pandemic that has forced the world to transfer so much of its social, educational and economic activity to the internet. There is no question now that the internet and smartphones have become “essential utilities.”
To be fair, the major platforms have evolved from their position in 2015, when user safety was a footnote. Facebook, Google and other companies have belatedly realised the threat to their bottom line in the reputational and revenue damage that follows a major safety transgression. They are finally investing in both human and AI systems to block harmful content and altering other policies and processes to better meet the threats.
Is it enough? Not remotely. As the meteoric rise of Zoom during the pandemic has shown, we are still in what I call the “wash-rinse-repeat” cycle, in which a new product is massively scaled up without any of the safety systems contemplated (or needed) to protect its millions of new clients. TikTok too is facing a hyper-growth spurt but, like a gangly teenager, the maturing of its safety systems has yet to catch up.
Until the tech companies adopt what we call safety-by-design – understanding the risks and mitigating them by building in safety protections at the front end – there will continue to be online train wrecks with the safety fixes retrofitted on after the damage has been done.
That is why, as I now look down the road at the coming 5 years of online safety regulation, I want to make sure we get ahead of these issues.
We will press ahead with the next phase of safety-by-design, engaging the tertiary education sector, as well as the investment and venture capital community (who are often the “grown-ups in the room” in the early stages of a new online product).
We will increase our international collaborative efforts, in recognition that the internet knows no national borders, and that multinational cooperation is the only way to disrupt networks of online child sexual abuse. By the time we celebrate our 10th birthday, I have little doubt we will be working in close cooperation with a network of online safety regulators around the world.
And we will continue our outreach and research efforts, including greater attention to emerging technical trends so that we can anticipate new threats. Our online safety hub, the most advanced in the world, will continue to offer support to parents, educators, seniors and young people, as well as tailored resources for those in at-risk communities, including women, Aboriginal and Torres Strait Islander people, and members of the LGBTIQ+ community.
We must stay ahead of the curve. Otherwise, in particular, I fear for our kids.
I fear that the connected devices we provide our children – not just phones and tablets, but dolls and toys and other products – will leave them increasingly vulnerable to a range of nasties, from scammers to bullies and even paedophiles.
I fear that toxic online behaviours, such as bullying, abuse, and casual sexting will become far more ingrained – especially given how well we adults model these poor behaviours for our kids. We see every day how these more normalised behaviours can go wrong, with devastating impacts.
I also fear that the tsunami of hard-core pornography that floods the internet will create a generation who think that violence and domination are normal ways to express sexual intimacy. I fear for my own kids too and sometimes feel paralysed about how to protect them, knowing all that I know.
Sure, there are promising technical assists, such as age-verification for pornography sites, potentially on the horizon. But as far as children are concerned, no technology, and no government agency, however dedicated, can function as the principal barrier between them and the harms I’ve described.
That barrier is us, their parents. The best filter we can give our children is the one between their ears. We need to become just as involved in our kids’ online lives as we are in their everyday lives, and give them the support and critical reasoning skills they need to navigate this complex online world.
You wouldn’t drop your six-year-old off at the Pitt Street Mall, or the Bourke Street Mall, at night and tell them to wander freely.
Then why on earth would you let them roam, unattended and unprotected, across the chaotic badlands of the internet?