In the past three years we’ve watched a global pandemic claim millions of lives, harm hundreds of millions more, and become one of the defining issues of our time. However, something remarkable happened along the way.
In the pursuit of a solution, the pandemic brought together the best minds, years of research, innovation, vast amounts of funding and the will to achieve incredible results in record time. And within a year, we had vaccines. Vaccines that continue to save lives.
The way the pandemic galvanized the scientific community and governments around the world was unprecedented. It spurred global cooperation for vaccine research and distribution – without compromising safety.
The only delay was the need to test the vaccines and secure government approvals. We had to be sure the vaccines were safe before we put them into our bodies.
That protection – that guarantee – had to be built in. It was a perfect example of safety by design.
30 Years and Counting
So, is it unreasonable to ask the technology industry to do the same?
After almost 30 years, should we not expect its best minds, years of research, innovation and money to find a way to make the internet safer for children? All that’s missing is the will.
Make no mistake: technology companies have always known how people turn online platforms into weapons that generate abuse and cause real-world harm. I know – I joined the ranks of the tech industry in 1995, a time I refer to as tech policy ground zero.
The technological weapons aimed at our children now are more dangerous than they’ve ever been. Today, pedophiles and other predators exploit popular messaging apps, such as Skype, to livestream child sexual abuse on demand and to commit sexual extortion – with insufficient efforts by the platforms to prevent such devastating misuse.
If the owners and creators of the devices, platforms and services that enable people to create and distribute this tsunami of child sexual abuse material are not taking adequate steps to halt the flow at its source – on the platform and prior to posting – then who will?
The internet started going mainstream in the mid-1990s. And yet here we are, a generation later, still talking about how to solve the most fundamental safety issues. That’s because the so-called ‘solutions’ from the big tech companies have been no more than aspirin and band-aids.
What we need now is the equivalent of a vaccine.
COVID-19 mitigation measures such as masks, sanitising and social distancing have parallels in the online world, with things like parental controls, honest conversations with our children, and managing their screen time. Each of these measures provides some level of protection.
However, in the fight against COVID, nothing has been more effective than the systemic response our bodies generate from a vaccine. We need the same jolt for our technology ecosystem. And building safety into our technology – just like providing that systemic protection of vaccines – is the most powerful thing we can do.
We Have Developed a ‘Vaccine’
The good news is we already have the vaccine. We have the tools that allow companies to build safety into their apps and online platforms. We call it Safety by Design. Safety by Design is not just an ‘initiative’, a ‘concept’ or a ‘principles-based framework’. It is all these things and much, much more.

The organisation I lead, Australia’s eSafety Commissioner, is the world’s first dedicated independent online safety regulator. We spent four years working with representatives from industry, NGOs, advocacy groups and expert advisors. We wanted to convince the online sector they could change the ethos of product design, development and deployment by making user safety and wellbeing core considerations.
Our aim was to create resources that make Safety by Design meaningful, actionable, achievable and effective. We wanted to show that it’s not only good for people, but also good for business.
The result is a set of interactive risk assessment tools, tailored for both start-ups and established enterprises, that produce the equivalent of a safety impact assessment.
So far, these tools have been accessed in more than 46 countries, helping companies of all sizes and structures to develop safe products.
But the only way we will make the online world safer is for the platforms themselves to become part of the community developing protections, the same way our scientific community developed vaccines.
The ‘Metaverse’ Looms Large
Of course, this becomes imperative as we hurtle towards more digital complexity with immersive technologies, deepfakes, cryptocurrencies, blockchain and distributed computing – none of which presently offer any workable solutions to remediate personal harms.
At eSafety, our own data and direct experience have taught us there is no time to lose. We know, for example, the threat vectors are constantly changing. We know digital first responders are being flooded with sexual extortion that targets young people. And we know groomers are coercing children to self-produce material that exploits them sexually – on the same apps, ISPs and platforms that are pushing back on regulation.
Predators are using search engines to seek out new victims. They are exploiting dubious apps like Kik or Omegle or increasingly sharing child sexual abuse material on encrypted apps like Wickr, Signal or Telegram – where safety by design is anathema.
We now know that we need vaccines to permeate every corner of the world to achieve a suitable level of global immunity and to prevent the proliferation of virulent new strains of COVID. The same holds true in the online world. No one is truly safe until every platform, service and app is inoculated.
Disconnect Between Words and Actions
Many tech CEOs and industry leaders say their organisations are leaning into regulation. They say they are doing everything they can to prevent harm to children. However, the actions of these tech companies on the digital regulation front line reveal a different story.
Every segment of the tech sector – from ISPs to consumer apps, from data hosting providers to messaging services – has pushed back against taking proactive steps that require them to do more than the status quo.
Meanwhile, children are using devices, platforms and services to create, host and distribute extremely damaging sexualised material that may injure, haunt or traumatise them for the rest of their lives.
This is not a potential harm or a future threat. It is happening now. At scale.
The Power of Global Action
eSafety has been regulating the tech sector to identify, remove and limit the spread of child sexual abuse material for seven years.
In Australia, we’ve just reformed our legislation so we can take the next steps to test whether the technology industry is deploying safety by design in meaningful ways.
With the progress on the UK’s Online Safety Bill and Europe’s Digital Services Act, we can see the groundswell for a more collaborative, global approach to rectifying the years of neglect by technology companies that host user-generated content or allow people to communicate on digital platforms.
The future of online safety is a world where democratic governments share a duty of care with the tech industry. For this to happen, we need to work together across boundaries and borders – just as the global scientific community did to develop vaccines.
No-one is Safe Until Everyone is Safe
The tech companies believe online safety regulation is not a foregone conclusion. So, governments and civil society everywhere need to recognise this is still a fight. And it’s worth fighting for because at stake is the sanctity of childhood and the dignity of our children.
Tech companies can and must do more. If we can target consumers with precision advertising, we can target illegal content and conduct that seriously harms children across the globe.
Enough with the platitudes. We have run out of patience. We need to treat the scourge of online child sexual abuse material like the global pandemic it is. We must send the strongest message to the industries providing us with online interactivity: You have fundamental responsibilities, and it is way past time you started to meet them.
There is no question we live in challenging times. Our politics are polarized. The great nations of the world view each other with fear and distrust. The cost of living in most countries is rising, while stock markets are falling. So, it’s natural to look inward, retreat, hunker down and be insular.
However, the mistake we cannot make is to be so insular that we don’t see the internet as a global force without boundaries – and the metaverse and Web 3.0 as the next version of the internet.
Yes, we must protect our own citizens, especially our children. But we’ve also got to protect the children of the world. And, just as we have learned with COVID, no one is safe until everyone is safe.
Julie Inman Grant is Australia's eSafety Commissioner and a board member of the WePROTECT Global Alliance. This blogpost was first published in Tech Policy Press.