This is the text of an article by eSafety Commissioner Julie Inman Grant that appeared in The Age on 23 April 2020.
Breaking up, they say, is hard to do. But in the tech sector, scaling up can be worse.
As COVID-19 changed the way the world works, learns and plays, Zoom rapidly became the darling of the internet. Between December 2019 and last month, the communications software company grew its daily users roughly twenty-fold, to about 200 million.
And then, as if on cue, came “Zoombombing.”
Perhaps you’d welcome the comedian Hamish Blake gate-crashing your marketing meeting. (And to be fair to Hamish, he actively solicits invitations to the meetings he Zoombombs.)
On the other hand, you’d be appalled if a random online troll inserted pornography into your child’s Year Three maths class, uploaded a terrorist beheading video to your morning news conference, or unleashed a barrage of racist hate onto your prayer group.
In response to the persistent wave of online harassment and harm plaguing the platform, Zoom CEO Eric Yuan admitted, “I’ve never thought about the threat of online harassment seriously.” He pledged to make the app and software safer, even at the cost of usability, by “transforming the business to a privacy- and security-first mentality.”
Too little, too late.
In an industry whose guiding ethos is “Move fast and break things,” it usually takes some kind of tech-wreck before the big online companies will act in the interest of their users’ safety. They seem to need a real and present threat to their reputation, revenues or regulatory comfort zone before they will take user safety seriously.
And while the larger, established platforms might be able to invest in more robust processes and technology protections, as Facebook did in response to the Christchurch terrorist’s live-streamed atrocities, a mid-sized enterprise like Zoom finds itself repeatedly wrong-footed.
At eSafety, the harms we deal with every day are not inherent to information technology: they are the exercise of human malice. They tend to be the result of people weaponising technology against other individuals, with the intent to harm or to profit. We call this “social engineering,” which in the online context means exploiting human psychology to manipulate or encourage certain behaviours online.
COVID-19 has triggered a spike in online social engineering. We’ve seen scams intended to defraud socially isolated seniors of their hard-earned superannuation; hijacked profile and imposter scams; and fake news and misinformation about coronavirus cures and vaccines. Just this past week, we have seen a sharp increase in reports of a resurgent sextortion scheme that threatens to release the intimate videos you’ve (supposedly) captured on your webcam onto the internet if you don’t make a Bitcoin payment.
In some areas of internet harm, eSafety is seeing increases in reports of more than 300 per cent over the average.
If there is any silver lining to this darkening cloud, it is that these online harms are preventable, through education and awareness, and by honing the critical reasoning skills of those most vulnerable to this predatory online behaviour. The primary lesson we teach is that people should question every unsolicited email or approach they receive online: Do I know who this person or institution is? Can I verify this? Are they trying to build intimacy too fast? Are they making too many excuses?
A range of online safety materials is available at esafety.gov.au to help you manage these risks. And when things do go wrong, you can always collect evidence and report to us, to Scamwatch (a website run by the Australian Competition and Consumer Commission) or to your local police.
But in the long term, the best preventative measure involves shifting the burden for online safety back onto the platforms themselves. We call this “Safety by Design.” It means that online harms are anticipated from the beginning. Protections are built in, and potential misuse is engineered out.
Safety by Design provides tech companies with a road-map for developing the safety and security features users deserve, without undermining those users’ experience of the app or software. It can prevent online mishaps such as Zoombombing, and the resulting loss of reputation and revenue.
For Zoom, and for whatever zoombabies are in gestation right now, that surely beats becoming the next cautionary tech-tale in the age of COVID-19.