Christchurch shifted the online world on its axis

This month marked the first anniversary of the Christchurch terrorist attack. 

On 15 March 2019, an extreme right-wing terrorist murdered 51 people attending Friday prayers at the Al-Noor Mosque and Linwood Islamic Centre, injuring another 50. The dead included fathers and their sons, children and teens, leaders in the community. The world grieved with New Zealand as the nation struggled to understand the worst violence in its history. 

Apart from its appalling scale, this act of terror was distinguished by another feature: it was an attack planned with the internet as an integral component. In the hours prior, the attacker posted a hate-filled manifesto to a notorious imageboard before livestreaming the first and most deadly 17 minutes of his assault via Facebook. 

While the livestream had a relatively small audience, the recording of the attack went viral. Within days, Facebook had removed 1.5 million uploads of the attack video, and Twitter hundreds of thousands more. At least 800 different versions were detected by major platforms.

A targeted response

The Australian Government responded quickly to the virality of the attack video. Amendments to the Commonwealth Criminal Code created offence provisions targeting the spread of perpetrator-produced material depicting violent acts of terror, murder and attempted murder, rape and kidnapping — ‘abhorrent violent material’ (AVM). The laws provide the eSafety Commissioner with the power to notify content and hosting services about AVM on their platforms. A failure to remove AVM notified by the eSafety Commissioner potentially exposes individuals and companies to criminal prosecution by Commonwealth law enforcement authorities. 

In addition to these amendments, a taskforce was convened by the Prime Minister to examine the use of the internet by terrorists and violent extremists. Comprising law enforcement, government and industry representatives, the taskforce published its report in June 2019. Among the report’s 30 recommendations were measures to enhance transparency reporting by platforms, to create a 24/7 government response capability, and for the eSafety Commissioner to direct Australian internet service providers (ISPs) to block terrorist and violent extremist material in limited circumstances. 

Neither set of powers is one the eSafety Commissioner employs lightly. To date, we have dealt with around 700 public complaints about content meeting the definition of AVM. The vast majority of these concern the penetrative sexual assault of children – that is, the rape of children – and we have largely managed these through our Online Content Scheme and our longstanding membership of the global INHOPE network. Consisting of 45 internet hotlines around the world, INHOPE strives to take down child sexual abuse material at its source.  

Rather than ‘go to war with the internet’, we have used our discretion to issue AVM notices to a limited number of content and hosting services. These have focused on the very worst content: 18 notices targeting 10 items of AVM. Seven of these items relate to the Christchurch attack video and the video recorded by a terrorist during his murderous spree in Halle, Germany, on Yom Kippur 2019. Other items include footage of the beheading of Scandinavian tourists in Morocco in late 2018. Our efforts have largely succeeded, with 70% of the material notified by the eSafety Commissioner removed. 

Similarly, the eSafety Commissioner applies stringent thresholds to the question of when to issue a blocking direction to ISPs. In September 2019, we directed major Australian ISPs to block eight sites still hosting the Christchurch video and manifesto. This direction sought to preserve for a limited time the blocks imposed voluntarily by ISPs in the immediate wake of the Christchurch attacks. Before issuing the direction, the eSafety Commissioner consulted with both the affected websites and the ISPs subject to the direction, leading several of the sites to remove content at our request. Only those sites that declined to assist were included in the blocking list. 

Looking ahead

Today, the Government is announcing the second blocking arrangement recommended by the taskforce. This joint protocol between the eSafety Commissioner and major Australian ISPs provides for a time-limited and narrow blocking direction against sites distributing terrorist and violent extremist material in an ‘online crisis event’. Two conditions define an online crisis event. First, the material must be distributed in a manner likely to cause significant harm to the Australian community. Second, the situation must require a rapid, coordinated and decisive response from government and industry. In such a case, the eSafety Commissioner may direct ISPs to block websites responsible for sharing that material, so long as it is reasonable, necessary and proportionate to do so. 

Both the AVM notice and ISP blocking powers complement and support the eSafety Commissioner’s existing online content regulation powers.

These are set out in the Online Content Scheme established via Schedules 5 and 7 to the Broadcasting Services Act 1992 (Cth). The Scheme provides the eSafety Commissioner with powers to enforce takedown notices against prohibited content hosted in Australia. Since 2015, we have used these powers, in concert with strong law enforcement partnerships, to do our part to keep Australia from becoming a safe harbour for illegal and harmful material. 

Having recently passed the sombre first anniversary of the Christchurch atrocity, we reflect on the ways the world has changed in the past twelve months. Governments around the world, joined by industry, have issued a clarion appeal to civilise the internet while also striving to preserve its free and open character. The globally supported Christchurch Call, along with the Australian Prime Minister’s taskforce, has set out a roadmap to harden online technologies against misuse as tools of violent and extremist propaganda.  

We also recognise that there will be times when the threat needs to be confronted head on. Effective and targeted regulatory interventions, backed by strong sanctions, are critical to combatting the worst online harms perpetrated by those whose only language is violence. We must acknowledge that the Christchurch attacks shifted the world on its axis, and that it is likely only a matter of time before we see the internet once again used as a tool of extreme hate. When it is, we will be prepared. 

Now, however, we send our most heartfelt condolences to those whose lives were shattered on 15 March 2019.