Report reveals the extent of deep cuts to safety staff and gaps in Twitter/X’s measures to tackle online hate

In a new transparency report released by Australia’s eSafety Commissioner, information provided by X Corp., the owner of social media platform Twitter/X, indicates the company has made deep cuts to safety and public policy personnel.  

X Corp. said Twitter/X’s global trust and safety staff have been reduced by a third, including an 80 per cent reduction in the number of safety engineers, since the company was acquired in October 2022.  

The company also said the number of moderators it directly employs on the platform has been reduced by more than half, while the number of global public policy staff has been reduced by almost 80 per cent.  

While the company has previously given estimates of the reduction in staffing, this is the first time X Corp. has given specific figures on where the reductions were made. Given the global nature of the Twitter/X service, these cuts to key safety and public policy roles have implications for Australian users.  

X Corp. also said it had reinstated over 6,100 previously banned accounts since the October 2022 acquisition, 194 of which were previously suspended by the platform for hateful conduct violations. eSafety understands the figures provided by X Corp. relate to accounts in Australia, rather than globally. This contrasts with media reports that over 62,000 previously suspended accounts were reinstated globally. The global number of account reinstatements is significant as international accounts can and do target Australian users.  

Despite these accounts previously breaching Twitter/X’s rules, X Corp. said it did not place any of them under additional scrutiny following their reinstatement. 

In June, eSafety issued a legal notice to X Corp. under Australia’s Online Safety Act seeking specific information about what Twitter/X was doing to meet the Australian Government’s Basic Online Safety Expectations in relation to online hate and enforce its own hateful conduct policy. The transparency report summarises the company’s response to those questions.  

eSafety Commissioner Julie Inman Grant said, “It’s almost inevitable that any social media platform will become more toxic and less safe for users if you combine significant reductions to safety and local public policy personnel with thousands of account reinstatements of previously banned users.  

“You’re really creating a bit of a perfect storm,” Ms Inman Grant said. “A number of these reinstated users were previously banned for online hate. If you let the worst offenders back on while at the same time significantly reducing trust and safety personnel whose job it is to protect users from harm, there are clear concerns about the implications for the safety of users. 

“We also see from X Corp.’s responses to our questions that the reduction in safety staff coincided with slower response times when users reported online hate to the platform. Response times to hateful tweets have slowed by 20 per cent since the acquisition and response times to hateful direct messages have slowed by 75 per cent, with users not receiving a response for up to 28 hours.   

“We know that online abuse is frequently targeted at victims via services’ direct message features, with clear intent to cause harm. 

“Loss of local staff in Australia also limits the potential for engaging local communities disproportionately impacted by online hate. A recent eSafety study found that First Nations youth are three times more likely to experience hate speech online than their non-Indigenous counterparts.” 

X Corp. stated in response to the notice that it had not formally engaged with any First Nations organisations between the time it ceased having public policy and trust and safety staff in Australia and May 2023 (the date specified in the notice), but that it had previously engaged with a wide range of First Nations organisations and individuals over many years.  

Ms Inman Grant said, “Understanding nuance and the unique cultural context of Australian communities is important to ensure platforms can tackle the online harms that can manifest and damage local communities.” 

In the report, X Corp. also stated that it had no full-time staff specifically dedicated to hateful conduct issues globally, and no specific team for this policy during the period covered by the notice, although it said it had broader teams who worked on these issues. 

When eSafety asked what tools were used to detect volumetric attacks or “pile-ons” in breach of Twitter’s targeted harassment policy, X Corp. stated that no tools were specifically designed to detect this type of abuse on the service. 

“I liken these attacks to someone trying to swat individual bees when they are engulfed by a killer swarm. It can feel quite overwhelming and be very damaging for the target,” Ms Inman Grant said.   

Key findings in the report include:  

  • A 30% reduction in Trust and Safety staff globally following the acquisition in October 2022.  
  • An 80% reduction in engineers focussed on trust and safety issues globally since the company’s acquisition.  
  • A 45% reduction in trust and safety staff in the Asia Pacific region. 
  • Content moderators directly employed by Twitter/X were reduced by 52%. 
  • A 78% reduction in public policy staff globally, 73% in the APAC region and 100% in Australia. 
  • Since the acquisition there had been a 20% slowing in the median time to respond to user reports about Tweets and a 75% slowing in the median time to respond to reports about direct messages. eSafety notes that prompt action on user reports is particularly important given that Twitter relies solely on user reports to identify hateful conduct in direct messages.   
  • As of May 2023, X Corp. reported that no tests had been conducted on Twitter’s recommender systems to reduce the risk of amplification of hateful conduct. However, X Corp. stated no individual accounts are artificially amplified, and that its enforcement policies apply to Twitter Blue accounts in the same way as other accounts. 
  • As of May 2023, automated tools specifically designed to detect volumetric attacks or “pile-ons” in breach of Twitter’s targeted harassment policy were not used on Twitter.  
  • As of May 2023, URLs linking to websites dedicated to harmful content are not blocked on Twitter. 
  • From 25 November 2022 (the date it was announced) to 31 May 2023, 6,103 previously banned accounts were reinstated by Twitter, which eSafety understands relates to accounts in Australia. Of these, 194 had previously been suspended for hateful conduct violations. X Corp. stated that Twitter did not place reinstated accounts under additional scrutiny. 

X Corp. provided a response to the notice after eSafety granted the company two extensions to the original deadline. However, in several instances, the responses X Corp. provided by the due date were incorrect, significantly incomplete or irrelevant.   

After the notice deadline, X Corp. provided further information that sought to rectify earlier omissions. This subsequent provision of information was considered a mitigating factor in the Commissioner’s consideration of the appropriate enforcement action. eSafety has given X Corp. a service provider notification confirming its failure to comply with the notice, in accordance with its powers under the Online Safety Act. 

eSafety issues service provider notifications to communicate a failure by an online service provider to comply with a legal requirement and to deter future non-compliance. 

Separately, in December 2023 eSafety commenced civil penalty proceedings against X Corp. for its alleged failure to comply with an earlier reporting notice given in February 2023 on how it was meeting the Basic Online Safety Expectations in relation to child sexual exploitation and abuse material and activity on Twitter/X.  

This followed the issuing of an infringement notice for $610,500 to X Corp. in September 2023 for its failure to comply with that notice. X Corp. did not pay the infringement notice and sought judicial review of eSafety’s reliance on the transparency notice and of the giving of the service provider notification and the infringement notice. eSafety is requesting that the judicial review be heard in tandem with the civil penalty proceedings to avoid delays to either process. 

The full transparency report can be found here.

For more information or to request an interview, please contact: