The eSafety commissioner says there is new violent extremist content coming online that tech giants such as Google and Meta may not be identifying quickly. Photograph: Michele Ursi/Alamy

Australian eSafety commissioner puts tech companies on notice over reports terror-related content still being shared


Julie Inman Grant has asked companies including Google, Meta and Telegram to explain how they are taking action against violent and extremist material

Australia’s online safety regulator has issued notices to Telegram, Google, Meta, Reddit and X asking how they are taking action against terror material on their platforms.

It is five years since an Australian murdered 51 people at two mosques in Christchurch, New Zealand, and broadcast the massacre on Facebook Live. Australia’s eSafety commissioner, Julie Inman Grant, said she still receives reports that video and other perpetrator-produced material from terror attacks is being shared on mainstream platforms, although there is now slightly less of it on platforms such as X and Facebook.

She said there was new violent extremist content, including beheadings, torture, kidnappings and rapes, coming online that the platforms may not be identifying as quickly.

Under the legal notices issued this week, Inman Grant used her powers under the Online Safety Act to ask the companies a set of questions about the systems and processes they use to identify such content and prevent people from being exposed to it, noting the questions would differ for each company.

“It varies tremendously within each of these companies,” she said. “YouTube is so widely viewed by so many, including a lot of young people, from the radicalisation perspective. Telegram has different concerns altogether, because it is really about the prevalence of terrorist and violent extremism, the organisation and the sharing that goes on there.”

A 2022 OECD report found Telegram hosted the most terrorist or violent extremist content, followed by Google’s YouTube, X (then Twitter) and Meta’s Facebook. The companies issued with the notices will have 49 days to respond.

The regulator is now involved in an ongoing lawsuit with the Elon Musk-owned X after the company failed to pay the fine attached to an infringement notice, which related to a similar notice issued last year about how the company was responding to child abuse material on its platform.

X has appealed against the commissioner’s decision, and the eSafety commissioner is also suing the company over failing to pay the $610,000 fine. Inman Grant said her office had been in communication with X about the planned terrorism-related notices before they were issued.

Inman Grant also said Telegram had previously responded to takedown notices issued by her office. She said not much was known about the safety systems the messaging app may have in place.


The regulator also said the notices would seek information on what the companies could do to prevent generative AI being used by terrorists and violent extremists.

“These are the questions that we’re trying to get to: what are the guardrails you are putting in place with generative AI? And really trying to ascertain how robust and effective they might be,” she said.

There would also be questions focused on X’s new “anti-woke” generative AI, Grok.

“We’re going to ask X questions about Grok, which has been defined in their own marketing materials as being spicy and rebellious, and I am not sure what the technical meaning of that is,” she said.
