Australia's online safety watchdog has put six technology giants on notice over suspected failures to take down violent extremist content.
Google; Meta, the owner of Facebook and Instagram; WhatsApp; Telegram; Reddit; and X, the social media platform formerly known as Twitter, are in the sights of the eSafety Commissioner today.
The companies have been issued legal notices giving them 49 days to explain how they are removing violent content from their platforms, or they risk fines of up to $11 million.
eSafety Commissioner Julie Inman Grant says video of the horrific 2019 terror attack in the New Zealand city of Christchurch that left 51 people dead continues to circulate online.
"We remain concerned about how extremists weaponise technology like live-streaming, algorithms and recommender systems and other features to promote or share this hugely harmful material," Inman Grant said.
"We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm."
The regulator is also concerned about evidence that messaging and social media services are being exploited by extremists.
It wants to examine what technology giants are doing, or failing to do, to protect their users.
The federal government is concerned the risk of online terrorist recruitment and radicalisation remains high in Australia and worldwide.
Well-funded and highly resourced technology companies should be monitoring whether their products could be exploited by terrorists and other criminals.
The encrypted messaging app Telegram was the leading mainstream platform for violent extremist material, followed by YouTube, X, Facebook and Instagram, a recent OECD report found.
TikTok remains the only social media giant not signed up to a worldwide anti-extremism agreement, says Inman Grant.