
Canberra, Australia — February 8, 2026
Global pressure is mounting on major technology platforms as online child sexual abuse content continues to rise sharply, despite stricter regulations, AI moderation tools, and new social media restrictions for minors.
The renewed focus follows controversy over sexually explicit images created with generative AI systems, and comes as governments intensify efforts to protect children online. In Australia, authorities have already banned social media access for children under 16, yet new data suggests the crisis remains far from under control.
📊 Abuse Reports Surge Despite Safeguards
According to the Australian Centre to Counter Child Exploitation, nearly 83,000 reports of online child sexual abuse were recorded during 2024–25, marking a 41% increase compared to the previous year.
The majority of these reports involved mainstream digital platforms, raising concerns that existing safeguards implemented by large technology companies are insufficient or unevenly applied.
🛡️ Big Tech Ordered to File Transparency Reports
In response to the growing threat, Australia’s eSafety Commissioner Julie Inman Grant has directed major tech firms — including Google, Apple, Microsoft, and Meta — to submit mandatory transparency reports every six months.
While the latest reports show some progress, regulators say they also reveal serious gaps in child safety enforcement, particularly on live video and encrypted platforms.
⚠️ Faster Takedowns, But Gaps Remain
The report highlights improvements in detecting:
- Child sexual abuse material (CSAM)
- AI-generated exploitative content
- Online grooming and sexual extortion
For example:
- Snap reduced its average response time for removing abusive content from 90 minutes to 11 minutes
- Microsoft expanded detection capabilities within Outlook services
However, regulators noted that Meta and Google do not actively monitor live-streaming abuse on services such as Messenger and Google Meet, despite having detection tools on other platforms.
Similarly, Apple and Discord were flagged for lacking proactive detection systems, with Apple still relying heavily on user reports rather than automated safety tools.
🎥 Live Video and Encrypted Platforms a Major Concern
The report identifies live video and encrypted communication platforms as the most dangerous blind spots.
According to the findings, platforms including the following are not using available software to detect child-related sexual extortion:
- Apple Messages
- Discord
- Google Chat and Meet
- Microsoft Teams
- Snap
Regulators warn that real-time abuse remains difficult to identify and stop without stronger technical safeguards.
📈 New Safety Dashboard Launched
Alongside the report, Australia’s eSafety office launched a public tracking dashboard, designed to monitor:
- Safety technologies used by platforms
- Volume of content removed after user reports
- Number of trust and safety staff employed
The dashboard aims to increase public accountability and transparency across the tech industry.
⚖️ Push for “Digital Duty of Care” Laws
Commissioner Inman Grant emphasized that reporting alone is not enough. She called for stronger legislation requiring companies to prove their platforms are safe before launch.
Under proposed “digital duty of care” laws, platforms could be legally obligated to:
- Identify risks in advance
- Use language-analysis tools
- Deploy real-time warning messages
- Implement deterrence systems to prevent harmful behavior
🚨 Prevention Must Match Detection
The report stresses that prevention and awareness are as critical as detection.
Warning prompts, behavioral deterrence messages, and real-time alerts have shown success in discouraging harmful actions and guiding users toward help resources — yet many platforms have been slow to adopt these tools at scale.
💬 Safety Over Profits
Australia’s eSafety Commissioner issued a clear message: child safety online must be a non-negotiable priority.
“Technology already exists,” the report concludes, “but too many companies hesitate to deploy it fully due to concerns over user growth, encryption, or revenue.”
As global scrutiny intensifies, regulators say the challenge is no longer about finding solutions — but about forcing faster action.