British Tech Companies and Child Safety Agencies to Examine AI's Ability to Generate Abuse Images

Tech firms and child safety agencies will be granted authority to assess whether AI systems can generate child exploitation images under new UK laws.

Substantial Rise in AI-Generated Harmful Material

The announcement came as a safety watchdog revealed that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the amendments, designated AI developers and child protection groups will be allowed to inspect AI systems – the foundational technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from creating images of child sexual abuse.

"This is ultimately about stopping exploitation before it occurs," stated Kanishka Narayan, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems early."

Addressing Regulatory Challenges

The changes were needed because it is illegal to produce and possess CSAM, meaning that AI developers and others could not generate such images as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before addressing it.

This law aims to avert that problem by enabling authorities to halt the production of such material at its source.

Legislative Structure

The changes are being added by the government as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or distributing AI models developed to generate exploitative content.

Practical Consequences

Recently, the official toured the London base of Childline and listened to a simulated call to advisors involving a report of AI-based abuse. The call portrayed an adolescent seeking help after facing extortion using an explicit AI-generated image of themselves.

"When I hear about young people experiencing blackmail online, it is a cause of extreme anger in me and rightful anger amongst parents," he stated.

Alarming Data

A leading online safety foundation reported that cases of AI-generated abuse content – such as online pages that may contain multiple images – had more than doubled so far this year.

Instances of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, making up 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a vital step to ensure AI products are secure before they are launched," commented the head of the online safety foundation.

"AI tools have made it so victims can be targeted all over again with just a few clicks, providing offenders the capability to create potentially limitless amounts of advanced, photorealistic exploitative content," she continued. "Content which further exploits victims' suffering, and renders children, particularly female children, more vulnerable both online and offline."

Counseling Session Information

The children's helpline also released details of support interactions where AI has been mentioned. AI-related risks mentioned in the sessions include:

  • Using AI to evaluate weight, body and appearance
  • Chatbots discouraging children from talking to trusted adults about harm
  • Being bullied online with AI-generated content
  • Online extortion using AI-faked pictures

Between April and September this year, the helpline delivered 367 support interactions in which AI, conversational AI and related topics were discussed, significantly more than in the equivalent timeframe last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

Kaitlin Williams

A seasoned gaming journalist with a passion for slot machines and player advocacy.