British Tech Firms and Child Safety Officials to Examine AI's Capability to Create Abuse Content

Technology companies and child safety agencies will be granted permission to assess whether AI systems can generate child abuse material under recently introduced British laws.

Significant Increase in AI-Generated Illegal Material

The announcement coincided with findings from a safety watchdog showing that cases of AI-generated CSAM have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow designated AI companies and child safety groups to inspect AI models – the underlying systems behind chatbots and image generators – and check that they have sufficient safeguards to stop them creating images of child exploitation.

The measures are "ultimately about stopping exploitation before it happens," stated Kanishka Narayan, adding: "Experts, under strict conditions, can now identify the danger in AI models promptly."

Tackling Legal Obstacles

The amendments have been introduced because producing and possessing CSAM is against the law, meaning that AI developers and others cannot generate such images as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.

This law aims to prevent that problem by enabling experts to stop the creation of such material at source.

Legal Framework

The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on owning, producing or sharing AI systems developed to generate exploitative content.

Real-World Impact

This week, the official toured the London base of a children's helpline and heard a simulated call to counsellors featuring an account of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed using an explicit deepfake of himself created with AI.

"When I learn about children experiencing extortion online, it is a source of intense frustration for me and of rightful anger amongst families," he stated.

Alarming Data

A leading internet monitoring foundation stated that cases of AI-generated exploitation content – such as online pages that may include numerous images – had significantly increased so far this year.

Instances of the most severe category of material – depicting the most serious forms of exploitation – increased from 2,621 files to 3,086.

  • Female children were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a vital step to ensure AI tools are safe before they are released," stated the head of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Material which further commodifies survivors' suffering, and makes young people, particularly female children, less safe both online and offline."

Support Interaction Information

Childline also published details of counselling sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to rate body size and looks
  • Chatbots discouraging young people from speaking to trusted adults about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline delivered 367 counselling interactions where AI, chatbots and associated topics were discussed, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.

James Horton