UK Technology Firms and Child Protection Officials to Test AI's Capability to Generate Exploitation Images

Tech firms and child protection agencies will be granted authority to assess whether AI systems can generate child exploitation images under recently introduced UK legislation.

Significant Increase in AI-Generated Harmful Material

The announcement coincided with revelations from a protection monitoring body showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will permit designated AI companies and child safety organizations to inspect AI models – the foundational systems for conversational AI and visual AI tools – and verify they have adequate safeguards to prevent them from creating depictions of child sexual abuse.

"Ultimately about preventing exploitation before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the risk in AI models promptly."

Addressing Legal Challenges

The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM had been published online before they could act against it.

This legislation is designed to avert that problem by enabling experts to halt the creation of such images at their source.

Legal Framework

The government is introducing the changes as revisions to criminal justice legislation, which also implements a ban on possessing, producing or distributing AI models developed to generate exploitative content.

Real-World Impact

This week, the minister visited Childline's London base and listened to a simulated call to counsellors involving an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I learn about children experiencing blackmail online, it is a source of intense anger in me and justified anger amongst parents," he stated.

Concerning Statistics

A leading internet monitoring foundation reported that instances of AI-generated exploitation material – such as webpages that may include numerous images – had more than doubled so far this year.

Cases of category A material – the gravest form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were predominantly targeted, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "constitute a vital step to guarantee AI tools are safe before they are launched," commented the head of the internet monitoring organization.

"Artificial intelligence systems have enabled so victims can be victimised all over again with just a simple actions, providing offenders the ability to make possibly endless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further exploits survivors' suffering, and makes young people, especially girls, more vulnerable both online and offline."

Support Session Information

The children's helpline also released details of support sessions in which AI was mentioned. AI-related harms discussed in the sessions include:

  • Using AI to rate weight, physique and appearance
  • Chatbots discouraging young people from consulting safe adults about abuse
  • Being bullied online with AI-generated material
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and associated terms were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Jacob Daniel