UK Technology Firms and Child Protection Agencies to Examine AI's Capability to Generate Exploitation Content

Technology companies and child safety organizations will be granted authority to assess whether artificial intelligence tools can generate child abuse material under new UK legislation.

Substantial Increase in AI-Generated Illegal Content

The announcement came as a safety monitoring body revealed that cases of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the government will allow designated AI developers and child protection groups to examine AI systems – the underlying technology for chatbots and image generators – and ensure they have adequate safeguards to stop them from producing depictions of child sexual abuse.

"Ultimately, this is about stopping abuse before it happens," declared Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now identify the risk in AI models promptly."

Addressing Regulatory Challenges

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others could not generate such images even as part of an evaluation regime. Previously, authorities could not act until AI-generated CSAM had already been uploaded online.

This legislation is designed to avert that problem by helping to stop the creation of such images at the source.

Legal Structure

The changes are being introduced by the government as revisions to the criminal justice legislation, which is also implementing a ban on possessing, producing or distributing AI systems developed to create child sexual abuse material.

Practical Impact

Recently, the minister visited the London headquarters of Childline and listened to a mock call to counsellors involving an account of AI-based abuse. The interaction depicted a teenager seeking help after facing extortion using a sexualised deepfake of themselves, created with AI.

"When I learn about young people experiencing extortion online, it is a source of extreme frustration for me and of justified anger amongst parents," he said.

Alarming Data

A prominent online safety foundation reported that instances of AI-generated exploitation material – such as online pages that may contain numerous files – had significantly increased so far this year.

Cases of category A material – the gravest form of exploitation – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of prohibited AI depictions in 2025
  • Depictions of infants to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a crucial step to guarantee AI products are secure before they are released," stated the chief executive of the online safety organization.

"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few simple actions, giving offenders the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which compounds victims' trauma, and renders young people, particularly girls, more vulnerable both online and offline."

Support Session Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Using AI to evaluate weight, body and looks
  • AI assistants dissuading children from consulting trusted adults about harm
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-faked pictures

Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and associated topics were mentioned, significantly more than in the equivalent period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.

Christopher Huffman