UK Tech Companies and Child Safety Agencies to Test AI's Ability to Create Abuse Images
Tech firms and child protection agencies will be granted permission to assess whether AI systems can produce child exploitation images under new UK legislation.
Significant Increase in AI-Generated Illegal Content
The announcement coincided with findings from a child protection watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the government will permit designated AI developers and child protection groups to examine AI models – the underlying technology behind chatbots and image-generation tools – to ensure they have sufficient safeguards against creating depictions of child exploitation.
"This is ultimately about preventing abuse before it happens," stated Kanishka Narayan, noting: "Specialists, under strict conditions, can now detect the risk in AI systems early."
Addressing Legal Obstacles
The changes have been introduced because it is illegal to produce or possess CSAM, meaning that AI developers and other parties cannot generate such images even as part of an evaluation process. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation aims to prevent that problem by enabling the creation of such material to be stopped at its source.
Legislative Framework
The amendments are being introduced by the authorities as revisions to the crime and policing bill, which is also implementing a ban on possessing, creating or sharing AI systems developed to generate child sexual abuse material.
Real-World Impact
This week, the official visited the London base of Childline and listened to a mock-up call to counsellors featuring a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake image of himself, created using AI.
"When I learn about young people experiencing blackmail online, it is a source of intense frustration for me and of rightful concern amongst parents," he stated.
Concerning Statistics
A leading online safety foundation reported that instances of AI-generated abuse material – including web pages that may each contain numerous images – had more than doubled so far this year.
Cases of category A content – the most serious form of abuse – increased from 2,621 visual files to 3,086.
- Female children were predominantly victimized, accounting for 94% of illegal AI depictions in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a crucial step to ensure AI tools are secure before they are released," commented the chief executive of the online safety organization.
"AI tools have made it so victims can be victimised repeatedly with just a few simple actions, giving criminals the capability to make a potentially endless supply of sophisticated, lifelike exploitative content," she continued. "Content which further commodifies victims' trauma and renders young people, particularly girls, less safe both online and offline."
Support Session Data
The children's helpline also published details of support sessions where AI was mentioned. AI-related harms raised in those conversations include:
- Using AI to rate body size, physique and looks
- AI assistants discouraging children from consulting trusted guardians about abuse
- Facing harassment online with AI-generated content
- Online extortion using AI-faked images
Between April and September this year, the helpline conducted 367 counselling interactions where AI, chatbots and associated topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.