Technology companies and child protection organizations will be given the power to test whether artificial intelligence systems can produce child exploitation images under new UK laws.
The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will permit approved AI developers and child protection groups to examine AI models – the underlying technology for conversational AI and visual AI tools – and verify they have adequate protective measures to stop them from creating images of child exploitation.
"Fundamentally about preventing abuse before it happens," declared Kanishka Narayan, adding: "Experts, under strict conditions, can now identify the danger in AI systems promptly."
The changes were needed because it is against the law to create and possess CSAM, meaning that AI developers and other parties could not generate such images as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM appeared online before they could address it.
The legislation aims to avert that problem by helping to stop such images from being created at the source.
The government is introducing the amendments as revisions to its criminal justice legislation, which also bans possessing, creating or sharing AI systems designed to generate child sexual abuse material.
Recently, the official toured the London headquarters of a children's helpline and listened to a simulated call to counsellors reporting AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children experiencing blackmail online, it is a cause of intense frustration in me and rightful anger amongst families," he said.
A leading internet monitoring foundation reported that instances of AI-generated exploitation content – each instance can be a webpage containing multiple files – had more than doubled so far this year.
Instances of the most severe material, classed as the gravest category of abuse, increased from 2,621 items to 3,086.
The law change could "represent a crucial step to guarantee AI products are secure before they are released," commented the head of the internet monitoring organization.
"AI tools have made it so survivors can be victimised repeatedly with just a simple actions, providing criminals the ability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she continued. "Content which further commodifies victims' trauma, and renders young people, particularly female children, less safe on and off line."
The children's helpline also published details of counselling sessions in which AI was mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.