Tech firms and child protection agencies will be granted permission to evaluate whether artificial intelligence systems can generate child abuse images under new UK legislation.
The announcement coincided with revelations from a protection watchdog showing that cases of AI-generated CSAM have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Under the amendments, the authorities will allow approved AI companies and child protection organizations to inspect AI models – the underlying systems for conversational AI and image generators – and verify they have adequate safeguards to stop them from producing depictions of child exploitation.
"Ultimately about stopping exploitation before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI models early."
The changes have been introduced because it is against the law to produce and possess CSAM, meaning that AI developers and other parties cannot generate such content as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.
This law is designed to avert that issue by making it possible to halt the production of such material at its source.
The changes are being introduced by the authorities as amendments to the crime and policing bill, which also implements a prohibition on possessing, producing or sharing AI systems developed to create child sexual abuse material.
Recently, the minister visited the London base of a children's helpline and heard a mock-up call to advisors involving a report of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion using a sexualised AI-generated image of themselves.
"When I hear about young people experiencing blackmail online, it is a cause of intense anger in me and rightful concern amongst families," he said.
A leading internet monitoring organization reported that cases of AI-generated abuse material – each case being a webpage that may contain multiple files – had more than doubled so far this year.
Instances of the most severe content – the gravest category of abuse – increased from 2,621 images or videos to 3,086.
The law change could "constitute a vital step to guarantee AI products are safe before they are launched," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have enabled so victims can be targeted all over again with just a simple actions, providing criminals the ability to make possibly limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further exploits victims' trauma, and renders young people, particularly female children, more vulnerable both online and offline."
The children's helpline also published details of counselling sessions in which AI was mentioned, covering a range of AI-related harms.
Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and related terms were discussed, significantly more than in the equivalent timeframe last year.
Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.