The number of AI-generated sexual abuse videos rocketed by nearly 26,000 per cent in 2025, the Internet Watch Foundation (IWF) has reported.
The group’s Annual Data & Insights Report 2025 revealed that 3,443 AI-generated videos were found last year, compared with just 13 in 2024. Nearly two-thirds (65 per cent) were classed as Category A, the most severe form of abuse, a higher proportion than for non-AI videos (46 per cent).
The report warned: “This highlights the concern that AI tools are enabling the creation of significantly more severe content, often using real images of victims or survivors to generate this material, which further increases the revictimisation of children.”
Social media
The IWF pointed to current trends in child sexual abuse, including new forms of AI-generated material and the threat to girls and older teenagers.
It said: “We encountered AI companion sites which offered explicit conversations with simulated child characters and the ability to generate criminal images alongside the chats.
“Alarmingly, members of the public found these services by clicking on ads on mainstream social media platforms, making it apparent how easy it is to access this content.”
In response, the charity called on the Government to ban ‘nudification’ tools and to close “legal loopholes to ensure AI-generated material is treated the same as other forms of child sexual abuse material in jurisdictions beyond the UK”.
‘Nudification’ tools
The Government has pledged to ban dedicated ‘nudification’ tools under the Crime and Policing Bill, which is awaiting Royal Assent.
Technology Secretary Liz Kendall stated: “We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes.”
While it is already a criminal offence to create or share explicit deepfake images of children, the tools used to make them are not yet illegal.
