Children’s Commissioner calls for immediate ban on AI sex-image apps

Software that uses Artificial Intelligence (AI) to generate sexualised images of children should be immediately banned, the Children’s Commissioner has said.

Dame Rachel de Souza warned that widely available ‘nudification’ and ‘deepfake’ apps are being exploited to produce naked and sexually explicit images of children.

Last year, the Internet Watch Foundation (IWF) confirmed 291,273 reports of child sexual abuse imagery online.

Fear of exploitation

These bespoke apps, Dame Rachel explained, are causing children to fear that “anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image”.

She added: “The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist.

“They have no place in our society. Tools using deepfake technology to create naked images of children should not be legal and I’m calling on the government to take decisive action to ban them.”

And in an interview with The Daily Telegraph this week, the Commissioner urged the Government: “Just deal with it. Creating the images is illegal so make the nudifying apps illegal.”

Image Intercept

In 2024, the IWF recorded 245 URLs containing AI-generated images of child sexual abuse, a 380 per cent increase on the previous year.

IWF interim Chief Executive Derek Ray-Hill said: “Young people are facing rising threats online where they risk sexual exploitation, and where images and videos of that exploitation can spread like wildfire.

“New threats like AI and sexually coerced extortion are only making things more dangerous.”

With the help of Home Office funding, the IWF has developed a tool, Image Intercept, able to identify and block content which matches one of the more than 2.8 million illegal images in its database.

Online Safety

Social media platforms will soon be required to block children from accessing “harmful content” such as pornography and the promotion of self-harm, or else face hefty fines.

Under Ofcom’s new Protection of Children Codes, user-to-user services must implement “highly effective age assurance” measures to identify under-18s from 25 July. Such checks could involve facial age estimation or ID.

Ofcom, appointed by the Government to enforce the Online Safety Act, has the power to fine companies in breach of their duties up to £18 million or 10 per cent of their qualifying worldwide revenue, “whichever is greater”. In “extreme cases”, a court could block a website or app in the UK.

Earlier this year, Fenix International Limited, which runs OnlyFans, was fined just over £1 million by Ofcom for failing to keep children safe from pornography.

Also see:

Teaching union: ‘Smartphones give kids unfettered access to porn’

UK Govt urged to ‘stem alarming tide’ of deepfake pornography

Labour think tank backs ban on AI ‘nudification’ software