The British internet safety watchdog, the Internet Watch Foundation (IWF), has reported the rapid spread of child sexual abuse material (CSAM) generated with the help of artificial intelligence.
According to the researchers’ report, 20,254 AI-generated CSAM images were found on a single darknet forum in one month. More than half of these violated Britain’s child protection laws and other statutes.
Analysts warned that the flow of such material could overwhelm the internet.
“Malicious actors can legally download everything needed to create these images and then generate as many as they want offline, with no possibility of tracking. There are various tools for enhancing and editing such materials until they look exactly as the offender desires,” said the IWF.
Experts said most AI-generated CSAM images are “sufficiently realistic”; the most convincing would be hard to distinguish from real imagery, even for an experienced analyst.
The IWF noted an increased likelihood of “re-victimisation” of known victims of sexual abuse. Researchers also found many CSAM images featuring child celebrities.
The report states that new technologies have given criminals an additional way to make money.
“The creation and dissemination of guides to generating AI CSAM are not currently illegal, but may become so. The legal status of the models used to generate the images is a more complex issue,” the document says.
In September the organisation warned that paedophile groups were discussing and exchanging tips on creating illegal images using open-source AI tools that can be run locally on personal computers.
The IWF called for international cooperation to combat CSAM. The initiative envisages changes to legislation, improvements in training for law enforcement personnel, and the establishment of regulatory oversight of AI models.
Experts also recommended that developers prohibit the use of AI to create material depicting child abuse, amend the relevant models accordingly, and prioritise the removal of such content.
The problem of illicit material generated by neural networks has long been discussed within the community. Earlier, Microsoft President Brad Smith proposed a KYC system, modelled on those used by financial institutions, to identify criminals who use AI to spread misinformation and commit other unlawful acts.
In July, the state of Louisiana passed a law tightening penalties for the sale and possession of AI-generated child pornography. Offenders face prison terms of 5 to 20 years and/or fines of up to $10,000.
In August the U.S. Department of Justice updated its Citizen’s Guide to U.S. Federal Law on Child Pornography, clarifying that images of child pornography are not protected by the First Amendment and are illegal.
In May, Geoffrey Hinton, one of the pioneers of the field, warned about potential threats associated with AI. Earlier, former Google chief Eric Schmidt said the technology poses an “existential risk” that could lead to many people being harmed or killed.
