- June 13th, 2025
So it’s possible that an image’s context, pose, or even its use can affect whether the image is legal. While the Supreme Court has ruled that computer-generated images based on real children are illegal, its Ashcroft v. Free Speech Coalition decision complicates efforts to criminalize fully AI-generated content. Many states have enacted laws against AI-generated child sexual abuse material (CSAM), but these may conflict with the Ashcroft ruling. Because advances in AI make it difficult to distinguish real images from fake ones, new legal approaches may be needed to protect minors effectively.
Justice Department officials say they’re aggressively going after offenders who exploit AI tools, while states are racing to ensure that people generating “deepfakes” and other harmful imagery of kids can be prosecuted under their laws. With the recent significant advances in AI, it can be difficult, if not impossible, for law enforcement officials to distinguish between images of real and fake children. Lawmakers, meanwhile, are passing a flurry of legislation to ensure local prosecutors can bring charges under state law for AI-generated “deepfakes” and other sexually explicit images of kids. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery, according to a review by The National Center for Missing & Exploited Children. Laws like these, which encompass images produced without depictions of real minors, might run counter to the Supreme Court’s Ashcroft v. Free Speech Coalition ruling. An earlier case, New York v. Ferber, effectively allowed the federal government and all 50 states to criminalize traditional child sexual abuse material.
Is it illegal to use children’s photos to fantasize?
The report was produced after a search of 874 Telegram links reported to SaferNet by internet users as containing images of child sexual abuse and exploitation. SaferNet analyzed them and found that 149 were still active and had not been restricted by the platform. In addition, the NGO identified a further 66 links that had never been reported before and that also contained criminal content.
- Below are six clarifications of common misunderstandings that many adults have raised on our Helpline while trying to make sense of confusing situations.
- These sentiments come in the wake of the arrest earlier this month of Darren Wilken, a Midrand man accused of creating and distributing child pornography on a global scale.
- Perhaps the most important part of the Ashcroft decision for emerging issues around AI-generated child sexual abuse material was the part of the statute that the Supreme Court did not strike down.
In the legal field, child pornography is generally referred to as child sexual abuse material, or CSAM, because the term better reflects the abuse that is depicted in the images and videos and the resulting trauma to the children involved. In 1982, the Supreme Court ruled that child pornography is not protected under the First Amendment because safeguarding the physical and psychological well-being of a minor is a compelling government interest that justifies laws prohibiting child sexual abuse material. Viewing, producing and/or distributing photographs and videos of sexual content involving children is a form of child sexual abuse.
This includes sending nude or sexually explicit images and videos to peers, often called sexting. Even if the content is meant to be shared only among other young people, it is illegal for anyone to possess, distribute, or manufacture sexual content involving anyone younger than 18. Even minors found distributing or possessing such images can face, and have faced, legal consequences. AI-generated child sexual abuse images can be used to groom children, law enforcement officials say. And even if they aren’t physically abused, kids can be deeply affected when their image is morphed to appear sexually explicit.
On the other hand, the government is asking digital platforms to take responsibility for the impact of their technology. Jordan DeMay killed himself two years ago at the age of 17, just five and a half hours after he first made contact with a Nigerian man pretending to be a woman. DeMay, who was enticed into sending a naked photo of himself online, was threatened with having the image spread if he didn’t pay money.