Sexually explicit deepfakes are everywhere in high schools.
Matteo Wong of The Atlantic writes about the power of AI:
This power has brought with it a tremendous dark side that many experts are only now beginning to contend with: AI is being used to create nonconsensual, sexually explicit images and videos of children. And not just in a handful of cases—perhaps millions of kids nationwide have been affected in some way by the emergence of this technology, either directly victimized themselves or made aware of other students who have been.
Kids as young as 9 are being exposed to this kind of material:
Today’s report joins several others documenting the alarming prevalence of AI-generated NCII. In August, Thorn, a nonprofit that monitors and combats the spread of child-sexual-abuse material (CSAM), released a report finding that 11 percent of American children ages 9 to 17 know of a peer who has used AI to generate nude images of other kids.
The amount of AI-generated CSAM is also underreported:
Although the number of official reports related to AI-generated CSAM is relatively small—roughly 5,000 tips in 2023 to the National Center for Missing & Exploited Children, compared with tens of millions of reports about other abusive images involving children that same year—those figures were possibly underestimated and have been growing. It’s now likely that “there are thousands of new [CSAM] images being generated a day,” David Thiel, who studies AI-generated CSAM at Stanford, told me. This summer, the U.K.-based Internet Watch Foundation found that in a one-month span in the spring, more than 3,500 examples of AI-generated CSAM were uploaded to a single dark-web forum—an increase from the 2,978 uploaded during the previous September.
Unsurprisingly, most of these victims are female.
Share this with Muslim parents (heck, all parents) who still think public schools are safe and OK, especially for their daughters.