A teen deepfake victim is now advocating against AI abuse. The teenager, who recently became a victim of AI abuse when a nonconsensual sexually explicit deepfake image of her was created, is joining Rep. Joe Morelle in pushing for a law against fake images created with AI.
The bill, called the Preventing Deepfakes of Intimate Images Act, would criminalize the spread of nonconsensual, explicit fake images of other people at the federal level.
Preventing Deepfakes of Intimate Images Act
The spotlight turned to this urgent issue when a teenage victim, supported by Rep. Joe Morelle, D-N.Y., called for decisive legislative action against nonconsensual sexually explicit deepfakes. The bill targeting this form of AI abuse, the "Preventing Deepfakes of Intimate Images Act," was introduced in May and is currently under review by the House Judiciary Committee.
This bill would empower victims of AI-generated fake images. Under the proposed legislation, victims would be able to sue and hold the creators and distributors of such material civilly and criminally accountable.
Additionally, the bill would help victims protect their personal dignity by allowing them to sue while maintaining their anonymity.
The U.S. currently lacks a unified federal law addressing this issue, leaving victims to navigate a confusing array of state laws. Unfortunately, nonconsensual, explicit fake images generated by AI are not a new phenomenon.
This legislative gap has resulted in numerous victims coming forward, including Francesca Mani, a New Jersey high schooler.
Mani's harrowing experience, where deepfakes of her and over 30 other girls were created and circulated by male classmates, underscores the disturbing reality of this technology's abuse.
Battling AI Deepfake Photos
Deepfakes, which involve using AI to superimpose individuals' faces onto pornographic materials, have seen a worrying surge alongside the increased accessibility of deepfake-generating tools and dedicated websites.
Francesca Mani's powerful testimony at the news conference highlighted the emotional toll of being a deepfake victim. Her call for legal protection reflects a growing consensus on the need for robust laws against such violations.
Rep. Morelle, emphasizing the widespread nature of this issue, pointed out that deepfakes affect women across the country, not just celebrities.
The involvement of Rep. Tom Kean, R-N.J., as the first Republican co-sponsor of the bill, along with his introduction of the AI Labeling Act of 2023, marks bipartisan recognition of the seriousness of the issue.
Despite the urgent need for legislation, the "Preventing Deepfakes of Intimate Images Act" has yet to see significant progress in Congress. This lack of movement raises concerns about the potential misuse of deepfake technology, especially with the approaching 2024 election cycle.
Legal expert Mary Anne Franks, who contributed insights for the bill, warns that deepfakes have already targeted female politicians, indicating a broader threat to democratic processes.
In her statement at the news conference, Franks warned parents, women, and children that the AI threat is just around the corner, underscoring the urgency of passing the bill. The legislation shines a light on an evolving sector of technology in which women and children need to be protected from abuse.
The proposed federal legislation to combat nonconsensual sexually explicit deepfakes represents a critical juncture in our ongoing struggle to align technological advancements with ethical and legal standards.