Outrage Erupts as Nonconsensual Deepfake Images of Taylor Swift Go Viral on X
On Wednesday, sexually explicit deepfake images of Taylor Swift went viral on X (formerly Twitter), sparking widespread outrage. The manipulated images, which falsely depict Swift in nude and sexual scenarios, amassed more than 27 million views and over 260,000 likes in just 19 hours. Despite the rapid spread, nearly a full day passed before X suspended the account responsible.
The deepfakes have continued to proliferate on the platform as users repost the viral content. Such images are typically created with AI tools, either by generating entirely new, fake visuals or by digitally "undressing" real photos. The origin of these particular images remains unclear, but a watermark suggests they may have come from a website notorious for publishing fake nude photos of celebrities, including a section dedicated specifically to "AI deepfakes."
Reality Defender, a company that specializes in AI-detection software, scanned the images and concluded that they were likely generated using AI. Their widespread circulation underscores the growing problem of AI-generated content and misinformation online. Despite the increasing severity of the issue, platforms like X have been criticized for being slow to deploy tools that detect and remove harmful AI-generated content.
Fans Rally to Defend Taylor Swift Amid Deepfake Scandal on X
The most widely shared deepfakes depicted Taylor Swift nude in a football stadium, a disturbing escalation of the misogynistic attacks she has faced for supporting her partner, NFL player Travis Kelce. Swift had addressed that earlier backlash in a recent interview, noting, "I have no awareness of if I'm being shown too much and pissing off a few dads, Brads, and Chads."
Despite X's policies banning harmful manipulated media, the platform has been criticized for slow or ineffective responses to sexually explicit deepfakes. Earlier this year, a 17-year-old Marvel star publicly described finding explicit deepfakes of herself on X and struggling to get them removed; even after media inquiries, only some of the offending material was taken down.
According to some of Swift's fans, the removal of the deepfake images wasn't driven by X or by Swift herself but by a coordinated mass-reporting campaign among her supporters. After "Taylor Swift AI" began trending on X, fans flooded the hashtag with positive posts about her, and a counter-trend, "Protect Taylor Swift," soon gained traction as well.
One fan involved in the reporting campaign, who asked to remain anonymous, shared screenshots showing that her reports led to the suspension of two accounts for violating X's "abusive behavior" policy. Citing growing concern over the impact of AI deepfake technology on women and girls, she said, "They don't take our suffering seriously, so now it's in our hands to mass report these people and get them suspended."
The incident has reignited debate over the need for stronger regulation of nonconsensual sexually explicit deepfakes; no federal U.S. law currently governs their creation and distribution. Rep. Joe Morelle (D-N.Y.), who introduced a bill in May 2023 to criminalize such deepfakes, highlighted the issue on X, calling it "yet another example of the destruction deepfakes cause."
Carrie Goldberg, a lawyer who has long represented victims of nonconsensual explicit material, criticized tech companies for failing to enforce their own deepfake policies. "Most human beings don't have millions of fans who will go to bat for them if they've been victimized," Goldberg said. She argued that technology itself holds the key to solving the problem, urging platforms to deploy AI tools that can identify and remove harmful content quickly, leaving "no excuse" for such images to spread unchecked.