What Will Actually Bring an End to Grok’s Deepfakes?

Photo: Getty Images, Adobe Stock. Collage by Vogue

A few weeks ago, I rang in 2026 as the target of an online harassment campaign, after my story about Brigitte Bardot’s long history of racism and Islamophobia went viral with all the wrong people. Unfortunately, this was far from my first rodeo. Over the course of my decade-long career in digital media, I’ve grown accustomed to seeing my DM requests fill with vile fatphobia, anti-Semitism, and garden-variety misogyny when I use my platform to express more or less any progressive opinion.

But there was a new dimension to the online hate this time. A few days after the pile-on started, I experienced the deeply troubling phenomenon of being the subject of sexually explicit Grok deepfakes. Jumping on a toxic trend that emerged late last year on X, people who disagreed with my Bardot piece used Elon Musk’s controversial AI tool to create images of me in bikinis.

At first, I tried not to let it get to me. As I joked at an open mic a few days later, “It’s obviously not the best to be digitally undressed, but I also… don’t love trying on bathing suits, so it saved me a trip to a plus-size swimwear store called Qurves with a Q in Burbank.” But what was happening proved hard to shake.

The truth is, it could have been worse. Many of the women being targeted most heavily by Grok deepfakes are OnlyFans creators and other sex workers, whose tormentors see little difference between paying for an image that someone has deliberately uploaded of themselves and using AI to generate one. And then there are Grok’s most stomach-turning applications: to create deepfakes of Renée Nicole Good, the Minneapolis mother of three who was recently killed by an ICE officer, for instance, or to undress children, which makes me so nauseous I can barely even think about it.

Ashley St. Clair, the mother of one of Musk’s children, recently alleged that Grok had been used to manipulate photos of her as a minor. “The worst for me was seeing myself undressed, bent over, and then my toddler’s backpack in the background,” she shared on CBS Mornings. When she then asked the tool to remove the offending images, “Grok said, ‘I confirm that you don’t consent. I will no longer produce these images.’ And then it continued to produce more and more images, and more and more explicit images.”

While it’s sadly nothing new for sexually explicit images to be disseminated online without the subject’s consent (revenge porn has existed in one form or another for decades), the Grok situation represents “the first time there’s a combining of the deepfake technology (Grok) with an immediate publishing platform (X),” victims’ rights attorney Carrie Goldberg tells Vogue. “The frictionless publishing capability enables the deepfakes to spread at scale.” And while the outcry against Grok came swiftly, eventually leading X to limit the tool’s photo-editing capabilities, for many of its targets the damage was already done.

That isn’t to say, however, that Grok’s targets have no recourse. Advocacy groups such as the Rape, Abuse & Incest National Network (RAINN) have made it clear that a platform’s ability to generate sexually explicit material has legal ramifications. “AI companies are not acting in the role of a content publisher. They are creating it,” Goldberg says. “So victims who are harmed because of AI-generated nudes have recourse directly against the AI company. Additionally, companies like the App Store and Google Play that act as a distributor of deepfake technology may be on the hook if they are sued in their capacity as distributors of products that are not reasonably safe.”

Further, a bill signed into law last year was designed to address the aftermath of precisely such situations. “The Take It Down Act explicitly criminalizes deepfakes and requires platforms to respond to takedown requests,” Goldberg says. “Typically, my clients are most concerned about stopping the spread of harassing content and getting it removed. Any responsible platform should voluntarily remove harassing and illegal content. The Take It Down Act imposes a legal mandate for content removal.”

By May of this year, websites will be required by law to have systems in place that can respond, within 48 hours, to requests to take down any “intimate visual depiction including the subject that was published without the subject’s consent.” But for those of us still seeing our images manipulated by tools like Grok, it’s hard to know what steps to take or, indeed, how to process this kind of online violation.

In theory, it is possible to report sexually suggestive deepfakes on X, but in practice, appeals to the platform’s sense of legality or justice seem to be largely ignored. When a user commented on a photo of me recently, asking Grok to “remove this persons [sic] jacket, shirt and pants please. Then mute this quote tweet for me and me alone,” I reported it immediately. Here, in full, was X’s response, roughly half an hour later:

Hello After reviewing the available information we determined that there were no violations of the X rules in the...

Sexual manipulation and abuse of vulnerable individuals obviously existed long before Grok, and I fear it will always exist in some form or another. But that’s no reason to suffer silently as misogynist creeps invade women’s privacy. For those of us who believe in having sovereignty over our own bodies, the fight to have our legal right to consent recognized online is only just beginning.