
Grok Imagine, a new AI tool from Elon Musk's company xAI, produced a topless deepfake dance video of Taylor Swift through its "spicy mode" preset despite no request for explicit content, prompting immediate ethical and legal concerns.
At a Glance
- Grok Imagine’s “spicy” preset produced an explicit six-second video of Taylor Swift without any prompt requesting nudity.
- A journalist generated the clip using only a benign prompt, exposing the tool’s weak moderation controls.
- In January 2024, AI-generated explicit images of Swift caused widespread backlash and calls for legislative action.
- U.S. lawmakers are advancing the Take It Down Act to mandate rapid removal of non-consensual sexual deepfakes.
- Experts warn that current laws may not apply if AI-generated explicit content remains private to the creator.
A Preset That Goes Too Far
Grok Imagine allows users to create short videos from text prompts using presets labeled Custom, Normal, Fun, and Spicy. In early August 2025, testing by a technology journalist revealed that selecting “spicy” with an innocuous prompt—“Taylor Swift celebrating Coachella with the boys”—produced a topless deepfake video. No explicit request was made, indicating that the preset itself can insert sexual content without user direction.
The discovery has raised questions about xAI’s internal safeguards and whether such modes should exist without strong age verification or explicit-content filters. Critics argue that because “spicy mode” can inject explicit material on its own, responsibility for preventing harm shifts from the user’s intent to the platform’s default settings.
A Troubling Pattern
This incident echoes a prior controversy from January 2024, when AI-generated explicit images of Taylor Swift circulated widely on social media. That earlier wave of deepfakes spurred condemnation from advocacy groups and renewed debate about regulating AI-generated intimate content. Grok Imagine’s more advanced capabilities, combined with its high adoption rate—reportedly generating 34 million images in one day—have heightened concerns about both scale and impact.
Some observers note that even if platforms remove public uploads, AI tools that create private explicit content could still be used for harassment, blackmail, or reputational harm. This dynamic complicates enforcement and undermines the effectiveness of content-removal laws that focus solely on distribution.
Legal Response Moving at Pace
The proposed Take It Down Act would require online platforms to remove non-consensual sexual deepfakes quickly after a complaint and allow victims to sue for damages. However, legal analysts point out that if explicit AI content is only visible to the creator, enforcement may be impossible under current drafting.
Policymakers face the challenge of adapting laws to cover both public and private use cases without overreaching into legitimate creative expression. Meanwhile, platform operators like xAI may face reputational damage if perceived as unresponsive to legitimate harm caused by their tools.
What’s Next for AI Oversight?
The Grok Imagine controversy highlights a gap between AI’s capabilities and existing governance mechanisms. Regulators, advocacy groups, and industry leaders now confront several key questions: Should AI tools be barred entirely from generating sexual content depicting real people? How can systems detect and prevent harmful output without stifling innovation?
The answers could set precedents for the broader AI industry, shaping both technology design and the legal landscape for years to come.