UK communications regulator Ofcom has launched an investigation into X over the proliferation of sexually explicit deepfakes generated through Grok, the platform's AI tool. The probe centers on how the company's artificial intelligence system is being misused to create and distribute non-consensual intimate imagery. The action highlights growing concern across major platforms about AI-generated sexual content and the need for stronger safeguards in content moderation policies.
LightningWallet · 15h ago
Grok, this crappy tool, really needs regulating. They only remember that oversight exists after something goes wrong.
PrivacyMaximalist · 15h ago
Grok is really ridiculous. Using AI face-swapping for this is like shooting yourself in the foot... and they only realized it after Ofcom stepped in?
RadioShackKnight · 15h ago
Grok is really a Pandora's box; why is it so difficult to regulate?
BlockchainTherapist · 15h ago
Does Grok really have no restrictions at all, so it can generate illegal content at will? Regulation came far too late.
GateUser-2fce706c · 16h ago
I've said it before: AI regulation will inevitably become the high ground of the future. Ofcom's recent moves simply reflect the trend of the times. Those still questioning it are like the people who once questioned whether the internet needed regulation. It's a narrow perspective.