The UK's communications watchdog is now formally investigating X following growing concerns over sexually explicit images created through the platform's AI tool Grok. This move marks another chapter in the mounting international scrutiny surrounding AI-generated content moderation.
The investigation reflects broader tensions between social platforms and regulators worldwide. As AI image generation grows more sophisticated, authorities are increasingly scrutinizing how these tools are deployed and whether platforms maintain adequate safeguards.
Grok, X's in-house AI system, has drawn particular attention for its ability to generate explicit imagery with minimal restrictions. The UK's decision to open a formal probe signals that regulators are no longer willing to overlook such issues, especially where user safety and content standards are at stake.
This isn't happening in isolation. Global regulators are tightening their grip on tech platforms, demanding clearer accountability around AI outputs and content moderation policies. For X and similar platforms offering AI features, the pressure is mounting to demonstrate responsible deployment—or face potential restrictions.