Headline
Yann LeCun Says He Trolled OpenAI Over GPT-2 Withholding
Summary
Yann LeCun, Meta’s Chief AI Scientist, recently acknowledged on Twitter that he mocked OpenAI back in 2019, when the lab initially refused to release the full GPT-2 model. OpenAI had argued the model was too dangerous because it could generate convincing fake news, so it released GPT-2 in stages: a smaller version in February 2019, then the full 1.5B-parameter model in November, after partners reported minimal actual misuse. LeCun’s tweet is a reminder that the AI community has been arguing over openness versus caution for years, and those arguments haven’t gone away.
Analysis
OpenAI’s GPT-2 concerns weren’t baseless. Studies at the time found that readers rated its synthetic text as credible about 72% of the time, and automated detection was genuinely difficult. The staged-release approach became something of a template for how labs handle powerful models. LeCun, for his part, has long been skeptical of what he sees as overblown safety concerns, preferring open research that lets everyone build countermeasures. His recent tweet revisiting the “too dangerous” framing fits that pattern.
The timing matters. This came up again in 2026 while debates about AI regulation are intensifying. The GPT-2 episode keeps getting referenced because it crystallizes a real tension: does restricting access actually prevent harm, or does it just slow down the people trying to defend against misuse? There’s no clean answer, which is probably why we’re still talking about it seven years later.
I couldn’t find LeCun’s specific 2019 trolling tweets in searches, but his general stance is well documented across OpenAI’s own posts, MIT Technology Review, and The Verge.
Impact Assessment