Today’s biggest event is NVIDIA GTC, basically an AI version of A Brief History of Humankind.
Jensen Huang hasn’t even taken the stage yet, but the pre-release information alone is enough to fill a book.
Tonight, I’ve summarized three main highlights. Let’s go, friends, follow me.
The previous generation Blackwell was already impressive, right? Soon, the new Vera Rubin chip will go into mass production.
What makes Vera Rubin so powerful? Simply put, one word: cheap.
Running the same AI models, it cuts chip count to a quarter and inference costs by 90%. Ninety percent, friends. The top cloud providers, AWS, Microsoft, and Google, are already on board.
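The claimed savings are easy to sanity-check with back-of-envelope arithmetic. In this sketch, only the two ratios (a quarter of the chips, 90% lower cost) come from the claims above; the cluster size and per-query cost are made-up illustrative numbers.

```python
# Back-of-envelope check of the quoted Vera Rubin savings.
# Only the ratios (1/4 chip count, 90% lower inference cost)
# come from the text; the absolute numbers are illustrative.

blackwell_chips = 100            # hypothetical cluster size
blackwell_cost_per_query = 1.0   # normalized inference cost

rubin_chips = blackwell_chips / 4                  # "chip count reduced to a quarter"
rubin_cost_per_query = blackwell_cost_per_query * 0.1  # "costs cut by 90%"

print(rubin_chips)           # 25.0 chips for the same workload
print(rubin_cost_per_query)  # 0.1, one tenth of the cost per query
```

In other words, for every dollar a cloud provider spends serving a model today, the claim is ten cents tomorrow, on a quarter of the hardware.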
Earlier, on the earnings call, Jensen Huang said Groq would be folded into NVIDIA's ecosystem as an expansion architecture, much like Mellanox was acquired to strengthen networking.
Groq's LPU (Language Processing Unit) and NVIDIA GPUs sit in the same data center: GPUs handle understanding the problem, while LPUs rapidly produce the answer.
This division of labor between the two chips, working together, directly reduces latency in AI agent scenarios.
AI agents do tasks for people. A single task might require dozens of model calls, each one burning inference compute while the user waits; too much latency and the experience falls apart.
Inference has two steps: first understanding your question (the prefill), then generating the answer token by token (the decode). GPUs excel at the first step, but for the second, producing tokens quickly and steadily, Groq's LPU is better.
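The two-phase split can be sketched as a toy control-flow loop: one parallel "prefill" pass over the whole prompt, then a serial "decode" loop that emits one token at a time. Nothing here calls a real model or any NVIDIA/Groq API; `prefill` and `decode_step` are stand-ins to show the shape of the work each chip is pitched at.

```python
# Toy sketch of the two inference phases described above.
# No real model is involved; the point is the control flow.

def prefill(prompt_tokens):
    # Process the entire prompt in one parallel pass; GPUs shine here.
    return {"context": list(prompt_tokens)}  # stand-in for a KV cache

def decode_step(state):
    # Emit one token using the cached context. This serial,
    # latency-bound loop is where Groq-style chips are pitched.
    next_token = f"tok{len(state['context'])}"
    state["context"].append(next_token)
    return next_token

state = prefill(["why", "is", "the", "sky", "blue", "?"])
answer = [decode_step(state) for _ in range(4)]
print(answer)  # four placeholder tokens, generated one at a time
```

The prefill runs once per request, but the decode loop runs once per output token, which is why shaving latency off that second phase matters so much for agents.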
Is 20 billion dollars expensive?
Think about it—every company in the future will run hundreds of agents, each adjusting models thousands of times a day.
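That scale argument becomes concrete with rough numbers. Only the orders of magnitude ("hundreds of agents", "thousands of calls a day") come from the claim above; the specific figures are assumed midpoints.

```python
# Rough daily inference traffic for one company, using the
# orders of magnitude from the text. Both numbers are assumptions.
agents_per_company = 300      # "hundreds of agents"
model_calls_per_agent = 2000  # "thousands of times a day"

daily_calls = agents_per_company * model_calls_per_agent
print(f"{daily_calls:,} model calls per day")  # 600,000 calls
```

Even at these conservative guesses, a single company generates hundreds of thousands of inference calls per day, which is the volume argument behind the price tag.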
The platform in question, NemoClaw, is open source: companies can deploy it to have AI employees handle workflows, data processing, and project management. Rumor has it they're already talking with Salesforce and Adobe.
What's interesting is that NemoClaw doesn't require NVIDIA chips. Think about the logic: selling chips earns only hardware margins, while setting the rules lets you earn across the entire chain. Jensen Huang understands this clearly.
Most likely, the next-generation architecture, Feynman, will make its debut, with mass production expected in 2028 using TSMC’s most advanced 1.6nm process.
There’s also a lesser-known rumor I find quite interesting.
NVIDIA is said to be releasing laptop processors, two models aimed at gaming. That means the graphics card maker is now competing for CPU market share.
Tonight, I feel Jensen Huang is destined to become a legendary figure.