Jensen Huang Is Satoshi Nakamoto

The token you once had to believe in to see can now be seen without belief. It is the next unit after the watt, the ampere, and the bit.

In January 2009, an anonymous person invented something called a “token.” You invest computing power to earn tokens, which circulate, are priced, and are traded within a consensus network. This gave birth to the entire crypto economy. More than a decade later, people still debate whether those tokens have any value.

In March 2025, a man in a leather jacket redefined another kind of token. You invest computing power to produce tokens, which are consumed the instant they are generated, in AI inference: thinking, reasoning, coding, decision-making. This is accelerating the AI economy. No one debates whether these tokens have value, because you used millions of them just this morning.

Two types of tokens, same name, same underlying structure: input computing power, output valuable things.

In March 2026, I sat at NVIDIA GTC listening to Jensen Huang deliver a keynote that was barely a sales pitch. Yes, he announced Vera Rubin, a product combining a CPU and a GPU. But this time he didn’t talk about chip specs or process nodes; he presented a complete economics of token production, pricing, and consumption—

Which model corresponds to which token speed; which token speed corresponds to which price range; what hardware level supports each price range.

He even provided data center hardware allocation plans for CEOs and decision-makers holding corporate budgets: 25% for free tier, 25% for mid-range, 25% for high-end, 25% for premium.

Yes, unlike two years ago with Blackwell, he wasn’t selling a specific GPU model this time. He was selling something bigger. After two hours, I think the one thing he most wanted to say was: welcome, come consume tokens, the tokens only Nvidia’s factories can produce.

At that moment, I realized that this man, and the person who mined the first token 17 years ago, are doing the exact same thing structurally.

Same Conversion Rules

The anonymous figure known as “Satoshi Nakamoto” wrote a nine-page white paper in 2008, designing a set of rules: invest computing power to complete a mathematical proof (Proof of Work), and earn crypto tokens as rewards.

The brilliance of this rule is that it requires no trust: if you accept the rules, you automatically participate in the economy. The rules clearly worked; after all, they managed to bring even the schemers and fraudsters into the same game.

And Jensen Huang, on the GTC 2026 stage, did something structurally identical.

He showed a diagram illustrating the relationship and tension between inference efficiency and token consumption: Y-axis is throughput (how many tokens produced per megawatt of power), X-axis is interactivity (perceived token speed per user). Below the X-axis, he marked five pricing tiers: Free with Qwen 3, $0/million tokens; Medium with Kimi K2.5, $3/million tokens; High with GPT MoE, $6/million tokens; Premium with GPT MoE 400K context, $45/million tokens; Ultra at $150/million tokens.
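The tier structure on that slide is simple enough to express directly in code. A minimal sketch, using the prices and model pairings exactly as reported above; the dictionary layout and the `monthly_bill` helper are illustrative inventions, not anything Nvidia ships:

```python
# Pricing tiers from the keynote slide as described above: dollars per
# million tokens, with the model named for each tier (Ultra had none).
TIERS = {
    "free":    {"model": "Qwen 3",               "usd_per_m": 0.0},
    "medium":  {"model": "Kimi K2.5",            "usd_per_m": 3.0},
    "high":    {"model": "GPT MoE",              "usd_per_m": 6.0},
    "premium": {"model": "GPT MoE 400K context", "usd_per_m": 45.0},
    "ultra":   {"model": None,                   "usd_per_m": 150.0},
}

def monthly_bill(tokens_per_day: float, tier: str, days: int = 30) -> float:
    """USD cost of consuming tokens_per_day tokens per day for `days` days."""
    rate = TIERS[tier]["usd_per_m"] / 1_000_000
    return tokens_per_day * days * rate

# A developer burning 2M tokens a day on the "high" tier:
print(round(monthly_bill(2_000_000, "high"), 2))  # 360.0
```

The point of the slide is exactly this mapping: once tier, model, and price are a lookup table, hardware can be provisioned against it.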

This diagram could almost serve as the cover of Jensen Huang’s “Token Economics” white paper.

Satoshi Nakamoto defined “valuable computation”: finding a SHA-256 hash below a difficulty target (Proof of Work) is valuable. Jensen Huang defined “valuable inference”: producing tokens at a specific speed, under a given power constraint, for a specific scenario is valuable.
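Satoshi’s definition of valuable computation fits in a dozen lines. This is a toy sketch of the idea, not Bitcoin’s actual consensus code (Bitcoin double-hashes an 80-byte header and compares against a compact-encoded target; here we simply demand a number of leading zero bits):

```python
import hashlib

def proof_of_work(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(header || nonce) has at least
    difficulty_bits leading zero bits. This is the 'valuable computation'
    the protocol rewards: expensive to produce, trivial to verify."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# 16 bits of difficulty: ~65,000 hashes on average, instant on a laptop.
# Each extra bit doubles the expected work; that knob is the whole economy.
nonce = proof_of_work(b"block header bytes", 16)
print(nonce)
```

Huang’s definition has no such closed form, which is exactly why he needed a two-hour keynote to state it.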

Neither Satoshi nor Huang directly produces tokens; each defines the rules and mechanisms by which tokens are created and priced.

A sentence Huang said on stage could almost be directly included in the abstract of the white paper on token economics—

Tokens are the new commodity, and like all commodities, once they reach an inflection point, once they mature, they will segment into different parts.

Tokens are the new commodity, and once mature, commodities naturally stratify. He’s not describing the current state; he’s predicting a market structure, and precisely positioning his hardware product lines within each layer of it.

The production processes of the two tokens even share a semantic symmetry: one act of extraction is called mining, the other inference; different names for the same conversion of computation into tokens.

Mining and inference are, at bottom, the same machine for turning electricity into money. Miners spend electricity to mint crypto tokens, then sell them; AI models and agents spend electricity to generate AI tokens, which are sold by the million. The intermediate steps differ, but the two ends are identical: on the left is the electricity meter, on the right is revenue.
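That “electricity meter on the left, revenue on the right” structure is one formula for both businesses. A sketch in which every number is a made-up assumption for illustration:

```python
# One formula for both businesses: units produced per kWh, times price
# per unit, minus the power bill. All numbers below are assumptions.

def margin_per_kwh(price_per_unit: float, units_per_kwh: float,
                   power_cost_per_kwh: float) -> float:
    """Gross margin earned per kilowatt-hour of electricity consumed."""
    return price_per_unit * units_per_kwh - power_cost_per_kwh

# A miner: assume 1e-6 BTC mined per kWh, BTC at $60,000, power at $0.05/kWh.
print(round(margin_per_kwh(60_000, 1e-6, 0.05), 2))            # 0.01

# An inference shop: assume 200,000 tokens per kWh, sold at $6 per million.
print(round(margin_per_kwh(6 / 1_000_000, 200_000, 0.05), 2))  # 1.15
```

Only the middle term changes between the two businesses; the power bill and the cash register stay where they are.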

Two Ways to Write Scarcity

The most important design decision Satoshi Nakamoto made wasn’t Proof of Work, but the 21 million cap on Bitcoin. He used code to create artificial scarcity—no matter how many miners flood in, the total Bitcoin supply will never exceed 21 million. This scarcity anchors the entire crypto economy’s value.

Jensen Huang, by contrast, relies on physical law to create natural scarcity. He says—

“You still have to build a gigawatt data center. You still have to build a gigawatt factory, and that one gigawatt factory for 15 years amortized… costs about $40 billion even with nothing on it. It’s $40 billion. You’d better make sure you put the best computer system on that thing so you can have the lowest token cost.”

A 1GW data center will never become 2GW. This isn’t a code limit; it’s a physical law.

Land, electricity, cooling—each has a physical limit. The amount of tokens a $40 billion factory can produce over 15 years depends entirely on the computing architecture you put inside.
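The quote implies a cost floor you can compute on the back of an envelope: amortize $40 billion over 15 years of seconds, then divide by the site’s token throughput. The throughput figure below is an assumption for illustration, not a number from the keynote:

```python
# Cost floor implied by the quote: a $40B, 1 GW facility amortized over
# 15 years, before paying for the chips or their electricity.
FACILITY_COST_USD = 40e9                  # from the keynote quote
LIFETIME_SECONDS = 15 * 365 * 24 * 3600   # 15 years

def facility_cost_per_m_tokens(site_tokens_per_second: float) -> float:
    """Amortized facility cost (USD) per million tokens produced."""
    usd_per_second = FACILITY_COST_USD / LIFETIME_SECONDS
    return usd_per_second / site_tokens_per_second * 1_000_000

# Assume the whole site sustains 100M tokens/s (illustrative only):
print(round(facility_cost_per_m_tokens(100e6), 4))  # 0.8456
```

At that assumed throughput, the building alone contributes roughly $0.85 per million tokens, which is why the computing architecture you install inside it decides whether the $40 billion is ever recovered.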

Satoshi’s scarcity can be forked. If you don’t like the 21 million cap, fork a new chain, change it to 200 million, call it whatever you like, and issue a new white paper. People have indeed done this, happily.

But Huang’s created scarcity cannot be forked. You can’t fork the second law of thermodynamics, the capacity of a city’s power grid, or the physical land area.

Yet, whether it’s Satoshi or Huang, their creation of scarcity leads to the same result: a hardware arms race.

The history of mining: CPU → GPU → FPGA → ASIC, each generation of dedicated hardware rendering the previous one obsolete. The history of AI training and inference is repeating it: Hopper → Blackwell → Vera Rubin → Groq LPU. General-purpose hardware opens the game; specialized hardware closes it. The Groq LPU Huang showcased at this year’s GTC, following the Groq acquisition, is a deterministic dataflow processor: static compilation, no dynamic scheduling, 500MB of on-chip SRAM. Architecturally it is an ASIC for inference, doing one thing and doing it to the extreme.

Interestingly, GPUs have played a key role in both waves.

Around 2013, miners discovered that GPUs were better suited to crypto mining than CPUs, and Nvidia’s graphics cards sold out. Ten years later, researchers found that GPUs were optimal for training and running AI models, and Nvidia’s data center cards sold out again. As a class of processor, the GPU has served two generations of the token economy.

The difference is that the first time, Nvidia benefited passively, and the windfall simply ended. The second time, as AI compute shifted from pretraining to inference, Nvidia moved fast to seize the opportunity, designing the entire game and becoming the rule-maker of AI’s “mining” economy.

The Most Profitable Shovel in the World

In the gold rush, the biggest profits went not to the prospectors but to the sellers of picks and jeans, like Levi Strauss. In the mining boom, they went not to the miners but to the makers of mining hardware: Bitmain and Wu Jihan. In the AI pretraining and inference waves, they have gone not to the foundation models or the agents but to Nvidia, selling GPUs.

But honestly, the roles of Bitmain and Nvidia in their respective industries are no longer comparable.

Bitmain only sells mining machines; Nvidia was once a supplier to Bitmain. When you buy a rig, which coin you mine, which pool you join, and at what price you sell have nothing to do with Bitmain. It is a pure hardware supplier, earning one-time equipment margins.

Nvidia is different. It doesn’t just sell hardware. Since the AI inference boom of 2025, it has come to define what this GPU “mines,” how tokens are priced, who buys them, and how data center compute is allocated. All of it is in Huang’s slides: a market divided into five tiers, each mapped to specific models, context lengths, interaction speeds, and prices. Nvidia has standardized and formatted the future inference-driven AI market.

Around 2018, global hash power was concentrated in a few large pools (F2Pool, Antpool, BTC.com) that competed for hash-rate share, while the hardware supply was highly centralized at Bitmain.

Today’s Nvidia looks much the same: roughly 60% of its revenue comes from competing hyperscalers (AWS, Azure, GCP, Oracle, CoreWeave) and 40% from AI-native companies, sovereign AI projects, and enterprise clients. The big “mining pools” contribute most of the revenue; the smaller “miners” provide resilience and diversification.

The structures of the two ecosystems are identical. But Bitmain later faced competitors (Shenma, Core Technology, Canaan) eating into its share: a mining ASIC is a comparatively simple design, which gave challengers an opening. Shaking Nvidia’s dominance looks far harder. Twenty years of the CUDA ecosystem, hundreds of millions of installed GPUs, sixth-generation NVLink interconnect, and the disaggregated inference architecture absorbed from Groq give Nvidia a technical depth and an ecosystem moat that blunt most lines of attack.

This could last another 20 years.

The Fundamental Fork of the Two Tokens

What makes cryptocurrency and AI tokens fundamentally different is the motivation and psychology behind their use.

Crypto tokens are driven by speculation. No one “needs” Bitcoin to get work done. Every white paper claiming a blockchain token solves a real problem is a scam. You hold crypto because you believe someone will buy it from you at a higher price in the future. Bitcoin’s value comes from a self-fulfilling prophecy: if enough people believe it is valuable, it is. This is a faith economy.

AI tokens, by contrast, are driven by productivity. Nestlé needs tokens for supply chain decisions: its supply chain data used to refresh every 15 minutes and now refreshes every 3, cutting costs by 83%, value that maps directly onto the P&L. Nvidia’s own engineers now spend tokens to write code instead of writing it by hand; research teams spend tokens to do science. You don’t need to believe these tokens are valuable; you just use them, and the value is self-evident in the using.

This is the core difference between the two tokens. Crypto tokens are produced to be held and traded—their value lies in not being used. AI tokens are produced to be consumed immediately—their value lies in their use at the moment of consumption.

One is digital gold, accumulating value as stored; the other is digital electricity, burned upon production.

This difference means the AI token economy won’t bubble the way the crypto economy did. Bitcoin’s wild swings are driven by speculation; AI token prices are driven by usage and production cost. As long as AI stays useful, as long as people still code with Claude Code, write reports with ChatGPT, and run business workflows with agents, demand for tokens won’t collapse. It rests not on faith but on indispensability.

In 2008, the Bitcoin white paper had to argue at length for why a decentralized electronic cash system would be valuable. Seventeen years later, people are still debating it.

In 2026, token economics stirs no controversy; it is accepted as consensus without debate. When Jensen Huang said at GTC that “tokens are the new commodity,” no one pushed back, because everyone in the audience had burned millions of tokens with Claude Code or ChatGPT that very morning. They don’t need to be convinced of the tokens’ value; their credit card bills already prove it.

In this sense, Jensen Huang truly is a copy of Satoshi Nakamoto: the one who monopolized the mining hardware, defined the token’s use cases and standards, and hosts an annual show at San Jose’s SAP Center to unveil the next generation of “mining machines” for AI training and inference.

Satoshi Nakamoto carries a restrained, romantic allure: design the rules, hand them to code, then disappear. That is the cyberpunk ideal. Huang is more the businessman: design the rules, maintain them personally, improve them constantly, and dig the moat ever deeper.

The token you once had to believe in to see can now be seen without belief. It is the next unit after the watt, the ampere, and the bit.

Source: Silicon Position

