NVIDIA's Earnings "Explode"; Jensen Huang: The AI Inflection Point Has Arrived
NVIDIA has delivered a record-breaking earnings report, pushing back against outside doubts about an AI bubble. After the US market closed on February 25, NVIDIA announced its latest results, with revenue and profit both rising by double digits to record highs. Amid market concerns over an AI bubble and cloud providers' data-center capital expenditures, NVIDIA CEO Jensen Huang said on the earnings call that agentic AI has reached an inflection point, with computing power translating directly into revenue. He also previewed the upcoming GTC 2026 conference, confirming that a "world-first" new chip will be unveiled there.
Earnings of 2.2 Billion Yuan per Day
The financial report shows that in Q4, NVIDIA's revenue reached a record $68.127 billion, up 73% from $39.331 billion a year earlier, while net profit was $42.96 billion, up 94% from $22.091 billion. For the full year, NVIDIA posted revenue of $215.938 billion and net profit of $120.067 billion, equivalent to earning about $329 million (roughly 2.2 billion RMB) per day.
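The per-day figure follows directly from the full-year net profit. A minimal sanity check, using the figures reported above; the USD/CNY rate of about 6.7 is an assumption inferred from the article's own conversion, not a quoted number:

```python
# Sanity-check the "profit per day" figure quoted above.
# Net profit is from the article; the exchange rate is an assumption (~6.7 CNY/USD).
full_year_net_profit_usd = 120.067e9  # annual net profit, USD
usd_cny = 6.7                         # assumed exchange rate implied by the article

per_day_usd = full_year_net_profit_usd / 365
per_day_cny = per_day_usd * usd_cny

print(f"~${per_day_usd/1e6:.0f}M per day, ~¥{per_day_cny/1e9:.1f}B per day")
# → ~$329M per day, ~¥2.2B per day
```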
Focusing on core business lines, NVIDIA’s data center segment contributed $193.48 billion in revenue for the full year, a 68% increase, making it the main profit driver. In Q4, the data center segment performed best, with revenue reaching $62.3 billion, up 75% year-over-year and 22% quarter-over-quarter, accounting for over 91% of total company revenue. Breaking it down further, “computing” (mainly GPU products) contributed $51.3 billion, up 58%, and “networking” brought in $11 billion, up 263%.
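The Q4 segment numbers above are internally consistent, and the breakdown can be cross-checked in a couple of lines (all figures are from the article):

```python
# Cross-check the Q4 data-center breakdown quoted above (all figures from the article).
q4_total   = 68.127e9  # Q4 total company revenue, USD
compute    = 51.3e9    # "computing" (mainly GPU products)
networking = 11.0e9    # "networking"

data_center = compute + networking  # sums to the stated segment revenue of ~$62.3B
share = data_center / q4_total      # data center's share of total revenue

print(f"{data_center/1e9:.1f}B, {share:.1%} of revenue")
# → 62.3B, 91.4% of revenue
```

The sum of the two sub-segments matches the stated $62.3 billion, and the share works out to just over 91%, as the article says.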
This indicates that both the high-end GPUs used for large-model training and the high-speed networking components that support large clusters are growing. On the deployment side, beyond the well-known Blackwell series chips, NVIDIA's interconnect products, such as the NVLink architecture, Ethernet, and InfiniBand platforms, are also developing rapidly.
Some data disclosed in NVIDIA’s earnings exceeded market expectations. For FY2027, NVIDIA’s guidance remains optimistic, with first-quarter revenue projected at $78 billion, again surpassing analyst estimates. However, the Chinese market remains uncertain; the report notes that so far, NVIDIA has not generated any revenue from the H200 licensing project.
NVIDIA's CFO said that over the coming year the company will continue selling Blackwell chips while also shipping chips based on the Rubin architecture. On gaming, the company hopes memory supply will improve by the end of this year; for now, supply remains tight for the next few quarters. In automotive and robotics, robotaxi services are growing, with some operators expected to expand their fleets by millions of vehicles over the next decade.
“Compute Power Equals Revenue”
Wall Street has recently worried that heavy capital spending by US tech giants could trigger chain reactions. Market analysts estimate that Google, Microsoft, Meta, and Amazon will collectively invest nearly $700 billion this year to expand AI capacity, more than 60% above 2025 levels. A recent survey by a U.S. bank found that high capital expenditure by AI cloud providers is viewed as the second-biggest systemic credit risk.
U.S. bank analysts said a February survey of securities investors showed that 23% of respondents see an AI bubble as their biggest risk, up from 9% last December. Given the large volume of debt issued by US tech firms last year, some investors worry that cloud providers such as Amazon and Meta will issue still more bonds to fund AI expansion.
On one hand, heavy capital spending on AI infrastructure benefits NVIDIA; on the other, investors worry that if tech companies slow their investments, NVIDIA could be hit hard. Cantor Fitzgerald analysts said last week that the logic is simple yet the situation is nuanced: despite seemingly endless demand for computing power and very positive performance prospects for NVIDIA, concerns remain that large-scale data-center capital spending may peak this year.
During the conference call, an analyst asked Jensen Huang how clients' capital expenditure might affect their cash flow and whether spending could be nearing a peak. He responded that enterprise adoption of AI agents is soaring and that he is confident in the cash-flow growth of top clients. "The meaning of computing has changed. In the past, software ran on computers, and roughly $300-400 billion a year was spent buying a moderate number of them. Now demand has shifted to AI: generating tokens requires massive computing power, which translates directly into growth and revenue. The inflection point for agentic AI has arrived."
Huang remains optimistic that global data-center capital expenditure will reach $3 trillion to $4 trillion by 2030, stating, "AI will not regress; AI will only get better." Yue Yang, chief electronics analyst at Chuang Securities, believes that NVIDIA's latest results far exceeded expectations and that Huang's positive comments about the "agentic AI inflection point" greatly boosted global confidence in the prospects for AI computing power. As the core of AI hardware, strong demand for NVIDIA chips will directly drive prosperity across upstream hardware industries.
New Chips to Debut
From the outside, NVIDIA has secured a dominant position in the AI era through computing power supremacy. However, Huang aims to bring everything onto NVIDIA’s platform: “As we continue to build a complete NVIDIA AI ecosystem—covering AI, physical AI, AI physics, life sciences, biology, robotics, and manufacturing—we hope these ecosystems will be built on NVIDIA’s platform,” he said.
To this end, NVIDIA is strengthening its moat through capital measures. Huang mentioned that NVIDIA is close to reaching an agreement with OpenAI. The collaboration was first outlined last year as a potential $100 billion AI infrastructure project, and Huang described OpenAI as a "once-in-a-generation" company. In addition, NVIDIA paid about $20 billion late last year to license technology from the AI inference-chip startup Groq, bringing its core team on board to complete the inference-computing puzzle.
Meanwhile, the countdown to the highly anticipated GTC 2026 developer conference has begun; it opens on March 15 in San Jose, California. Huang said confidently in a media interview, "We have prepared several unprecedented new chips. All of the technology has been pushed to its limits."
Huang did not disclose the model names of the new chips. Some media outlets speculate they will come from two major lines: one a derivative of the Rubin series (such as the previously revealed Rubin CPX), which was showcased at CES 2026 with six newly designed chips now in full production; the other the next-generation Feynman series, described as a "revolutionary" product.
However, challengers are never far away. Custom AI chips such as Google's TPU and Amazon's Inferentia are contesting the general-purpose GPU's position in data centers. TrendForce reports that, alongside continued procurement of NVIDIA and AMD GPUs, companies are expanding their use of ASIC infrastructure to keep hardware well matched to AI applications and to optimize data-center construction costs. TrendForce forecasts that ASIC AI servers will account for 27.8% of AI server shipments in 2026, the highest share since 2023, with shipment growth outpacing that of GPU AI servers.