Latest Interview With Jensen Huang (Part 2): Why Doesn’t NVIDIA Build Hyperscalers Itself?
In the second segment of the interview, Jensen Huang directly addressed the threat that TPUs and ASICs pose to NVIDIA. He emphasized that NVIDIA is building not a single AI chip but an accelerated computing platform, with the focus on integration across the entire ecosystem. Much like the chip war between the U.S. and China, the AI race is not about winning or losing at any single point; what matters is whether the whole technology stack can grow stronger together.
When faced with the criticism, "Since the essence of AI is massive matrix multiplication, why not let a more specialized, TPU-like architecture take the lead?", Jensen Huang's response was that matrix multiplication is important, but it is not everything in AI. From new attention mechanisms, hybrid SSMs, and the fusion of diffusion and autoregressive approaches, to distributed model execution and architectural innovation, progress in AI often comes from algorithmic innovation rather than simply pushing Moore's Law forward through hardware.
Since N