CoreWeave CEO Michael Intrator was interviewed by the All-In Podcast during NVIDIA's GTC conference. He recounted the company's transformation from a computational hedge fund into a dedicated AI infrastructure provider, explained how it pioneered GPU-backed financing, and discussed the energy supply challenges facing AI development and the industry's prospects as capital pours in.
Why did CoreWeave shift from miners to a professional infrastructure provider?
CoreWeave's origins lie not in traditional cloud services but in a computational hedge fund focused on natural gas. In its early days, the team used GPUs to mine Bitcoin and Ethereum; after weathering the crypto winter, it gradually pivoted to becoming a GPU infrastructure provider. CoreWeave first built CGI rendering services to help animation creators render images, then expanded into batch computing. Around 2020 to 2021, the company began seriously exploring how to support neural network model development on its GPUs.
Intrator said the company's competitive advantage lies in specialization: its services occupy the layer between NVIDIA's hardware and the AI models built on top of it. Rather than competing head-on with general-purpose, large-scale data center operators such as AWS, CoreWeave focuses on high-efficiency, dedicated computing resources tuned to the hardware performance needs of AI developers.
How does innovative financing address massive capital expenditures?
Facing steep hardware procurement costs, CoreWeave pioneered a lending model that uses GPUs as collateral and ties the debt structure directly to long-term customer contracts. Intrator explained that under this mechanism, incoming cash flow first covers data center operations, electricity, and debt interest, with the remainder flowing back to the company. This innovative capital structure enabled CoreWeave to raise approximately $35 billion in 18 months, demonstrating strong financial agility.
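The priority ordering Intrator describes can be sketched as a simple cash-flow waterfall. All names and figures below are hypothetical illustrations; the actual contract terms are not public.

```python
def distribute_revenue(revenue, opex, power_cost, debt_service):
    """Pay data center operations, electricity, and debt interest in
    priority order; whatever remains flows back to the company."""
    remaining = revenue
    payments = {}
    for name, amount in [("operations", opex),
                         ("electricity", power_cost),
                         ("debt_service", debt_service)]:
        paid = min(remaining, amount)  # senior claims are paid first, capped by cash on hand
        payments[name] = paid
        remaining -= paid
    payments["company"] = remaining
    return payments

# Hypothetical monthly figures (in $ millions)
print(distribute_revenue(100, 20, 15, 40))
# -> {'operations': 20, 'electricity': 15, 'debt_service': 40, 'company': 25}
```

The point of the structure is that lenders are repaid before equity sees any cash, which is what makes GPU-collateralized debt financeable against long-term contracts.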
How does CoreWeave address GPU shortages?
Leveraging its long-term strategic partnership with NVIDIA, CoreWeave can deploy the latest architectures (such as the H100, H200, and GB200) at commercial scale quickly. The company has also secured massive long-term contracts and purpose-built financing mechanisms, allowing it to acquire hardware at unprecedented speed.
GPU energy consumption becomes a major bottleneck for AI expansion
Intrator highlighted a key observation: the main constraint on expanding AI infrastructure is no longer chip supply but power supply. High-performance GPUs significantly increase data center energy consumption; CoreWeave reports its power capacity has reached 4.5 gigawatts, roughly comparable to the electricity demand of the entire San Francisco Bay Area. Because GPU clusters require very high power density, traditional infrastructure struggles to support them, prompting the industry to seek new energy solutions. To sustain growth, future data centers will increasingly be built near nuclear or other clean energy sources to meet the irreversible energy demands of AI and robotics.
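To put the 4.5-gigawatt figure in perspective, a back-of-envelope conversion shows what it implies in annual energy terms, under the simplifying assumption of continuous draw at full capacity:

```python
# Convert a continuous power draw in GW to annual energy in TWh.
# Assumes 24/7 operation at full capacity, which overstates real-world
# usage but gives the order of magnitude.

POWER_GW = 4.5          # figure cited in the interview
HOURS_PER_YEAR = 8760   # 365 days * 24 hours

annual_twh = POWER_GW * HOURS_PER_YEAR / 1000  # GW * h = GWh; /1000 -> TWh
print(f"{annual_twh:.1f} TWh/year")  # -> 39.4 TWh/year
```

Tens of terawatt-hours per year is utility-scale demand, which is why siting near dedicated generation becomes a first-order concern.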
The capital effect of AI offers a bright future
Intrator argued that AI lowers the capital threshold for starting a company. A million tokens that initially sold for just over $32 now cost only 9 cents. With a brilliant idea, you can open a model, have it write code and run simulations, and create things that never existed before. AI opens a whole new realm, breaking through many previous limitations.
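The scale of that price drop is easy to check with the figures quoted above. The 500-million-token workload below is a hypothetical example, not from the interview:

```python
# Compare inference costs at the two per-million-token prices
# cited in the interview.

OLD_PRICE = 32.0   # $ per million tokens, early pricing
NEW_PRICE = 0.09   # $ per million tokens, current pricing

tokens = 500_000_000  # hypothetical workload: 500 million tokens

old_cost = tokens / 1_000_000 * OLD_PRICE
new_cost = tokens / 1_000_000 * NEW_PRICE
ratio = OLD_PRICE / NEW_PRICE

print(f"old: ${old_cost:,.0f}, new: ${new_cost:,.2f}, ~{ratio:.0f}x cheaper")
# -> old: $16,000, new: $45.00, ~356x cheaper
```

A workload that once cost as much as a small funding round now costs less than a dinner, which is the "capital effect" Intrator describes.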
This article, in which the CoreWeave CEO shares GPU leverage strategies and an optimistic vision of AI's capital effects, first appeared on Chain News ABMedia.