[AI + Hardware] "Lobster" OpenClaw Triggers Shift in Hardware Demand — Will Memory Prices Continue to Rise? Morgan Stanley: Execution Requires More DRAM Than Thinking
Recently, OpenClaw has sparked a “lobster farming” craze. Morgan Stanley pointed out that AI agents represented by OpenClaw are driving changes in hardware demand. The AI bottleneck has shifted from computing power to data processing, requiring more DRAM (Dynamic Random Access Memory) to perform tasks rather than just thinking, leading to tighter DRAM supply.
The firm raised SK Hynix’s target price to 1.3 million Korean won and Samsung Electronics’ common stock target price to 251,000 Korean won, both maintaining an “overweight” rating.
The report predicts that memory prices will accelerate year-over-year, currently in the mid-phase of an upward trend. Specifically, by Q2 2026, the price of high-end DRAM DDR5 used for advanced computing is expected to surge over 50% quarter-over-quarter, while more widely used DDR4 is projected to increase by 30% to 40%. NAND eSSD products for servers could potentially double in price.
Hardware Bottleneck Shift and Tight DRAM Demand in AI “Autonomous Execution” Mode
Unlike generative AI like ChatGPT, which answers questions one at a time, OpenClaw functions more like an efficient team of assistants. It autonomously searches web information, calls external software tools, reads and analyzes documents, and even executes code, ultimately deriving complex outputs.
Morgan Stanley believes that multi-step coordination, tool invocation, and process orchestration shift the hardware bottleneck from GPUs (Graphics Processing Units) to CPUs (Central Processing Units) and memory. Longer CPU computation times now slow down overall task execution. In addition, multiple agents need to continuously share context, offload KV caches (Key-Value caches), and store and retrieve intermediate results, heavily consuming DRAM capacity.
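To make the DRAM pressure concrete, a rough back-of-envelope calculation shows how quickly KV cache memory grows with context length. This is an illustrative sketch, not from the report: the model configuration below (32 layers, 32 KV heads, head dimension 128, fp16) is an assumed 7B-class setup chosen only to show the arithmetic.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Bytes needed to hold the K and V tensors for one sequence.

    Per layer, the cache stores 2 tensors (K and V), each of shape
    [num_kv_heads, seq_len, head_dim], at dtype_bytes per element.
    """
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * dtype_bytes

# Assumed 7B-class configuration: 32 layers, 32 KV heads,
# head dim 128, fp16 (2 bytes per element), 32k-token context.
per_seq = kv_cache_bytes(32, 32, 128, 32_768)
print(f"{per_seq / 2**30:.1f} GiB")  # 16.0 GiB for a single 32k context
```

At 16 GiB per 32k-token context under these assumptions, a handful of concurrent agents each carrying long shared contexts can exhaust the memory of a typical server, which is the mechanism behind the tighter DRAM demand the report describes.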
In traditional large language model (LLM) workloads, GPU computing power was seen as the decisive bottleneck: CPUs only needed to convert tokens (the basic text units LLMs process, which also serve as billing units) into text, and DRAM was mainly used for cache read/write tasks.