NVIDIA's Data Centers Are Built. Now Someone Has to Power Them.
- Jimmy Hayes


Two massive data centers sit dark in Santa Clara, California — right in NVIDIA's backyard.
Digital Realty's 430,000-square-foot facility. Another major project next door. Combined: nearly 100 megawatts of compute capacity. Both ready to go. Both empty.
The reason? The local utility can't supply the power. This isn't a chip problem. It's a power problem. And it's not just Santa Clara.
The Real Bottleneck in the AI Arms Race
The US grid interconnection queue has become a multi-year gauntlet. Developers filing today for new utility connections face 3-to-5-year wait times — longer in some markets. Meanwhile, AI compute demand is doubling every 18 months. The math doesn't work.
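To make that mismatch concrete, here is a back-of-envelope sketch. The figures are illustrative assumptions, not data from any utility: demand doubling every 18 months (as stated above) compounded over a hypothetical queue wait.

```python
# Rough sketch of the queue-vs-demand mismatch described above.
# Assumption: AI compute demand doubles every 18 months (per the article);
# wait times of 3-5 years are the article's stated range.

DOUBLING_MONTHS = 18

def demand_multiplier(wait_months: float) -> float:
    """How much demand grows while a project sits in the interconnection queue."""
    return 2 ** (wait_months / DOUBLING_MONTHS)

for years in (3, 4, 5):
    print(f"{years}-year wait -> demand up {demand_multiplier(years * 12):.1f}x")
```

A 3-year wait alone implies demand has quadrupled by the time the connection arrives — the capacity you filed for is obsolete before it energizes.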
NVIDIA has invested heavily in the compute layer. Jensen Huang has been on stage repeatedly saying the infrastructure build-out is the defining challenge of this decade. He's right. But infrastructure isn't just servers and fiber. Infrastructure is power — and right now, the grid can't keep up.
What SoftBank Already Figured Out
When Masayoshi Son broke ground on his $33 billion, 10-gigawatt data center in Ohio, he said: "We will not take electricity away from the grid. We will generate the entire electricity use by ourselves." That's not a workaround. That's the new standard.
The Data Power Supply Difference
Data Power Supply was built for exactly this moment. We've been deploying grid-independent infrastructure — turbines, generators, battery energy storage systems (BESS), microgrids, modular data centers — since before this problem had a name. When a site can't get utility power for three years, we can have a turnkey power solution operational in weeks.
No interconnection queue. No ratepayer burden. No 2-year permitting cycle. The chips are ready. The power needs to be too. If your AI buildout is waiting on the grid — it doesn't have to.
— Jimmy Hayes, Founder & CMO, Data Power Supply ⚡

