Digi Power X pivots from crypto to AI with modular Tier III B200-powered data centers and a neocloud GPU service.
Digi Power X is transitioning its U.S. power and crypto infrastructure into Tier III modular AI data centers, has completed its first NVIDIA B200 GPU cluster in Alabama, and plans to launch a GPU-as-a-Service platform called NeoCloudz in Q1 2026. The company is standardizing on its ARMS 200 modular data center design, partnering with Supermicro for global distribution, and targeting up to 195 MW of AI capacity by 2027.
Analysis:
This is the classic crypto-to-AI conversion play, but with a more serious data center story than most miners. ARMS 200 is not just some containers in a field. Tier III, liquid cooling, and multi-MW modular blocks put this closer to real enterprise-grade AI colo than to opportunistic GPU shacks.
The B200 cluster in Alabama matters for two reasons. First, it signals they have early access to NVIDIA’s next-gen silicon and are positioning themselves as a B200-ready landing zone. Second, it shows they are designing for training and large-scale inference, not just small batch AI jobs. That aligns with where real enterprise and research workloads are going.
The Supermicro alignment is smart. Digi Power X does not need to be a server OEM. Plugging into Supermicro’s enterprise channel turns ARMS from “our sites only” into a product that can be deployed globally, including for sovereign AI and government builds that want validated, repeatable Tier III pods. That is how you move from a single-operator miner to a modular AI infrastructure vendor.
NeoCloudz is clearly a neocloud move. GPU-as-a-Service on top of their own Tier III hardware lets them sell capacity both as bare infrastructure and as a consumption model. If they execute, they can become a regional AI neocloud for customers who want GPU access with more control and locality than the hyperscalers, but more abstraction than raw colo.
The power roadmap is the real signal. Nearly 200 MW available now, a 2026 ramp from 5 MW to 55 MW, and a target of 140 MW of Tier III AI critical power by 2027 drop them squarely into the AI data center buildout. These are U.S. sites in Alabama and New York, with North Carolina planned, which positions them well for U.S.-centric and potentially sovereign or regulated workloads that need domestic, controlled facilities.
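To put that ramp in perspective, here is a minimal back-of-envelope sketch of how many accelerators each power tranche could host. The ~1 kW per B200-class GPU and the 1.5x facility/host overhead factor are illustrative assumptions for this sketch, not figures from Digi Power X.

```python
# Back-of-envelope sizing: GPUs supportable per block of critical power.
# Per-GPU draw and overhead factor are assumptions, not vendor specs.

def gpus_supported(critical_power_mw: float,
                   gpu_watts: float = 1000.0,
                   overhead_factor: float = 1.5) -> int:
    """Estimate GPU count for a given power envelope.

    gpu_watts: assumed draw per B200-class accelerator (~1 kW ballpark,
        treated here purely as an assumption).
    overhead_factor: assumed multiplier covering CPUs, networking,
        cooling, and facility losses (roughly PUE plus host overhead).
    """
    watts_available = critical_power_mw * 1_000_000
    return int(watts_available // (gpu_watts * overhead_factor))

# The roadmap tranches from the press release: 5 MW -> 55 MW -> 140 MW.
for mw in (5, 55, 140):
    print(f"{mw} MW -> ~{gpus_supported(mw):,} GPUs (rough estimate)")
```

Under these assumptions, the jump from 5 MW to 140 MW is the difference between a few-thousand-GPU cluster and a training-scale fleet, which is why the power roadmap, not the first Alabama cluster, is the headline.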
Capital structure is still miner-like. A meaningful portion of their liquidity is in BTC and ETH, with an at-the-market equity program funding the buildout. That is fine for capex in this cycle, but it is volatile. Enterprises and governments will care more about long-term operational stability than how many BTC they hold, so they will need to show stable financing and long-lived contracts, not just speculative balance sheet assets.
From a customer perspective, this looks like an attractive option for:
- AI startups priced out of hyperscaler GPUs,
- Enterprises experimenting with AI that want dedicated B200 clusters without full on-prem build,
- U.S. public sector and regulated buyers that want Tier III and domestic hosting but do not want to build their own AI data center.
The Big Picture:
This sits at the intersection of several macro trends.
AI data center construction surge: Digi Power X is a textbook case of “power first, data center second.” They already control nearly 200 MW and are refitting it from mining to Tier III AI. That is exactly what we are seeing across ex-crypto and industrial power operators turning into AI landlords. The presence of liquid cooling and Tier III design shows that the industry is standardizing on high-density, resilient AI-specific builds rather than retrofitted generic colo.
GPU availability and supply chain: B200 deployment in Alabama illustrates how the GPU arms race is filtering beyond hyperscalers. NVIDIA is clearly willing to seed next-gen GPUs into modular non-hyperscale environments that can meet power and cooling constraints. Partnering with Supermicro for validated clusters means Digi Power X is plugging into the mainline NVIDIA ecosystem instead of trying to roll its own stack.
Neocloud vs public cloud: NeoCloudz is classic neocloud. Vertical focus on GPU, limited service surface area, tighter integration with owned facilities, and likely more predictable, contracted usage than hyperscaler on-demand. For enterprises looking at cloud repatriation or hybrid AI, a platform like NeoCloudz can be the GPU-heavy sidecar while core applications stay in existing clouds or on-prem.
Sovereign AI and locality: U.S.-based Tier III modular pods, with a government-ready channel via Supermicro, slot into sovereign AI patterns. Countries, states, and agencies that do not want to depend entirely on hyperscalers can either deploy ARMS pods on their own soil or consume capacity from Digi Power X’s regulated U.S. sites. That distributed, repeatable modular model fits how a lot of sovereign AI programs are thinking about risk and control.
Energy and facilities: The whole story is constrained and enabled by power. They already have 196.7 MW between Alabama and New York and plan another 200 MW in North Carolina by 2028. That is the real asset in this cycle. Water and broader environmental details are not spelled out, but the emphasis on liquid cooling suggests they are designing for efficiency and very high rack densities, which will be critical as power grids tighten and AI loads spike.
Vendor ecosystem dynamics: Supermicro gains another modular data center partner and strengthens its role as the default hardware spine for neocloud GPU providers. Digi Power X gets distribution and credibility without building its own hardware brand. NVIDIA benefits from more validated, non-hyperscale landing zones for B200, which spreads demand and derisks single-channel dependency.
Enterprise AI adoption: For enterprises, this is one more specialized option in a crowded but still under-supplied GPU market. Expect customers to treat Digi Power X as either:
- a GPU “wholesale” provider behind someone else’s AI platform, or
- a direct neocloud partner for heavy training and inference that pairs with their existing cloud or on-prem stack.
It will not replace hyperscalers, but it can absolutely siphon off GPU-hungry workloads and long-term reserved capacity deals.
Signal Strength: High
Source: https://www.digipowerx.com/press-releases/c3jyrrdlxvd30uw7d9dehasb