QScale is among the first colocation providers to host large-scale NVIDIA GB300 NVL72 clusters. Our infrastructure was designed for exactly this moment: the power density, liquid cooling, and floor capacity that Blackwell Ultra demands — already in production at Q01.
72 Blackwell Ultra GPUs per rack
1+ ExaFLOPS of FP4 compute per rack
21 TB of HBM3e memory per rack
130 TB/s of NVLink 5 bandwidth per rack
The NVIDIA GB300 NVL72 is the most powerful rack-scale AI system ever produced — 72 Blackwell Ultra GPUs, 36 Grace CPUs, and over 1 exaFLOP of FP4 compute in a single liquid-cooled rack. It requires infrastructure that most data centers simply cannot provide. Q01 was designed for it from day one.

GB300 NVL72 racks are fully liquid-cooled — 90% of heat goes to liquid, 10% to air. Q01 was engineered with chilled water loops, coolant distribution units, and direct-to-chip liquid cooling infrastructure before Blackwell was announced. No retrofitting required.
Each GB300 NVL72 rack draws up to 155kW. At scale, a 64-rack cluster needs nearly 10MW of contiguous power. Q01 delivers 600kW+ per cabinet position and 142MW of total campus capacity — enough for the largest training clusters in production today.
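The cluster-level power figure above follows directly from the per-rack draw. A minimal sketch of that arithmetic, using only the numbers cited in the text:

```python
# Illustrative power-budget math for a GB300 NVL72 deployment.
# Figures from the text: up to 155 kW per rack, 64-rack cluster.
RACK_POWER_KW = 155
RACK_COUNT = 64

cluster_mw = RACK_POWER_KW * RACK_COUNT / 1000  # kW -> MW
print(f"{RACK_COUNT} racks x {RACK_POWER_KW} kW = {cluster_mw:.2f} MW")
# 64 x 155 kW = 9.92 MW, i.e. "nearly 10MW" of contiguous power
```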
A fully loaded GB300 NVL72 rack weighs over 3,000 pounds. Q01's reinforced concrete floors were designed for this from the start — no structural upgrades needed, no load-bearing compromises.
Canada's cold climate delivers free cooling for up to 80% of the year. For GB300 clusters generating hundreds of kilowatts of heat per rack, this translates directly to greater efficiency and a PUE under 1.2.
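To make the PUE claim concrete: PUE (power usage effectiveness) is total facility power divided by IT power, so a PUE of 1.2 bounds the cooling and overhead draw for a given cluster load. A sketch, reusing the ~10MW cluster figure from above (the exact load is an assumption for illustration):

```python
# PUE = total facility power / IT power.
# What "PUE under 1.2" means for a ~10 MW GB300 cluster (assumed load).
it_load_mw = 9.92   # 64 racks at 155 kW, from the text
pue = 1.2           # upper bound cited for Q01

total_mw = it_load_mw * pue
overhead_mw = total_mw - it_load_mw  # cooling, distribution, etc.
print(f"Total draw: {total_mw:.2f} MW, overhead: {overhead_mw:.2f} MW")
# At PUE 1.2, under 2 MW of overhead on ~10 MW of IT load
```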
GB300 NVL72 uses NVLink 5 for intra-rack communication at 130 TB/s aggregate bandwidth. For scale-out across racks, Q01 supports both NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet at 800 Gb/s per GPU.
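The gap between intra-rack and scale-out bandwidth is worth making explicit. Dividing the figures quoted above per GPU (a back-of-envelope sketch, not an NVIDIA spec sheet):

```python
# Per-GPU bandwidth, intra-rack NVLink vs scale-out fabric,
# derived from the figures in the text.
GPUS_PER_RACK = 72
NVLINK_AGGREGATE_TBPS = 130   # TB/s across all 72 GPUs
SCALE_OUT_GBITS = 800         # Gb/s per GPU on X800 fabrics

nvlink_per_gpu = NVLINK_AGGREGATE_TBPS / GPUS_PER_RACK  # ~1.8 TB/s
scale_out_per_gpu = SCALE_OUT_GBITS / 8 / 1000          # Gb/s -> TB/s
print(f"NVLink per GPU: ~{nvlink_per_gpu:.1f} TB/s")
print(f"Scale-out per GPU: {scale_out_per_gpu:.1f} TB/s")
```

Intra-rack NVLink is roughly an order of magnitude faster per GPU than the scale-out fabric, which is why the 72-GPU NVLink domain matters for large-model training.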
Q01's phased design means you can start with a single rack and expand to multi-megawatt clusters without changing facilities. Each phase adds ~39,000 sq. ft. of white space, pre-plumbed for liquid cooling and pre-wired for high-density power delivery.
QScale has already delivered large-scale AI infrastructure in partnership with HPE — on time, on budget, with a PUE under 1.2 and 20%+ energy savings. The same operational rigor applies to every GB300 deployment at Q01.
Our team is ready to scope your Blackwell cluster requirements.
142MW of secured power, modular phases, and 600kW+ rack density — scale from first deployment to full production.
Renewable hydroelectricity, free cooling 80% of the year, and waste heat recovered for local agriculture.
Our 142MW AI colocation campus in Quebec City — purpose-built for high-density liquid-cooled workloads.