NVIDIA Unveils Spectrum-XGS to Unite Data Centers for Giga-Scale AI

NVIDIA has unveiled Spectrum-XGS Ethernet, a new technology designed to connect dispersed data centers into unified, giga-scale AI "super-factories." The launch adds a new dimension to NVIDIA's Spectrum-X Ethernet platform, offering what the company describes as the third pillar of AI computing: scale-across infrastructure.

Unlike traditional approaches that focus solely on scaling up within a single system or scaling out across servers in one facility, Spectrum-XGS extends AI clusters across multiple, geographically distributed data centers to function as one.

The announcement comes at a time when the demand for AI infrastructure is rapidly outpacing the capacity of individual facilities. Traditional Ethernet networking equipment, with its latency and performance variability, is often inadequate for the communication demands of advanced AI workloads. Spectrum-XGS Ethernet addresses these challenges by creating high-speed, low-latency links between far-flung facilities, effectively turning them into a single, cohesive AI factory.

NVIDIA founder and CEO Jensen Huang framed the development in sweeping terms, calling the rise of giant-scale AI factories “the essential infrastructure” of what he termed the AI industrial revolution. “We connect data centers across cities, countries, and continents into massive, giga-scale AI super-factories by adding scale-across to scale-up and scale-out capabilities with NVIDIA Spectrum-XGS Ethernet,” he said.

The technology integrates tightly with the existing Spectrum-X platform, using algorithms that dynamically adjust to the physical distance between sites. This allows it to manage long-distance congestion, reduce jitter, and provide predictable performance. NVIDIA says the system nearly doubles the performance of its Collective Communications Library by optimizing multi-GPU and multi-node communication across locations. The result is that data centers, whether separated by a few kilometers or entire continents, can operate as though they were one.
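NVIDIA has not published the internals of these distance-adaptive algorithms, but the general idea of distance-aware pacing can be sketched: size the amount of in-flight data to the link's bandwidth-delay product (BDP), using a smoothed round-trip-time estimate so the send window grows automatically as site-to-site distance grows. The sketch below is purely illustrative; the class and function names are hypothetical, not part of any NVIDIA API.

```python
# Illustrative sketch only -- Spectrum-XGS's actual congestion-control
# algorithms are proprietary. This models distance-aware pacing in the
# abstract: keep a long-haul link full by sizing the in-flight window
# to the bandwidth-delay product (BDP).

def bdp_window_bytes(link_gbps: float, rtt_ms: float) -> int:
    """Bytes that must be in flight to keep a link of the given speed
    busy for one round-trip time (the bandwidth-delay product)."""
    bits_in_flight = link_gbps * 1e9 * (rtt_ms / 1e3)
    return int(bits_in_flight / 8)

class DistanceAwarePacer:
    """Tracks a smoothed RTT (EWMA, as in classic TCP SRTT estimation)
    and derives a send window from it, so the window adapts as the
    physical distance between sites changes."""

    def __init__(self, link_gbps: float, alpha: float = 0.125):
        self.link_gbps = link_gbps
        self.alpha = alpha      # EWMA weight for new RTT samples
        self.srtt_ms = None     # smoothed RTT, None until first sample

    def observe_rtt(self, rtt_ms: float) -> None:
        if self.srtt_ms is None:
            self.srtt_ms = rtt_ms
        else:
            self.srtt_ms = (1 - self.alpha) * self.srtt_ms + self.alpha * rtt_ms

    def window_bytes(self) -> int:
        assert self.srtt_ms is not None, "need at least one RTT sample"
        return bdp_window_bytes(self.link_gbps, self.srtt_ms)
```

The numbers make the scale-across challenge concrete: a 400 Gb/s link with a 1 ms metro RTT needs about 50 MB in flight, while the same link stretched to a 60 ms continental RTT needs roughly 3 GB, which is why congestion control tuned for a single facility breaks down over distance.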

AI Cloud Provider CoreWeave

CoreWeave, a cloud provider specializing in high-performance computing and one of the earliest adopters of NVIDIA infrastructure, will be among the first to deploy Spectrum-XGS Ethernet to link its facilities. "We can integrate our data centers into a single, unified supercomputer with NVIDIA Spectrum-XGS, providing our customers with giga-scale AI that will speed up innovations in every industry," said Peter Salanki, CoreWeave's co-founder and chief technology officer.

For multi-tenant, hyperscale AI environments – including what NVIDIA calls the world’s largest AI supercomputers – Spectrum-X Ethernet promises 1.6 times the bandwidth density of conventional Ethernet. The solution pairs NVIDIA Spectrum-X switches with its ConnectX-8 SuperNICs to deliver ultra-low latency, scalable performance, and end-to-end telemetry capabilities.

This development follows a wave of networking innovations from NVIDIA, including its Quantum-X silicon photonics switches, designed to cut power use while enabling the interconnection of millions of GPUs across distributed locations. Spectrum-XGS builds on this momentum by offering enterprises, cloud providers, and AI specialists the ability to construct networks that scale globally without compromising performance.

Now commercially available as part of the NVIDIA Spectrum-X Ethernet platform, Spectrum-XGS Ethernet is positioned as a key enabler of the next phase of AI infrastructure, where multiple sites converge to form globally connected computing clusters.
