Meta and Oracle choose NVIDIA Spectrum-X for AI data centres

Meta and Oracle are upgrading their AI data centres with NVIDIA's Spectrum-X Ethernet networking switches, technology built to handle the growing demands of large-scale AI systems. Both companies are adopting Spectrum-X as part of an open networking framework designed to improve AI training efficiency and accelerate deployment across massive compute clusters.
Jensen Huang, NVIDIA's founder and CEO, said trillion-parameter models are transforming data centres into "giga-scale AI factories," adding that Spectrum-X acts as the "nervous system" connecting millions of GPUs to train the largest models ever built.
Oracle plans to use Spectrum-X Ethernet with its Vera Rubin architecture to build large-scale AI factories. Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure, said the new setup will allow the company to connect millions of GPUs more efficiently, helping customers train and deploy new AI models faster.
Meta, meanwhile, is expanding its AI infrastructure by integrating Spectrum-X Ethernet switches into the Facebook Open Switching System (FBOSS), its in-house platform for managing network switches at scale. According to Gaya Nagarajan, Meta's vice president of networking engineering, the company's next-generation network must be open and efficient to support ever-larger AI models and deliver services to billions of users.
Building flexible AI systems
According to Joe DeLaere, who leads NVIDIA's accelerated computing solutions portfolio for the data centre, flexibility is key as data centres grow more complex. He explained that NVIDIA's MGX system offers a modular, building-block design that lets partners combine different CPUs, GPUs, storage, and networking components as needed.
The system also promotes interoperability, allowing organisations to use the same design across multiple generations of hardware. "It gives flexibility, faster time to market, and future readiness," DeLaere told the media.
As AI models grow larger, power efficiency has become a central challenge for data centres. DeLaere said NVIDIA is working "from chip to grid" to improve energy use and scalability, collaborating closely with power and cooling vendors to maximise performance per watt.
One example is the shift to 800-volt DC power delivery, which reduces heat loss and improves efficiency. The company is also introducing power-smoothing technology to reduce spikes on the electrical grid, an approach that can cut peak power needs by up to 30 per cent, allowing more compute capacity within the same footprint.
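A rough back-of-the-envelope sketch shows why trimming peak demand frees up capacity. The 30 per cent figure comes from the article; the grid budget and per-rack draw below are purely illustrative assumptions, not NVIDIA data:

```python
# Back-of-the-envelope: how a 30% cut in peak power translates into
# extra compute within a fixed grid allocation. Only the 30% figure
# is from the article; the other inputs are illustrative assumptions.

GRID_BUDGET_MW = 100.0    # assumed facility power allocation
PEAK_PER_RACK_KW = 150.0  # assumed peak draw of one AI rack

racks_without_smoothing = GRID_BUDGET_MW * 1000 / PEAK_PER_RACK_KW
# Power smoothing trims peak demand by up to 30%, so each rack
# reserves only 70% of its nominal peak from the grid.
racks_with_smoothing = GRID_BUDGET_MW * 1000 / (PEAK_PER_RACK_KW * 0.70)

print(round(racks_without_smoothing))  # 667 racks
print(round(racks_with_smoothing))    # 952 racks, roughly 43% more
```

The gain is independent of the assumed numbers: a 30 per cent lower peak reservation per rack allows about 1/0.7 ≈ 1.43 times as many racks behind the same grid connection.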
Scaling up, out, and across
NVIDIA's MGX system also plays a role in how data centres are scaled. Gilad Shainer, the company's senior vice president of networking, told the media that MGX racks host both compute and switching components, supporting NVLink for scale-up connectivity and Spectrum-X Ethernet for scale-out growth.
He added that MGX can connect multiple AI data centres together as a unified system, which is what companies like Meta need to support massive distributed AI training operations. Depending on distance, they can link sites via dark fibre or additional MGX-based switches, enabling high-speed connections across regions.
Meta's adoption of Spectrum-X reflects the growing importance of open networking. Shainer said the company will use FBOSS as its network operating system but noted that Spectrum-X supports several others, including Cumulus, SONiC, and Cisco's NOS through partnerships. This flexibility allows hyperscalers and enterprises to standardise their infrastructure using the systems that best fit their environments.
Expanding the AI ecosystem
NVIDIA sees Spectrum-X as a way to make AI infrastructure more efficient and accessible across different scales. Shainer said the Ethernet platform was designed specifically for AI workloads like training and inference, offering up to 95 per cent effective bandwidth and outperforming traditional Ethernet by a wide margin.
He added that NVIDIA's partnerships with companies such as Cisco, xAI, Meta, and Oracle Cloud Infrastructure are helping to bring Spectrum-X to a broader range of environments, from hyperscalers to enterprises.
Preparing for Vera Rubin and beyond
DeLaere said NVIDIA's upcoming Vera Rubin architecture is expected to be commercially available in the second half of 2026, with the Rubin CPX product arriving by year's end. Both will work alongside Spectrum-X networking and MGX systems to support the next generation of AI factories.
He also clarified that Spectrum-X and XGS share the same core hardware but use different algorithms for different distances: Spectrum-X for communication within data centres and XGS for communication between them. This approach minimises latency and allows multiple sites to operate together as a single large AI supercomputer.
Collaborating across the power chain
To support the 800-volt DC transition, NVIDIA is working with partners from the chip level to the grid. The company is collaborating with Onsemi and Infineon on power components, with Delta, Flex, and Lite-On at the rack level, and with Schneider Electric and Siemens on data centre designs. A technical white paper detailing this approach will be released at the OCP Summit.
DeLaere described this as a "holistic design from silicon to power delivery," ensuring all systems work seamlessly together in the high-density AI environments that companies like Meta and Oracle operate.
Performance advantages for hyperscalers
Spectrum-X Ethernet was built specifically for distributed computing and AI workloads. Shainer said it provides adaptive routing and telemetry-based congestion control to eliminate network hotspots and deliver steady performance. These features enable higher training and inference speeds while allowing multiple workloads to run concurrently without interference.
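The idea behind telemetry-driven adaptive routing can be sketched in a few lines: traffic is steered onto whichever path is currently least loaded, so no single link becomes a hotspot. This is a conceptual toy, not NVIDIA's Spectrum-X algorithm:

```python
# Toy illustration of telemetry-driven adaptive routing: each unit of
# traffic is steered onto the currently least-loaded path, spreading
# load so no single link becomes a hotspot. A conceptual sketch only,
# not NVIDIA's actual Spectrum-X implementation.

def route(flow_sizes, num_paths):
    """Assign each flow to the least-loaded path seen so far."""
    loads = [0] * num_paths
    for size in flow_sizes:
        target = loads.index(min(loads))  # "telemetry": current path loads
        loads[target] += size
    return loads

# Eight flows spread over four equal-cost paths end up nearly balanced.
print(route([4, 3, 3, 2, 2, 2, 1, 1], 4))  # -> [5, 5, 4, 4]
```

With static hashing, several large flows can collide on one path; the load-aware choice above keeps the per-path totals nearly even, which is the effect adaptive routing aims for at scale.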
He added that Spectrum-X is the only Ethernet technology proven to scale at extreme levels, helping organisations get the best performance and return on their GPU investments. For hyperscalers such as Meta, that scalability helps manage growing AI training demands and keep infrastructure efficient.
Hardware and software working together
While NVIDIA's focus is often on hardware, DeLaere said software optimisation is equally important. The company continues to improve performance through co-design, aligning hardware and software development to maximise efficiency for AI systems.
NVIDIA is investing in FP4 kernels, frameworks such as Dynamo and TensorRT-LLM, and algorithms like speculative decoding to improve throughput and AI model performance. These updates, he said, ensure that systems like Blackwell continue to deliver better results over time for hyperscalers such as Meta that rely on consistent AI performance.
Networking for the trillion-parameter era
The Spectrum-X platform, which includes Ethernet switches and SuperNICs, is NVIDIA's first Ethernet system purpose-built for AI workloads. It is designed to link millions of GPUs efficiently while maintaining predictable performance across AI data centres.
With congestion-control technology achieving up to 95 per cent data throughput, Spectrum-X marks a major leap over standard Ethernet, which typically reaches only about 60 per cent due to flow collisions. Its XGS technology also supports long-distance AI data centre links, connecting facilities across regions into unified "AI super factories."
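The gap between those two utilisation figures is easy to quantify. Taking the article's 95 per cent and 60 per cent effective-bandwidth numbers at face value (the 800 Gb/s link speed below is an illustrative assumption):

```python
# Rough comparison of usable throughput on a single link, using the
# article's figures: ~95% effective bandwidth for Spectrum-X versus
# ~60% for standard Ethernet under flow collisions. The 800 Gb/s
# link speed is an illustrative assumption.

LINK_GBPS = 800.0

spectrum_x_effective = LINK_GBPS * 0.95  # 760.0 Gb/s usable
standard_effective = LINK_GBPS * 0.60    # 480.0 Gb/s usable

# Ratio is independent of the assumed link speed: ~1.58x more
# usable bandwidth per link.
print(spectrum_x_effective / standard_effective)
```

Because the ratio holds regardless of link speed, at cluster scale the same fabric carries roughly 58 per cent more useful traffic, which is where the training-speed claims come from.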
By tying together NVIDIA's full stack of GPUs, CPUs, NVLink, and software, Spectrum-X provides the consistent performance needed to support trillion-parameter models and the next wave of generative AI workloads.
(Photo by Nvidia)
See also: OpenAI and Nvidia plan $100B chip deal for AI future

The post Meta and Oracle choose NVIDIA Spectrum-X for AI data centres appeared first on AI News.