MSI Unveils Next-Gen Server Solutions to Power AI and Data Centers


MSI, a leading global provider of high-performance server solutions, showcases its next-generation computing and AI innovations at SuperComputing 2025 (SC25), Booth #205. MSI introduces its ORv3 rack solution and a comprehensive portfolio of power-efficient, multi-node, AI-optimized platforms built on NVIDIA MGX and NVIDIA DGX Station, designed for high-density environments and mission-critical workloads. These modular, scalable, rack-scale solutions are engineered for maximum performance, energy efficiency, and flexibility, enabling modern and next-generation data centers to accelerate deployment and scale with ease.

“Through close collaboration with industry leaders AMD, Intel, and NVIDIA, MSI continues to drive innovation across the data center ecosystem,” said Danny Hsu, General Manager of Enterprise Platform Solutions at MSI. “Our goal is to deliver scalable, energy-efficient infrastructure that empowers customers to accelerate AI development and next-generation computing with performance, reliability, and flexibility at scale.”

Scaling Data Center Performance — From DC-MHS Architecture to Rack Solutions

MSI’s data center building blocks are developed on the DC-MHS (Datacenter Modular Hardware System) architecture, spanning host processor modules, Core Compute servers, Open Compute servers, and AI computing servers. This modular design standardizes hardware components, BMC architecture, and form factors, simplifying operations and reducing deployment complexity. With EVAC CPU heatsink support, data centers can maintain thermal efficiency while rapidly adapting to the growing demands of AI, analytics, and compute-intensive workloads. MSI’s modular approach empowers operators to deploy next-generation infrastructure faster and achieve time-to-market value.

ORv3 Rack — Designed for Next-Generation Data Centers

MSI’s ORv3 21″ 44OU rack is a fully validated, integrated solution that combines power, thermal, and networking systems to streamline engineering and accelerate deployment in hyperscale environments. Featuring sixteen CD281-S4051-X2 2OU DC-MHS servers, the rack uses centralized 48V power shelves and front-facing I/O, maximizing space for CPUs, memory, and storage while maintaining optimal airflow and simplifying maintenance.

Single-Socket AMD EPYC™ 9005 Server in ORv3 Architecture:

  • CD281-S4051-X2: 2OU 2-node server with 12 DDR5 DIMM slots and 12 E3.S 1T PCIe 5.0 x4 NVMe bays per node

DC-MHS Core Compute Servers — High-Density, Scalable Data Center Solutions

MSI’s Core Compute platforms maximize rack density and resource efficiency by integrating multiple compute nodes into a single high-density chassis. Each node is powered by either AMD EPYC 9005 Series processors (up to 500W TDP) or Intel® Xeon® 6 processors (up to 500W/350W TDP). Available in 2U 4-node and 2U 2-node configurations, these platforms deliver exceptional thermal performance and scalability for today’s data centers.

Single-Socket AMD EPYC 9005 Servers

  • CD270-S4051-X4: 2U 4-node server with 12 DDR5 DIMM slots and 3 PCIe 5.0 x4 U.2 NVMe bays per node.
  • CD270-S4051-X2: 2U 2-node server with 12 DDR5 DIMM slots and 6 PCIe 5.0 x4 U.2 NVMe bays per node.

Single-Socket Intel Xeon 6 Servers

  • CD270-S3061-X4: 2U 4-node server with 16 DDR5 DIMM slots and 3 PCIe 5.0 x4 U.2 NVMe bays per node.
  • CD270-S3071-X2: 2U 2-node server with 12 DDR5 DIMM slots and 6 PCIe 5.0 x4 U.2 NVMe bays per node.

DC-MHS Enterprise Servers — High-Efficiency Platforms for Cloud Workloads

Built on the DC-MHS architecture, MSI’s enterprise server platforms deliver exceptional memory capacity, extensive I/O options, and high-TDP CPU compatibility to handle demanding cloud, virtualization, and storage applications. Supporting both AMD EPYC 9005 Series and Intel Xeon 6 processors, these modular solutions provide flexible performance for diverse data center workloads.

Single-Socket AMD EPYC 9005 Servers

  • CX271-S4056: 2U server with 24 DDR5 DIMM slots and configurations of 8 or 24 PCIe 5.0 U.2 NVMe bays
  • CX171-S4056: 1U server with 24 DDR5 DIMM slots and 12 PCIe 5.0 U.2 NVMe bays.

Dual-Socket Intel Xeon 6 Servers

  • CX270-S5062: 2U server with 32 DDR5 DIMM slots and configurations of 8 or 24 PCIe 5.0 U.2 NVMe bays.
  • CX170-S5062: 1U server with 32 DDR5 DIMM slots and 12 PCIe 5.0 U.2 NVMe bays.

Next-Generation AI Solutions Accelerated by NVIDIA

MSI introduces a new era of AI computing solutions, built on the NVIDIA MGX and NVIDIA DGX Station reference architectures. The lineup includes AI servers and an AI station supporting the latest NVIDIA Hopper GPUs, NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, and NVIDIA Blackwell Ultra GPUs, engineered to meet diverse deployment needs, from large-scale data center training to edge inferencing and AI development on the desktop.

MSI’s AI servers are purpose-built for high-performance computing and AI workloads. The 4U AI platforms offer flexible configurations with both Intel Xeon and AMD EPYC processors, supporting up to 600W GPUs for maximum performance. These platforms are ideal for large language models (LLMs), deep learning training, and NVIDIA Omniverse workloads.

AI Servers

  • CG481-S6053: Dual AMD EPYC 9005 CPUs, eight PCIe 5.0 x16 FHFL dual-width GPU slots, 24 DDR5 DIMMs, eight 2.5-inch U.2 NVMe bays, and eight 400G Ethernet ports powered by NVIDIA ConnectX-8 SuperNICs.
  • CG480-S5063: Dual Intel Xeon 6 CPUs, eight PCIe 5.0 x16 FHFL dual-width GPU slots, 32 DDR5 DIMMs, and twenty PCIe 5.0 E1.S NVMe bays.
  • CG290-S3063: 2U AI server powered by a single Intel Xeon 6 CPU with 16 DDR5 DIMMs and 4 FHFL dual-width GPU slots (up to 600W each), ideal for edge computing and small-scale inference deployments.

AI Station

For developers demanding data center-level performance in a workstation form factor, the MSI AI Station CT60-S8060 brings the power of the NVIDIA DGX Station to the desktop. Built with the NVIDIA GB300 Grace Blackwell Ultra Desktop Superchip and up to 784GB of unified memory, it delivers unprecedented compute performance for developing, training, and deploying large-scale AI models, all from the deskside.

Supporting Resources:

Watch MSI’s 4U & 2U NVIDIA MGX AI platforms, built on NVIDIA accelerated computing to deliver the performance for tomorrow’s AI workloads.

Discover how MSI’s OCP ORv3-compatible nodes deliver optimized performance for hyperscale cloud deployments.

