IT Brief Australia - Technology news for CIOs & IT decision-makers

AMD unveils Helios rack built for Meta’s AI data centre vision

Wed, 15th Oct 2025

Meta has introduced a new Open Rack Wide (ORW) form factor for artificial intelligence infrastructure, with AMD showcasing its Helios rack-scale platform built to this specification.

The Helios AI rack is the physical realisation of Meta's open-standards vision and has been designed to deliver high performance for frontier AI and high-performance computing (HPC) workloads.

The system uses AMD Instinct MI450 Series GPUs and integrates an array of open technologies to meet the requirements of modern data centres.

Open standards focus

The Open Rack Wide (ORW) specification, developed by Meta and contributed to the Open Compute Project (OCP), outlines a double-wide rack optimised for power delivery, cooling, and ease of service. By aligning with this form factor, Helios extends AMD's open hardware approach from component level to complete rack infrastructure, with an emphasis on interoperability and scalability.

Helios delivers up to 1.4 exaFLOPS of FP8 performance and 31 TB of HBM4 memory in a single rack. AMD states this makes it suitable for today's trillion-parameter AI models and exascale-class HPC deployments.
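To put the memory figure in context, a rough back-of-envelope check (an illustrative sketch, not an AMD sizing guide; it counts model weights only and ignores KV caches, activations, and optimiser state) shows why 31 TB of HBM4 comfortably holds a trillion-parameter model's weights at common precisions:

```python
# Back-of-envelope: does a trillion-parameter model's weight footprint
# fit in one Helios rack's stated 31 TB of HBM4?
# Assumption: weights only; runtime state (KV cache, activations) excluded.

HBM_PER_RACK_TB = 31              # stated Helios rack capacity
PARAMS = 1_000_000_000_000        # one trillion parameters

def weights_tb(params: int, bytes_per_param: int) -> float:
    """Weight footprint in terabytes (1 TB = 1e12 bytes)."""
    return params * bytes_per_param / 1e12

for label, nbytes in [("FP8", 1), ("FP16/BF16", 2), ("FP32", 4)]:
    tb = weights_tb(PARAMS, nbytes)
    print(f"{label}: {tb:.0f} TB of weights; fits in rack: {tb <= HBM_PER_RACK_TB}")
```

Even at full FP32 precision the weights alone occupy only about 4 TB, leaving substantial headroom for the runtime state that dominates real deployments.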

As the first AMD reference design at this scale, it is positioned to address the requirements for power, cooling, and multi-vendor compatibility in large-scale AI data centres.

Deployable systems

"Open collaboration is key to scaling AI efficiently," said Forrest Norrod, Executive Vice President and General Manager, Data Centre Solutions Group, AMD. "With 'Helios,' we're turning open standards into real, deployable systems - combining AMD Instinct GPUs, EPYC CPUs, and open fabrics to give the industry a flexible, high-performance platform built for the next generation of AI workloads."

Helios incorporates a number of open compute standards, including OCP DC-MHS, UALink, and Ultra Ethernet Consortium (UEC) architectures. These support both scale-up and scale-out fabrics, allowing for greater flexibility as workloads and data centre requirements change.

The rack layout includes quick-disconnect liquid cooling, a double-wide physical configuration for improved access and service, and Ethernet-based multi-path networking for system reliability.

Industry adoption and collaboration

The reference design approach means that original equipment manufacturers (OEMs), original design manufacturers (ODMs), and hyperscale cloud operators can use Helios as a starting point for their own deployments.

This is intended to reduce integration time, enhance interoperability among different vendors, and enable more rapid scaling of AI and HPC infrastructure. Helios' development is a result of collaboration within the OCP community, reflecting efforts to standardise and open up the market for large-scale AI computing systems.

According to AMD, the ORW specification and Helios platform represent a strategic step towards a standardised, open AI infrastructure. The flexibility provided by open standards is seen as critical for enterprises seeking to move away from proprietary systems that can restrict integration and future upgrades.

Technical highlights

The Helios system utilises AMD Instinct GPUs, EPYC CPUs, and AMD Pensando networking hardware to provide a platform that can be extended for different scales and requirements.

The platform's liquid cooling and wide form factor are aimed at sustaining performance under the high thermal loads typical of AI workloads. By supporting open fabrics and integrating with industry-standard interfaces, the system is designed to address the interoperability concerns cited by cloud and hyperscale operators.

Meta's contribution of the ORW specification to OCP reflects its commitment to advancing open standards in data centre design. By building Helios as a deployment-ready system on these standards, AMD aims to provide a reference point for both hyperscalers and enterprise customers in the rapidly expanding AI infrastructure space.

The debut of the Helios platform marks the continuation of a hardware approach that begins with open silicon and extends through to system and rack integration, within an OCP-compliant framework. This approach is shared across the OCP ecosystem, as AMD and other contributors seek to enable broad industrial adoption of scalable, efficient, and standards-based AI infrastructure.
