IT Brief Australia - Technology news for CIOs & IT decision-makers
Dell adds AMD Instinct MI350P support for AI servers

Fri, 8th May 2026
Joseph Gabriel Lagonsin, News Editor

Dell is adding support for AMD Instinct MI350P PCIe GPUs to its PowerEdge XE7745 and R7725 servers, extending the Dell AI Platform with AMD for on-premises AI deployments.

Support for the new GPUs is due in the summer. The systems will use standard air-cooled designs, allowing installation without data centre redesign. Dell is also introducing a modular architecture to the Dell AI Platform with AMD, enabling customers to expand compute and GPU density over time without changing the overall system design.

The announcement is aimed at businesses running generative and agentic AI workloads in their own infrastructure rather than in public clouds. The updated platform is intended to help organisations move AI projects from pilot programmes into production.

PowerEdge servers equipped with AMD Instinct MI350P PCIe GPUs will offer up to 4,600 peak teraflops using MXFP4 and 144GB of HBM3e memory. Dell described that memory capacity as the highest currently available in a PCIe card accelerator.

The hardware is aimed at tasks including small, medium and large model inference, retrieval-augmented generation pipelines, and agentic AI. AMD's software stack supports frameworks including PyTorch, TensorFlow and vLLM.

Modular design

The revised Dell AI Platform with AMD uses a modular design, allowing organisations to start with a smaller configuration and add resources later. It uses AMD Enterprise AI Suite, AMD ROCm and AMD Inference Server across training, fine-tuning, inference and agentic workflows in an on-premises environment.

The focus on on-premises AI reflects sustained demand from companies seeking tighter control over data, security and infrastructure costs as they test and deploy large language model applications. Suppliers across the server market have been reshaping product lines to meet that demand, particularly where customers want to use standard rack systems rather than specialised liquid-cooled installations.

Dell also highlighted a higher-end option for more demanding workloads: the PowerEdge XE9785, which supports AMD MI355X GPUs and EPYC CPUs for foundation model development and large-scale inference.

The latest changes deepen ties between Dell and AMD in AI infrastructure, where both companies are seeking a larger share of enterprise spending as businesses weigh alternatives to systems built around rival chip suppliers. PCIe-based accelerators are of particular interest to customers looking for simpler deployment in existing server estates.

By focusing on PCIe GPUs in air-cooled systems, Dell is positioning the update around ease of installation in established data centres. Customers using the PowerEdge XE7745 and R7725 will be able to add the new accelerators as drop-in components rather than undertake broader facility upgrades.

That could make the offering relevant for projects where infrastructure constraints matter as much as raw chip performance. Memory capacity is also emerging as a key factor in model inference and agentic workloads, especially for businesses seeking to run larger models locally or support more users on shared systems.

The modular architecture means organisations will be able to expand compute and GPU density over time without rearchitecting their deployments.