
Red Hat OpenShift AI 2.15 enhances hybrid cloud AI capabilities

Thu, 14th Nov 2024

Red Hat has announced the release of Red Hat OpenShift AI 2.15, an advanced platform aimed at enhancing AI scalability and adaptability across hybrid cloud environments.

The latest version of Red Hat OpenShift AI is designed to support enterprises in managing AI and machine learning workloads efficiently. It introduces features that accommodate the evolving demands of creating AI-enabled applications and maintaining operational consistency.

Joe Fernandes, Vice President and General Manager of the AI business unit at Red Hat, stated, "As enterprises explore the new world of capabilities offered by AI-enabled applications and workloads, we expect interest and demand for underlying platforms to increase as concrete strategies take shape. It's imperative for enterprises to see returns on these investments via a reliable, scalable and flexible AI platform that runs wherever their data lives across the hybrid cloud."

"The latest version of Red Hat OpenShift AI offers significant improvements in scalability, performance and operational efficiency while acting as a cornerstone for the overarching model lifecycle, making it possible for IT organisations to gain the benefits of a powerful AI platform while maintaining the ability to build, deploy and run on whatever environment their unique business needs dictate."

Red Hat OpenShift AI 2.15 focuses on the seamless integration and management of AI models. Amongst its features is a model registry currently available as a technology preview, offering a centralised location to organise, share, and manage AI models and associated metadata.
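
As a rough illustration of how such a registry might be used, the sketch below assumes the open source Kubeflow Model Registry Python client (the `model-registry` package) that underpins the feature; the server address, credentials and metadata values are placeholders, and exact keyword arguments may differ between client versions.

```python
# Hedged sketch: register a trained model and its metadata in a central
# model registry. Endpoint, author and model details are illustrative only.
from model_registry import ModelRegistry

registry = ModelRegistry(
    server_address="https://modelregistry.example.com",  # assumed endpoint
    port=443,
    author="data-scientist@example.com",
)

registered = registry.register_model(
    "fraud-detection",                         # registry name for the model
    "s3://models/fraud-detection/1.0.0",       # where the artefact lives
    version="1.0.0",
    model_format_name="onnx",
    model_format_version="1",
    description="Fraud detection classifier",
)
print(f"Registered model: {registered.name}")
```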

Data drift detection is another key feature, helping data scientists monitor model reliability by detecting when the live data feeding a deployed model diverges from the data it was trained on. This is critical for maintaining prediction accuracy and for addressing data mismatches as they occur.
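
The idea behind drift detection can be shown with a simple statistical check. The sketch below runs a two-sample Kolmogorov-Smirnov test on synthetic data; it is a generic illustration of the concept, not the mechanism OpenShift AI itself uses.

```python
# Conceptual sketch of data drift detection: compare the distribution of a
# live feature against its training-time distribution with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # seen at training time
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)      # shifted production data

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}) - consider retraining")
else:
    print("No significant drift detected")
```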

The platform also incorporates bias detection tools from the TrustyAI open source community. These tools are intended to ensure models are fair and unbiased, providing ongoing insights during real-world deployment scenarios.
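
One such group fairness metric is statistical parity difference, which the TrustyAI community implements. The snippet below computes it by hand on made-up data purely to illustrate the concept; it does not use the TrustyAI API, and the 0.1 threshold is a common rule of thumb rather than a product default.

```python
# Conceptual sketch of a group fairness check: statistical parity difference
# (SPD). Values near 0 suggest the model treats the two groups similarly.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # model decisions (1 = favourable)
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "b", "a"])  # protected attribute

favourable_rate_a = predictions[group == "a"].mean()
favourable_rate_b = predictions[group == "b"].mean()
spd = favourable_rate_b - favourable_rate_a

print(f"Statistical parity difference: {spd:+.3f}")
if abs(spd) > 0.1:  # illustrative fairness threshold
    print("Potential bias: favourable outcomes differ noticeably between groups")
```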

Additionally, Red Hat OpenShift AI 2.15 introduces model fine-tuning with low-rank adaptation (LoRA), a technique that updates only a small set of adapter weights rather than the full model. This allows AI workloads to scale more efficiently while reducing the costs associated with model training and deployment.
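
For context, the sketch below shows what LoRA fine-tuning looks like with the open source Hugging Face PEFT library rather than Red Hat's own tooling; the base model and hyperparameters are illustrative choices.

```python
# Minimal LoRA sketch: wrap a base model with low-rank adapters so only a
# small fraction of parameters is trained during fine-tuning.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```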

In support of generative AI needs, the new release integrates NVIDIA NIM for streamlined deployment processes. Justin Boitano, Vice President of Enterprise AI Software at NVIDIA, commented, "Enterprises are seeking streamlined solutions to rapidly deploy AI applications. The integration of NVIDIA NIM with Red Hat OpenShift AI 2.15 enhances full-stack performance and scalability across hybrid cloud environments, helping development and IT teams efficiently and securely manage and accelerate their generative AI deployments."

Red Hat OpenShift AI also extends support to AMD GPUs, allowing organisations to utilise AMD ROCm workbench images for training and serving models, thereby expanding hardware compatibility for AI workloads.

Enhanced model serving capabilities are included, such as the vLLM serving runtime for KServe, permitting flexible deployment of large language models (LLMs). KServe Modelcars adds support for Open Container Initiative (OCI) repositories for storing and versioning models, expanding both security and access options.
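
As a hedged sketch of what such a deployment could look like, the example below declares a KServe InferenceService that pairs a vLLM runtime with an OCI storage URI (the Modelcars pattern) and submits it via the Kubernetes Python client; the runtime name, namespace and image reference are assumptions, not values documented in the release.

```python
# Hedged sketch: serve an LLM through KServe with a vLLM runtime, pulling
# model weights from an OCI registry. Names and references are illustrative.
from kubernetes import client, config

inference_service = {
    "apiVersion": "serving.kserve.io/v1beta1",
    "kind": "InferenceService",
    "metadata": {"name": "demo-llm", "namespace": "demo"},
    "spec": {
        "predictor": {
            "model": {
                "modelFormat": {"name": "vLLM"},
                "runtime": "vllm-runtime",  # assumed serving runtime name
                "storageUri": "oci://registry.example.com/models/demo-llm:1.0",
            }
        }
    },
}

config.load_kube_config()  # or load_incluster_config() inside the cluster
client.CustomObjectsApi().create_namespaced_custom_object(
    group="serving.kserve.io",
    version="v1beta1",
    namespace="demo",
    plural="inferenceservices",
    body=inference_service,
)
```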

The release also brings improvements to AI training and experimentation, including advances in data science pipelines and more comprehensive experiment tracking. Among them is hyperparameter tuning with Ray Tune, designed to make predictive model training more efficient and accurate.
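
A minimal Ray Tune example, assuming Ray 2.x, gives a sense of the workflow: each trial reports a metric for a sampled configuration and the tuner returns the best one. The objective function here is a toy stand-in for a real training loop inside a data science pipeline.

```python
# Brief Ray Tune sketch: search a small hyperparameter space and report a
# metric per trial. The loss below is a surrogate for real validation loss.
from ray import train, tune

def objective(config):
    loss = (config["lr"] - 0.01) ** 2 + 0.1 / config["batch_size"]
    train.report({"loss": loss})

tuner = tune.Tuner(
    objective,
    param_space={
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([16, 32, 64]),
    },
    tune_config=tune.TuneConfig(metric="loss", mode="min", num_samples=20),
)
results = tuner.fit()
print(results.get_best_result().config)
```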

These updates aim to facilitate a seamless AI experience suitable for diverse cloud-native and hybrid cloud strategies, supporting organisations as they adapt to changing AI landscapes and demands.
