Red Hat & Run:ai team up to enhance AI operations efficiency
Red Hat is partnering with Run:ai, a leader in AI optimisation and orchestration. The collaboration will bring Run:ai's resource allocation capabilities to Red Hat OpenShift AI, streamlining AI operations and improving the efficiency of the underlying infrastructure so that enterprises can make the most of their AI resources. The combination supports both the human and the hardware side of AI workflows on a trusted MLOps platform designed for building, tuning, deploying and monitoring AI-enabled applications and models at scale.
AI workloads are data-intensive and run on GPUs, the core engines behind model training, inference, experimentation and more. These specialised processors are expensive to operate, however, particularly when spread across distributed training jobs and inference. To address this need for GPU optimisation, Red Hat and Run:ai have delivered Run:ai's certified OpenShift Operator on Red Hat OpenShift AI, which helps users scale and optimise AI workloads wherever they are located.
Run:ai's cloud-native compute orchestration platform on Red Hat OpenShift AI brings considerable improvements to GPU scheduling for AI workloads. A dedicated workload scheduler makes it easy to prioritise mission-critical workloads and ensure that adequate resources back them. The platform also offers fractional GPU and monitoring capabilities, dynamically allocating resources according to pre-set priorities and policies to improve overall infrastructure efficiency. Finally, it gives IT, data science and application development teams greater control and visibility over shared GPU infrastructure, simplifying access and resource allocation.
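As an illustrative sketch of what fractional GPU allocation looks like in practice, a Kubernetes pod might request half a GPU by annotation and hand scheduling to Run:ai. The annotation and scheduler names below follow Run:ai's publicly documented conventions, but the pod name and image are placeholders, and exact field values may vary by product version:

```yaml
# Hypothetical pod spec illustrating Run:ai fractional GPU allocation.
# Verify the annotation and scheduler names against your installed version.
apiVersion: v1
kind: Pod
metadata:
  name: inference-demo            # placeholder workload name
  annotations:
    gpu-fraction: "0.5"           # request half of a single GPU
spec:
  schedulerName: runai-scheduler  # delegate placement to the Run:ai scheduler
  containers:
    - name: model-server
      image: example.com/model-server:latest  # placeholder image
```

Fractional requests like this are what allow several inference workloads to share one physical GPU, which is the efficiency gain the platform's scheduler and policies are built around.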
The certified OpenShift Operator from Run:ai is available now. Red Hat and Run:ai intend to build on this collaboration, adding further integration capabilities for Run:ai on Red Hat OpenShift AI. The goal is a more seamless customer experience and a faster, more consistent path for moving AI models into production workflows.
Steven Huels, vice president and general manager of AI Business Unit, Red Hat, elaborated, "Increased adoption of AI and demand for GPUs require enterprises to optimise their AI platform to get the most out of their operations and infrastructure, no matter where they live on the hybrid cloud. Through our collaboration with Run:ai, we're enabling organisations to maximise AI workloads at scale without sacrificing the reliability of an AI/ML platform or valuable GPU resources, wherever needed."
Expressing his enthusiasm for the partnership, Omri Geller, CEO and founder of Run:ai, stated, "We are excited to partner with Red Hat OpenShift AI to enhance the power and potential of AI operations. By leveraging Red Hat OpenShift's MLOps strengths with Run:ai's expertise in AI infrastructure management, we are setting a new standard for enterprise AI, delivering seamless scalability and optimised resource management."