HPE unveils AI cloud designed for Large Language Models
Hewlett Packard Enterprise expands its HPE GreenLake portfolio to enter the AI cloud market, as announced at HPE Discover 2023.
The HPE GreenLake portfolio will offer large language models that any enterprise, from startups to Fortune 500 companies, can access on-demand in a multi-tenant supercomputing cloud service.
With the introduction of HPE GreenLake for Large Language Models (LLMs), enterprises can privately train, tune, and deploy large-scale AI using a sustainable supercomputing platform that combines HPE's AI software and supercomputers.
HPE GreenLake for LLMs will be delivered with HPE's first partner, Aleph Alpha, a German AI startup, to provide users with a field-proven, ready-to-use LLM to power use cases requiring text and image processing and analysis.
HPE GreenLake for LLMs is the first in a series of industry and domain-specific AI applications that HPE plans to launch in the future. These applications will include support for climate modelling, healthcare and life sciences, financial services, manufacturing, and transportation.
Antonio Neri, President and CEO at HPE, says: "We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud."
"HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE's proven, sustainable supercomputers."
"Now, organisations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models at scale and responsibly," says Neri.
Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture designed to run a single large-scale AI training and simulation workload at full computing capacity. The offering will simultaneously support AI and HPC jobs on hundreds or thousands of CPUs or GPUs.
HPE GreenLake for LLMs will include access to Luminous, a pre-trained large language model from Aleph Alpha, offered in multiple languages, including English, French, German, Italian and Spanish.
The LLM allows customers to leverage their own data to train and fine-tune a customised model and gain real-time insights based on their proprietary knowledge.
Moreover, it empowers enterprises to build and market various AI applications, integrating them into their workflows to unlock business- and research-driven value.
Jonas Andrulis, Founder and CEO of Aleph Alpha, says: "By using HPE's supercomputers and AI software, we efficiently and quickly trained Luminous, a large language model for critical businesses such as banks, hospitals, and law firms to use as a digital assistant to speed up decision-making and save time and resources."
"We are proud to be a launch partner on HPE GreenLake for Large Language Models, and we look forward to expanding our collaboration with HPE to extend Luminous to the cloud and offer it as a service to our end customers to fuel new applications for business and research initiatives."
HPE GreenLake for LLMs will be available on-demand, running on HPE Cray XD supercomputers. This removes the need for customers to purchase and manage a supercomputer of their own, which is typically costly and complex and requires specialised expertise.
The offering leverages the HPE Cray Programming Environment, a fully integrated software suite to optimise HPC and AI applications, with a complete set of tools for developing, porting, debugging, and tuning code.
In addition, the supercomputing platform provides support for HPE's AI/ML software which includes the HPE Machine Learning Development Environment to train large-scale models rapidly and HPE Machine Learning Data Management Software to integrate, track, and audit data with reproducible AI capabilities to generate trustworthy and accurate models.
HPE GreenLake for LLMs will run in colocation facilities, starting with QScale in North America, whose purpose-built design supports the scale and capacity of supercomputing with nearly 100% renewable energy.
HPE is now accepting orders for HPE GreenLake for LLMs and expects availability by the end of calendar year 2023, starting in North America, with availability in Europe expected to follow early next year.
In addition to introducing HPE GreenLake for LLMs, HPE announced an expansion to its AI inferencing compute solutions to accelerate time-to-value for various industries, including retail, hospitality, manufacturing, media and entertainment.
These systems have been tuned to target workloads at the edge and in the data centre, such as Computer Vision at the Edge, Generative Visual AI and Natural Language Processing AI.
These AI solutions are based on the new HPE ProLiant Gen11 servers, which have been purpose-built to integrate advanced GPU acceleration critical for AI performance. HPE ProLiant DL380a and DL320 Gen11 servers boost AI inference performance by more than 5X over previous models.