Intel unveils AI chatbot for Paris 2024 Olympic athletes
With the Paris 2024 Olympics set to commence in less than a week, Intel has announced its collaboration with the International Olympic Committee (IOC) to support the approximately 11,000 athletes expected to compete. This collaboration has resulted in the development of a chatbot named AthleteGPT, integrated into the Athlete365 platform.
AthleteGPT is designed to handle inquiries from athletes and provide on-demand information while they are at the Olympic Village in Paris. It aims to help athletes focus on their training and competitions by easing their navigation of the venue and ensuring compliance with the rules and guidelines. Under the hood, the chatbot is a generative AI retrieval-augmented generation (GenAI RAG) solution powered by Intel Gaudi accelerators and Intel Xeon processors.
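The RAG pattern behind a chatbot like this is straightforward: retrieve the documents most relevant to an athlete's question, then pass them to a language model as grounding context. The sketch below illustrates the idea in plain Python; the documents, function names, and word-overlap "retriever" are all invented stand-ins for illustration, not details of Intel's actual system.

```python
# Toy "knowledge base" of Olympic Village guidance (invented examples).
DOCUMENTS = [
    "The dining hall in the Olympic Village is open from 05:00 to 23:00.",
    "Athletes must wear accreditation badges in all competition venues.",
    "Training facility bookings are made through the Athlete365 app.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for real vector similarity search) and return the top-k matches."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with retrieved context; in a real
    pipeline this prompt is sent to an LLM for generation."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

query = "When is the dining hall open?"
print(build_prompt(query, retrieve(query, DOCUMENTS)))
```

Grounding the model's answer in retrieved documents, rather than relying on what it memorised during training, is what makes the approach suitable for venue-specific, frequently updated information like Village rules and schedules.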
Intel's executive vice president and general manager of the Data Center and Artificial Intelligence Group, Justin Hotard, stated, "Through our partnership with the International Olympic Committee, we are demonstrating our dedication to making AI accessible. We're fostering an open playing field that encourages innovation and creativity and enables developers and enterprises to build tailored AI solutions that drive tangible results. By embracing an open and collaborative ecosystem, Intel is transforming ways to help our athletes and pushing the boundaries of what's possible with our customers."
The AthleteGPT chatbot, embedded in the Athlete365 platform, represents a significant step in addressing challenges such as cost, scale, accuracy, development requirements, privacy, and security, which are typically associated with deploying GenAI solutions. By securely leveraging proprietary data, the RAG solution enhances the reliability and timeliness of AI outputs, which is essential in today's data-driven environment.
Intel's approach involves using AI platforms, open standards, and a comprehensive software and systems ecosystem to enable developers to build customised GenAI RAG solutions. This strategy illustrates Intel's commitment to providing open, robust, and composable multi-provider generative AI solutions.
Additionally, Intel has detailed the architecture behind the GenAI RAG solution, created with industry partners to offer an open-source, interoperable solution for easy deployment. Built on the Open Platform for Enterprise AI (OPEA) foundation, this GenAI turnkey solution provides a streamlined approach for enterprises deploying RAG solutions in their data centres. It is designed to be flexible and customisable, integrating components from a catalogue of offerings by multiple OEM systems and industry partners.
The GenAI turnkey solution incorporates OPEA-based microservice components into a scalable RAG solution that deploys seamlessly with orchestration frameworks like Kubernetes and Red Hat OpenShift. It also provides standardised APIs with security and system telemetry.
Intel's collaboration with the open-source framework PyTorch ensures nearly all large language model (LLM) development processes are supported by Intel Gaudi and Xeon technologies, simplifying work on Intel AI systems and platforms. Working with OPEA, Intel has developed an open software stack for RAG and LLM deployment, optimised for the GenAI turnkey solution and incorporating PyTorch, Hugging Face serving libraries, LangChain, and the Redis Vector database.
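The role a vector database such as Redis plays in this stack is nearest-neighbour search: documents are stored as embedding vectors, and a query embedding is matched against them by similarity. The following self-contained sketch shows the core operation with invented three-dimensional toy vectors; production pipelines use model-generated embeddings with hundreds of dimensions and an indexed database rather than a Python dictionary.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; a common similarity
    measure for comparing text embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy index mapping document labels to pre-computed embeddings
# (values invented for illustration).
INDEX = {
    "venue access rules": [0.9, 0.1, 0.0],
    "dining hall hours":  [0.1, 0.8, 0.2],
    "transport schedule": [0.0, 0.2, 0.9],
}

def nearest(query_embedding: list[float]) -> str:
    """Return the indexed document whose embedding is most similar
    to the query embedding (the 'retrieval' in RAG)."""
    return max(INDEX, key=lambda doc: cosine_similarity(query_embedding, INDEX[doc]))

print(nearest([0.1, 0.9, 0.1]))  # closest to "dining hall hours"
```

In the deployed stack, a serving library produces the embeddings, the vector database handles indexing and approximate search at scale, and an orchestration layer such as LangChain wires retrieval into the prompt sent to the LLM.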
OPEA also provides open-source, standardised, modular, and heterogeneous RAG pipelines for enterprises. This enables faster integration and delivery of containerised AI applications tailored to specific use cases, built on a composable framework designed to sit at the top of enterprise technology stacks.
Intel's comprehensive enterprise AI stack, coupled with GenAI turnkey solutions, addresses the challenges of deploying and scaling RAG and LLM applications within enterprises and data centres.
Intel has recently announced a collaboration with Google, IBM, and other industry partners to form the Coalition for Secure AI (CoSAI). This initiative aims to enhance trust and security in AI development and deployment.