Red Hat is launching Red Hat AI 3, a platform that combines AI editions of RHEL, OpenShift, and other Red Hat technologies into a single solution designed to simplify the rollout of AI into production. With this update to its AI platform, the open-source specialist aims to help companies bring AI inference to scale in production within hybrid environments.
The platform is built on AI flavors of established technology. It combines Red Hat AI Inference Server, Red Hat Enterprise Linux AI (RHEL AI), and Red Hat OpenShift AI. Together, these AI variants of trusted open-source solutions should enable AI workloads to move faster from proof of concept to production.
MaaS, Hub, and Studio
Red Hat highlights four key innovations with the launch of Red Hat AI 3:
- Model as a Service (or MaaS): With this, Red Hat makes it possible to centrally manage AI models and make them available on demand within the organization, for both developers and applications. This way, companies retain control over their data and infrastructure.
- AI Hub: With the AI Hub, users gain access to a curated catalog of models and tools to manage the entire lifecycle of models. The hub also includes an environment for managing the deployment of models via OpenShift AI.
- Gen AI Studio: The Gen AI Studio is aimed at developers. The studio offers a place where they can experiment with models and build applications based on large language models and techniques such as retrieval-augmented generation (RAG).
- New Models: Red Hat is also adding optimized models, including the LLMs gpt-oss (from OpenAI) and DeepSeek-R1, as well as speech models such as Whisper and Voxtral Mini.
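The RAG technique mentioned above can be illustrated with a minimal sketch: retrieve the documents most relevant to a question and prepend them to the prompt sent to a language model. The corpus and the keyword-overlap scoring below are illustrative placeholders, not part of Red Hat's tooling.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Real systems use vector embeddings; naive keyword overlap stands in here.

def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the question, keep the top k."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Prepend the retrieved context so the model can ground its answer."""
    ctx = "\n".join(f"- {doc}" for doc in context)
    return f"Context:\n{ctx}\n\nQuestion: {question}\nAnswer:"

corpus = [
    "OpenShift AI manages the lifecycle of AI models.",
    "RHEL is an enterprise Linux distribution.",
    "vLLM is an inference engine for large language models.",
]
prompt = build_prompt("Which component manages AI model lifecycles?",
                      retrieve("Which component manages AI model lifecycles?", corpus))
```

The resulting prompt, grounded in retrieved documents, would then be sent to an LLM; that final generation step is omitted here.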
OpenShift AI
Within Red Hat AI 3, Red Hat OpenShift AI 3.0 plays a significant role. OpenShift AI 3.0 now also supports agent-based AI applications. For this, Red Hat introduces a Unified API based on Llama Stack and adopts the Model Context Protocol (MCP), which standardizes communication between models on the one hand and external tools and data on the other. MCP has gained popularity in recent months and is supported by key players such as Snowflake and Salesforce.
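To give an idea of what MCP traffic looks like: the protocol is built on JSON-RPC 2.0, and a tool invocation is a `tools/call` request naming the tool and its arguments. The tool name and arguments below are hypothetical examples, not taken from Red Hat's platform.

```python
import json

# MCP messages follow JSON-RPC 2.0. A client invokes a tool exposed by an
# MCP server with a "tools/call" request. The tool name ("query_database")
# and its arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# Serialize for transport; MCP servers speak JSON over stdio or HTTP.
payload = json.dumps(request)
```

The server replies with a JSON-RPC response whose result the model can use, which is how MCP connects models to external tools and data.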
Additionally, Red Hat AI 3 includes a toolkit for model adaptation, built on InstructLab technology. This allows developers to process their own data, generate synthetic data, and train and evaluate models.
From Idea to Efficient Application
For Red Hat AI 3, Red Hat emphasizes inference. With the inclusion of llm-d, an extension of the vLLM project, the platform now also supports distributed inference on Kubernetes. This should lead to lower costs, faster response times, and more efficient utilization of accelerators such as GPUs from Nvidia and AMD.
Red Hat acknowledges the complexity of successfully bringing AI projects into production. With Red Hat AI 3, the company hopes to offer a platform that removes those obstacles. The bet is that this combination of tools and integrations, built on a known and trusted Linux foundation, will be enough for large enterprises to turn their AI concepts into applications that deliver value in production and ultimately generate revenue.
The launch of the platform aligns with Red Hat’s broader ambition. The company aims to increasingly position itself as an AI specialist and not just as a key open-source player.
