Red Hat launches new versions of OpenShift, OpenShift AI and Device Edge

Red Hat is betting on artificial intelligence and is therefore updating existing products across several areas.

At KubeCon 2024, Red Hat is announcing updates to OpenShift 4.17, OpenShift AI 2.15 and Device Edge 4.17. The updates focus on AI scalability, productivity and low-latency performance for hybrid cloud and edge environments.

OpenShift 4.17: Generative AI and integrated virtualization

The latest version of Red Hat OpenShift, 4.17, introduces OpenShift Lightspeed, a generative AI-based virtual assistant available in technical preview. Lightspeed lets teams ask technical questions in plain language and get immediate answers, raising productivity without requiring additional expertise. It integrates with Red Hat OpenShift AI and Red Hat Enterprise Linux AI as model providers, so organizations can use generative AI within their own OpenShift environment.

OpenShift 4.17 also improves virtualization management, for example with safer memory overcommitment to increase virtual machine density. A new feature in technical preview enables live migration of VM data between different storage media while the virtual machines keep running. In addition, a dedicated virtualization management console gives administrators direct access to virtualization functions, making their work more efficient.

Red Hat Advanced Cluster Management for Kubernetes also gains new capabilities for managing virtual machines across multiple clusters. These include improved search and filtering and the ability to stop, start, restart or pause VMs directly from the platform.

On the security front, OpenShift 4.17 introduces network isolation for namespaces, user namespaces in pods and a Confidential Compute Attestation Operator for protecting sensitive workloads. These features increase container security without hindering development speed.
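As a rough illustration of what namespace-level network isolation means in practice, the sketch below uses the upstream Kubernetes Python client to create a NetworkPolicy that only accepts ingress traffic from pods in the same namespace. The namespace name is a placeholder, and OpenShift 4.17's own isolation features are configured through their own higher-level resources; this only shows the underlying concept.

```python
# Illustrative only: a NetworkPolicy that isolates a namespace from ingress
# traffic originating in other namespaces, created with the upstream
# Kubernetes Python client. OpenShift 4.17's namespace isolation features
# use their own higher-level APIs; this shows the concept, not that feature.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
net_v1 = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="deny-ingress-from-other-namespaces"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods in the namespace
        policy_types=["Ingress"],
        ingress=[
            # Peer with only a pod selector: traffic is allowed solely from
            # pods in the same namespace, so other namespaces are cut off.
            client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(pod_selector=client.V1LabelSelector())]
            )
        ],
    ),
)

net_v1.create_namespaced_network_policy(namespace="team-a", body=policy)  # "team-a" is a placeholder
```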

OpenShift AI 2.15: Efficiency and reliability in AI models

Red Hat’s OpenShift AI 2.15 supports enterprises in scaling and managing AI models in hybrid environments. This update includes a model registry, currently in technical preview, where companies can centrally manage models and their metadata and apply version control. This model registry helps data scientists and AI engineers deal with models and associated artifacts in a consistent and structured way. Red Hat is also offering the model registry to the open source Kubeflow community.

OpenShift AI 2.15 also introduces data drift detection, which detects anomalies in input data that might affect model performance. In addition, bias detection tools have been added from the TrustyAI community to ensure the fairness and reliability of AI models in production.
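Data drift detection boils down to comparing the distribution of live input data with the data a model was trained on. The snippet below is a minimal sketch of that idea using a two-sample Kolmogorov-Smirnov test from SciPy; it is not the TrustyAI or OpenShift AI API, just an illustration of the statistical check involved.

```python
# Minimal sketch of the data-drift idea (not the TrustyAI API): compare the
# distribution of a feature at training time with the distribution seen in
# production using a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference data
production_feature = rng.normal(loc=0.4, scale=1.2, size=5_000)  # shifted live data

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}); consider retraining.")
else:
    print("No significant drift detected.")
```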

For model optimization, OpenShift AI 2.15 adds support for LoRA (low-rank adaptation) adapters for efficient fine-tuning of large language models such as Llama 3. Because only a small set of adapter weights is trained rather than the full model, this lowers cost and power consumption and makes scaling AI more accessible. In addition, there is integration with Nvidia NIM to simplify deploying generative AI applications, and with AMD GPUs via the ROCm platform for additional processing options.
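To make the cost argument concrete: with LoRA, the base model weights stay frozen and only small low-rank adapter matrices are trained. The sketch below shows a typical LoRA configuration with the Hugging Face PEFT library; the model name and hyperparameters are illustrative placeholders, not Red Hat defaults.

```python
# Illustrative LoRA fine-tuning setup with Hugging Face PEFT; the checkpoint
# name and hyperparameters are placeholders, not OpenShift AI specifics.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Meta-Llama-3-8B"  # assumed example checkpoint (requires access)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling factor for the adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable
```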

On the model deployment front, the update adds a vLLM runtime option for KServe that is optimized for large language models. Red Hat also adds support for OCI repositories to securely store and share containerized models.
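vLLM exposes an OpenAI-compatible HTTP API, so a model served through the vLLM runtime can be queried with the standard openai Python client. In the sketch below, the endpoint URL and model name are placeholders for whatever route a KServe InferenceService exposes in a given cluster.

```python
# Querying a vLLM-served model through its OpenAI-compatible endpoint.
# The URL and model name are placeholders for a KServe InferenceService
# running the vLLM runtime; adjust them to your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="https://llama-3-demo.example.com/v1",  # assumed route to the inference service
    api_key="not-needed-for-this-sketch",
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",     # placeholder model name
    messages=[{"role": "user", "content": "Summarize what OpenShift AI 2.15 adds."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```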

Device Edge 4.17: Low latency and reliable performance at the edge

With Red Hat Device Edge 4.17, the company focuses on optimizing edge applications with real-time performance, such as autonomous vehicles and industrial machines. Device Edge combines a lightweight Kubernetes distribution, MicroShift, with Red Hat Enterprise Linux and the Ansible Automation Platform. It is designed to support small, resource-constrained devices with low latency.

The latest version offers improved latency performance, which is critical in environments where decisions must be made within milliseconds. Device Edge 4.17 also expands networking capabilities with IPv6 support and single- and dual-stack configurations, providing flexibility for network management.

For AI at the edge, Device Edge 4.17 provides support for Nvidia Jetson Orin and IGX Orin in technical preview. This enables organizations to run advanced AI models and process data locally without relying on central data centers. The combination of low-latency AI with enhanced networking options increases Device Edge’s deployability for time-critical applications at the edge.

Availability and future prospects

OpenShift 4.17 is available now; OpenShift AI 2.15 and Device Edge 4.17 follow in November 2024. With these releases, Red Hat is betting on scalable and secure solutions for companies that want to expand their AI and edge capabilities while getting the most out of the hybrid cloud, aiming to improve AI productivity, simplify complex virtualization management and boost the performance of edge applications with strict latency requirements.