HPE announces new AI solutions together with Nvidia. The two companies are joining forces on an integrated data layer that helps businesses develop, train, and deploy AI applications.
HPE and Nvidia are two peas in a pod, and both companies are eager to show it to the world. The bromance was officially sealed last summer with a joint appearance by CEOs Antonio Neri and Jensen Huang at the Sphere, and the ties have only strengthened since. On the sidelines of Nvidia’s GTC conference, HPE is announcing new AI-focused solutions.
The new solutions are designed to support businesses throughout the entire AI lifecycle, from training models to running them. Together with Nvidia, HPE aims to shorten the time to value for generative, agentic, and physical AI. The announcements focus on making AI applications more efficient and providing the right tools to deploy AI at scale.
Private Cloud AI
HPE and Nvidia are building on the joint Private Cloud AI offering launched last year during Discover in a packed Sphere. The platform is being expanded with an AI Developer System for rapidly building AI applications, combining Nvidia hardware with 32 TB of integrated storage.
With HPE Data Fabric, AI models can be fed optimized, high-quality data across hybrid cloud environments. Nvidia provides validated ‘blueprints’ so that companies can deploy AI applications faster. HPE will make these available from the end of the second quarter or early summer.
Unified Data Layer and Modular Data Centers
HPE and Nvidia are also collaborating on a unified data layer that enhances storage capabilities for AI data. This layer combines structured and unstructured data and accelerates the AI lifecycle by leveraging HPE’s data networks and storage solutions, such as the Alletra MP X10000.
A final announcement is the AI Mod Pod: a modular, performance-optimized data center for AI and HPC workloads. This mini data center supports up to 1.5 MW per module and offers companies running AI and HPC servers a cost-effective option with fast delivery. The ‘Mod Pods’ are available immediately.