Nvidia AI Enterprise adds support for workloads in containers


With a new update, Nvidia AI Enterprise can now run AI-accelerated workloads on vSphere in both Kubernetes containers and virtual machines.

Last year, Nvidia launched AI Enterprise, a suite of tools and frameworks for virtualizing AI workloads on certified systems. Such software typically required specialized, AI-focused hardware: a vexing hurdle for mainstream organizations that don’t necessarily want to invest in such systems. With Nvidia AI Enterprise, the same software and workloads should also run on more traditional servers.

Nvidia AI Enterprise has run on VMware vSphere since its inception, enabling virtualization of AI workloads on more mainstream hardware. However, the underlying systems must be certified by Nvidia itself.

Since launch, Nvidia says, the number one customer request has been compatibility with VMware Tanzu for Kubernetes containers. With the latest update, Nvidia AI Enterprise 1.1, you can now deploy AI workloads in both containers and virtual machines within a vSphere environment. Customers get an integrated, complete stack of containerized software on certified hardware, managed and optimized by Nvidia.
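To give a sense of what container deployment looks like in practice, here is a minimal, illustrative Kubernetes pod spec that requests a GPU through the standard `nvidia.com/gpu` extended resource exposed by NVIDIA's device plugin. This sketch is not taken from Nvidia AI Enterprise documentation; the pod name and container image are placeholders.

```yaml
# Illustrative sketch only: a minimal pod requesting one GPU via the
# nvidia.com/gpu extended resource. Names and image tag are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-inference-demo
spec:
  containers:
  - name: inference
    image: nvcr.io/nvidia/pytorch:21.12-py3   # example NGC container image
    resources:
      limits:
        nvidia.com/gpu: 1   # schedule onto a node with a free GPU
```

In a Tanzu-managed cluster, a spec like this would be applied with `kubectl apply -f`, and the scheduler places the pod on a GPU-equipped, certified node.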

VMware vSphere with Tanzu support will also soon roll out within Nvidia LaunchPad, so customers can test and prototype new AI jobs completely free of charge. This environment matters to Nvidia because it lowers the barrier to trying AI workloads and getting a taste of the new platform.
