Cisco introduces AI server family and AI PODs


Cisco is expanding its data center portfolio with new AI servers and AI PODs to simplify AI infrastructure deployment and support companies in scaling their AI operations.

Cisco has announced a series of new solutions aimed at AI deployments at large enterprises. First and foremost is a new AI server family, designed specifically for GPU-intensive AI tasks. It almost goes without saying that Cisco is partnering with Nvidia for these devices. The servers are built on Nvidia’s HGX supercomputing platform and are consequently fully optimized for AI workloads such as training and inference.

The new servers (UCS C885A M8) are equipped with Nvidia H100 and H200 GPUs. They also include BlueField-3 DPUs for fast and secure connectivity.

In addition, Cisco is introducing so-called AI PODs: preconfigured, GPU-equipped systems tailored to specific AI use cases and industries. AI PODs combine compute, networking, storage and cloud management, which, according to Cisco, helps customers deploy AI solutions efficiently.

The AI PODs are based on Cisco Validated Designs (CVDs) and offer customers a turnkey approach to deploying AI infrastructure. According to Cisco, these infrastructure packages eliminate the complexity of deploying AI applications and support deployments at various scales.

Accelerating AI adoption

The new products are part of Cisco’s broader strategy to help companies scale AI solutions. The new offerings are managed through Cisco’s Intersight platform, which provides centralized control and automation for easier management and configuration of AI environments. Cisco is not unique in its ambition: just about every vendor selling boxes today has variants containing Nvidia hardware on its shelves. Reference designs are commonplace, as hardware manufacturers all try to make the entry threshold for (pricey) AI servers as low as possible.