The first Nvidia server racks with a power demand of 1 MW are already in the pipeline. Cooling and powering these units will be a major challenge. Schneider Electric is looking forward to tackling that challenge, positioning itself as the specialist of choice during the Innovation Summit in Copenhagen.
Schneider Electric once again shows itself to be Nvidia’s biggest supporter, though it’s becoming increasingly clear that the love is mutual. On stage at the Innovation Summit in Copenhagen, Pankaj Sharma, EVP Secure Power at Schneider Electric, and Steve Carlini, Chief Advocate AI and Data Center, praise the tremendous pace of innovation set by Jensen Huang’s company. However, this pace seems increasingly unsustainable without proper infrastructure.
From Leisurely to Accelerated
“The data center sector worked exclusively with x86 CPUs for decades, and innovation was actually quite leisurely,” says Carlini. “With accelerated computing, everything has gained momentum. 70 percent of new data centers are being built with AI in mind. The demand for AI is rising faster than capacity can be built.”
Along with that demand, the density of the systems supporting AI workloads is also increasing. These servers are built by Nvidia around its Hopper and Blackwell GPUs, with the Rubin and Feynman generations already on the roadmap.
576 GPUs Under One Roof
The increase in density is no joke. “In 2022, the first GPT models were still trained on racks filled with Nvidia A100 chips, consuming about 25 kW per rack,” says Vladimir Prodanovic. He should know: as Principal Program Manager at Nvidia, he helped build the various clusters on which successive versions of ChatGPT were trained.
“That’s manageable with air cooling,” he continues. Racks with Hopper H100 chips already demanded 40 kW of power and sparked interest in liquid cooling. “An NVL72 rack containing 72 Blackwell Ultra GPUs will require approximately 145 kW of power.”
But it doesn’t stop there. By 2026, Nvidia wants to launch its Rubin chips and combine them in a 200 kW rack. Prodanovic: “Racks of 385 kW are planned, and by 2028, an NVL576 rack with 576 densely packed Feynman accelerators should exceed the 1 MW threshold.”
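For perspective, a quick back-of-the-envelope sketch (using only the rack figures quoted above; rack totals also include CPUs, networking, and power conversion, so these are averages per slot rather than chip specifications) of what those numbers imply per GPU:

```python
# Average power per GPU slot implied by the rack figures quoted above.
# Rack totals also cover CPUs, NVLink switching and power conversion,
# so these are rough per-slot averages, not chip TDPs.
racks = {
    "NVL72 (72x Blackwell Ultra, ~145 kW)": (145, 72),
    "NVL576 (576x Feynman, >1 MW, planned for 2028)": (1000, 576),
}

for name, (rack_kw, gpu_slots) in racks.items():
    print(f"{name}: ~{rack_kw / gpu_slots:.1f} kW per GPU slot")
# ~2.0 kW per slot for the NVL72, ~1.7 kW per slot for the NVL576
```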
Supporting Architecture
Supporting such hardware requires a different architecture. “Jensen Huang is very good at realizing his roadmaps,” Carlini laughs, “but the effects on power supply and cooling are less clear. For that, the industry looks to us.”

Schneider Electric works very closely with Nvidia to build systems and blueprints that can deliver sufficient power and remove enough heat. “This is really about co-design,” Carlini clarifies. “Schneider and Nvidia learn a lot during this phase.”
The result of this collaboration is an architecture in which the IT component of the data center no longer takes up that much physical space. The large server hall of the past is disappearing, replaced by a complex site where the white room containing the servers is surrounded by pumps, electrical systems such as power distribution units, batteries, generators, and cooling installations.
Minimal Margin for Error
“The higher the density and capacity, the smaller the margin for error in the designs,” says Kevin Brown of Schneider Electric while guiding us along various demo installations on the Summit’s exhibition floor. “Everything must be tightly integrated, from the moment power enters, through the UPS systems, to delivery at the racks.”
That power delivery isn’t straightforward. For high-density racks, Schneider Electric developed a ‘sidecar’: a power rack placed next to a compute rack that delivers 800 volts DC to the hungry Nvidia systems. “We’re looking at more; 1,500 volts is in sight,” Carlini explains.
This comes with challenges. The higher the voltages, the less suitable data centers become for people to walk around in. Prodanovic: “To replace a blade in a data center with servers receiving 1,500 volts from the PDUs, you need qualified personnel.”
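A minimal sketch helps explain why the bus voltage keeps climbing: for the same rack power, a higher voltage means proportionally less current, and therefore thinner busbars and lower resistive losses. The numbers below use the 145 kW NVL72 figure from earlier and ignore conversion losses, so they are illustrative only.

```python
# Current a single 145 kW rack would draw at the DC bus voltages mentioned
# in the article, ignoring conversion losses (illustrative only).
rack_kw = 145

for volts in (800, 1500):
    amps = rack_kw * 1_000 / volts
    print(f"{volts} V DC -> ~{amps:,.0f} A per rack")
# 800 V  -> ~181 A
# 1500 V -> ~97 A
```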
Still Early Days
Innovation in this area is still in full swing. “We’ve been building cloud data centers for 25 years,” says John Wernvik, CMO of EcoDataCenter in Sweden, where translation specialist DeepL houses its Nvidia GB200 clusters. “With AI, we’ve only been at it for 2.5 years. We’re at the beginning and need to discover together what the standards will be.”
As far as Schneider is concerned, the sidecar concept isn’t an endpoint, as it takes up too much space in the IT room. “Power and cooling are shifting as much as possible to the outside,” Carlini predicts.
Not Just Plumbing
For cooling, the margins aren’t any larger. Brown points to a Cooling Distribution Unit in a demo rack. “Coolant must be compatible with server manufacturer specifications, with the right flow rate and correct connections. Everything must be precise. Moreover, even with liquid cooling, there’s still an air-cooled component.

“When cooling a 135 kW rack with liquid, you still need to remove about 15 kW of residual heat through the air. If the airflow between the servers shifts even slightly, servers can start throttling and you won’t get the expected performance.”
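To illustrate how sensitive that residual air cooling is, here is a rough sketch based on the ~135 kW / ~15 kW split quoted above; the 10 °C air temperature rise across the rack is an assumed value for illustration, not a vendor specification.

```python
# Rough airflow needed to carry off the ~15 kW of residual heat mentioned
# above for a ~135 kW liquid-cooled rack. The 10 K air temperature rise is
# an assumption for illustration, not a specification.
residual_kw = 15.0
delta_t_k = 10.0     # assumed inlet-to-outlet air temperature rise
cp_air = 1005.0      # J/(kg*K), specific heat of air
rho_air = 1.2        # kg/m^3, air density near room conditions

mass_flow = residual_kw * 1_000 / (cp_air * delta_t_k)  # kg/s
volume_flow = mass_flow / rho_air                        # m^3/s
print(f"~{volume_flow:.1f} m^3/s (~{volume_flow * 2119:.0f} CFM) of air")
# Roughly 1.2 m^3/s (~2,600 CFM) just for the last ~11% of the heat, which
# is why a small shift in airflow between servers can already trigger
# throttling.
```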
The tolerance for error when designing a data center for dense, high-performance AI racks is therefore very small. Anyone who has to work this out on their own needs a lot of time and expertise, neither of which is in abundant supply. Schneider Electric feels called upon to provide the solution.
With the acquisition of Motivair, the company has an end-to-end solution providing the complete framework for power and cooling for servers. To make that concrete, Schneider has developed reference designs.
Detailed Blueprints
“These are real designs, not just collections of products,” says Brown, proudly flipping through some pages on a screen. We see technical drawings that go into deep detail about machine placement, as well as power connections and plumbing for liquid cooling.

Schneider Electric tries to stay at least one generation ahead of what Nvidia builds. Nvidia, in turn, can be happy with Schneider’s work, since the increasingly dense racks are nothing more than heavy boxes without proper electricity and cooling. At the Innovation Summit, Schneider Electric is increasingly emerging as the preferred subcontractor for the physical infrastructure that an Nvidia-based AI data center needs.
A Less Powerful Ferrari
This doesn’t mean Schneider can retire its classic solutions, or that regular cloud data centers can suddenly close their books. “Classic non-AI data centers are also steadily growing at twenty to twenty-five percent per year,” notes Wernvik.
Moreover, not everyone needs a data center with extreme density, Prodanovic agrees. “Everyone dreams of driving a Ferrari F40, but there are other Ferraris too,” he states, subtly suggesting that every Nvidia server equals at least some model of Ferrari.
In practice, extreme density, with racks of 400 kW or eventually 1 MW, is interesting in certain scenarios. In places where gigawatts of power are available, it makes sense to maximize their use; packing more GPUs into a smaller area pays off there.
When Megawatts Are Scarce
“That won’t happen everywhere right away,” says Martijn Aerts, Vice President for Secure Power in Belgium and the Netherlands. The Netherlands faces limitations in power availability, resulting in waiting lists. “The Netherlands needs to be smart with every megawatt it can find, though there’s still some to distribute. In Belgium, more is possible right now, but we need to think carefully.”
“If someone wants to set up such a giant AI factory in Belgium, it could face quite a bit of negative perception,” he believes. “The vision of large, high-density AI data centers is the global future that Schneider Electric envisions, but it needs some translation to fit our part of the world.”
Edge Deployment
Aerts agrees with Prodanovic: a large data center filled with the most modern racks isn’t immediately on the agenda here. “We do have many innovative data center players today. What we can do is place one rack here and there. We don’t need to build new data centers or retrofit complete sites for that. However, one rack already offers quite a lot of AI computing power.”
Aerts thinks that this kind of edge deployment of high-performance AI racks is the key. “Then we can run applications locally, for example in medicine. When that local inference takes off and the possibilities become clear, the demand for larger implementations may grow.”
Starting Small Is Still Starting
For the moment, Aerts considers it most important to roll out distributed AI capacity and simply get started. An AI factory with four gigawatts’ worth of 1 MW racks won’t appear on the outskirts of Brussels any time soon, but there is room for smaller initiatives, as evidenced by, among others, the Penta Infra data center that will house the VUB’s water-cooled, accelerated Tier-1 supercomputer.
On a global scale, Schneider Electric in Copenhagen is primarily telling the world that it’s ready to support the latest and greatest. If someone wants to roll out an AI cluster full of Nvidia Vera Rubin chips soon, Schneider Electric has the blueprints ready to support such a project in detail.
Isn’t Schneider Electric afraid the AI bubble will burst? After all, its vision of the future relies heavily on continued investment in powerful and expensive AI data centers by large companies whose market value has risen sharply in a short time. “There will indeed be winners and losers,” Carlini admits, but he doesn’t see a bubble. “We work closely with all the major players. They place orders about three years in advance and pay for them up front. From our point of view, things look very solid, at least for the next three years.”
