At KubeCon Europe 2026, the question is shifting from “what can AI do?” to “how do we keep this under control?”
The cloud community has largely achieved what it worked toward for years: infrastructure is standardized, Kubernetes is the norm, and developers can build on it without worrying about the layer beneath. Even across hyperscaler offerings, the infrastructure layer is recognizable.
The community keeps growing, but not everything runs smoothly. During the opening keynote, one figure makes this clear: Kubernetes has reached 82 percent adoption, while AI in daily production barely reaches 7 percent. The foundation is there, but what needs to run on it is not yet at the same level.
Companies are experimenting heavily with AI, but production is lagging behind. Or as Jim Zemlin, CEO of the Linux Foundation, puts it: everyone is in the proof-of-concept phase, but few companies have actually made it operational.
AI is everywhere (but that doesn’t mean it works everywhere yet)
Copilots, chatbots, agents, and internal tools are being built everywhere. Yet too many proof-of-concepts still fail to make it into production. We hear this not only at KubeCon, but from every cloud, hosting, and infrastructure specialist.
Many projects work but never cross the finish line because they are either too expensive or too complex to manage. Right now, the hard part is not building AI, but keeping it running.
This nuance is also reflected in behind-the-scenes conversations. Companies often build new AI solutions separately from their existing infrastructure, only to have to integrate them later. This causes delays, but above all, complexity. Chris Aniszczyk, CTO at CNCF, highlights this problem: teams that “need to do something with AI quickly” build separate stacks, only to realize later that they have to re-secure, scale, and manage them within their existing platform.
Inference is the ‘new problem’
One term that constantly comes up is inference: not the training of models, but the moment they actually have to do something. That is where the real challenge lies today. Inference workloads place different demands on infrastructure than traditional applications: they require different forms of load balancing, different ways of scaling, and above all, a different perspective on costs. This makes it a problem that cannot be solved with a single tool.
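To make that concrete, here is a minimal sketch of what "different scaling" can look like on Kubernetes: a HorizontalPodAutoscaler that scales a hypothetical inference deployment on in-flight requests rather than CPU. The names (inference-server, concurrent_requests) are illustrative, and the custom metric assumes a metrics adapter is installed; this is a sketch of the pattern, not something demonstrated at the conference.

```yaml
# Sketch: autoscale a hypothetical inference deployment on request
# concurrency instead of CPU. "inference-server" and "concurrent_requests"
# are illustrative names; the Pods metric requires a custom metrics adapter.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: inference-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inference-server
  minReplicas: 1
  maxReplicas: 8
  metrics:
  - type: Pods
    pods:
      metric:
        name: concurrent_requests      # exposed via a custom metrics adapter
      target:
        type: AverageValue
        averageValue: "4"              # aim for ~4 in-flight requests per replica
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # GPU-backed pods are expensive to churn
```

The slow scale-down window reflects the cost argument: GPU-backed replicas are expensive to start and tear down, so inference autoscaling tends to be tuned very differently from a stateless web service.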
According to CNCF, the entire AI market is shifting in that direction. By the end of this year, inference will already account for the majority of AI workloads, with a growth path they describe as “unprecedented in the tech sector.” It makes sense: training advanced models costs billions and has rapidly become the domain of a handful of specialized companies.
Kubernetes takes on a new role
In this context, Kubernetes takes on a different meaning. It is no longer just the layer on which applications run, but is gradually becoming the place where AI workloads are controlled and managed. Two-thirds of generative AI workloads already run on Kubernetes today because, according to the Linux Foundation, it is the only environment that offers sufficient flexibility.
That’s why Kubernetes is increasingly becoming the foundation for AI, where decisions are made about resources, scale, and performance. But at the same time, it’s becoming clear that Kubernetes itself must also evolve to take on that role.
Major players are moving to the same layer
You can also see this shift in who is on stage. Nvidia, AWS, Google Cloud, and Red Hat all bring their own stories. Nvidia focuses on the full AI stack, from hardware to software. Cloud providers are integrating Kubernetes more deeply into their AI platforms as a control layer. Red Hat continues to bridge the gap to enterprise environments. Everyone is reaching for the same foundation, and open source is a necessary basis for enabling collaboration.
Is AI running, or is it actually working?
The fact that AI workloads run on Kubernetes doesn’t mean they run optimally. Many companies are still in a phase reminiscent of early cloud adoption. Workloads aren’t truly optimized yet, costs are rising quickly, and tools haven’t yet adapted to these new workflows.
In Europe, an additional factor comes into play. According to the State of Cloud report, private cloud remains the dominant choice there, with about 39 percent of developers opting for it. Hybrid cloud is growing, but lags slightly behind other regions. That caution is tied to regulation, which still dictates the pace of innovation in the region.
Aniszczyk also speaks of new pressure on infrastructure teams. Not only do they have to integrate AI, they also have to deal with added complexity, a growing toolchain, and new security risks.
From infrastructure to intelligence
Kubernetes is finally doing what it promised. The infrastructure works, is scalable, and is widely used. But AI is not yet at that level. The technology is there, the interest is too, but operationalizing it isn’t happening yet. And that is exactly where the biggest challenge for companies lies today. It’s not what AI can do that’s the problem, but how you make it reliable, affordable, and scalable.