Docker aims to simplify the execution of local AI models with Model Runner.
Docker, primarily known for its container tooling, is now also moving into generative AI. With Docker Model Runner, a new feature in Docker Desktop 4.40, it aims to help developers build and run AI models locally on their own hardware and within their existing workflows.
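The intended workflow can be sketched with the Model Runner CLI, which adds a `docker model` subcommand to Docker Desktop. The model name below is illustrative; which models are available depends on Docker Hub's `ai/` namespace:

```shell
# Pull a model (packaged as an OCI artifact) from Docker Hub
docker model pull ai/smollm2

# List models available locally
docker model list

# Run a one-off prompt against the model
docker model run ai/smollm2 "Summarize what an OCI artifact is."
```

This mirrors the familiar `docker pull` / `docker run` pattern, which is precisely the point: models are handled with the same verbs as containers.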
Local Execution, More Control
By running models locally, developers gain more control, better performance, lower costs, and improved data privacy. Instead of stitching together disparate tools and configurations, Docker offers an all-in-one solution within the familiar container environment.
For Mac users, there is GPU acceleration via the integrated graphics of Apple silicon, which makes inference faster than running models inside a virtual machine, for example. GPU acceleration on Windows is planned.
Models are packaged as OCI artifacts so they can be managed and distributed like regular Docker images via Docker Hub or an internal registry. This lets them slot into existing pipelines and be handled with the familiar tools for automation and access control.
Partnership and Ecosystem
Docker is collaborating with Google, Hugging Face, Spring AI, and VMware to provide access to popular frameworks and models. This allows developers to build, test, and move directly to production in a single environment. All of this happens locally, without relying on the cloud.