Mistral launches Ministral models for smartphones and laptops


Mistral is launching two new LLMs tailored to run locally on devices. Ministral 3B has three billion parameters, Ministral 8B eight billion.

In a blog post, Mistral announced Les Ministraux, the French AI company’s latest series of LLMs, consisting of two models. The designation Small Language Models may be more appropriate here, as Mistral has deliberately kept the models small enough to fit on a laptop or smartphone.

As usual, the names of the models refer to their number of parameters: Ministral 3B has three billion, while the larger Ministral 8B has eight billion. According to Mistral, both models outperform Mistral 7B on every benchmark and also more than hold their own against Google’s Gemma 2 (2B) and Meta’s Llama 3.1 (8B) and Llama 3.2 (3B).

Tailored to devices

Mistral is clearly following AI industry trends. The Ministral models are not designed with a huge data center in mind, but are deliberately made smaller so they can run locally on a laptop or smartphone, without an intermediate step through the cloud. According to Mistral, demand for this is growing among its customers, mainly for privacy reasons. It can only help sell AI PCs and smartphones.

Both Ministral 3B and Ministral 8B can process up to 128,000 tokens per prompt, a context window roughly on par with OpenAI’s GPT-4 Turbo.

Mistral made its presence felt a year ago with the launch of Mistral 7B, a model offered completely free of charge. Since then, the range has expanded rapidly. In September, the company launched its first model with multimodal capabilities, Pixtral 12B. As a European player, Mistral gets to sit at the table with U.S. AI leaders such as OpenAI, Google and Anthropic.
