Mistral Small 3.1 Launched: Powerful Capabilities in a Compact Model


According to Mistral AI, Small 3.1 surpasses the performance of comparable competing models.

Mistral AI has introduced a new lightweight model, Mistral Small 3.1. The model is open-source and therefore widely accessible, and it can process both text and images with just 24 billion parameters, a fraction of the size of the most advanced models on the market.

Open-source

In a blog post, Mistral explains that compared to its predecessor Small 3, Small 3.1 offers improved text performance, better multimodal understanding, and an expanded context window of up to 128,000 tokens. It can also generate output at 150 tokens per second. Mistral attributes this performance to an alternative strategy: the company focuses on algorithmic improvements and training optimization rather than on deploying ever more GPUs for each new model.
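As a rough illustration of what that throughput figure means in practice, the sketch below converts the quoted 150 tokens per second into generation latency. The numbers are illustrative only; real-world throughput depends on hardware, batch size, and serving setup.

```python
# Back-of-the-envelope latency from a fixed decode rate.
# The 150 tokens/s figure comes from Mistral's announcement; everything
# else here is an illustrative assumption.
THROUGHPUT_TOK_PER_S = 150

def generation_time_s(n_tokens: int, tok_per_s: float = THROUGHPUT_TOK_PER_S) -> float:
    """Seconds needed to stream n_tokens at a constant decode rate."""
    return n_tokens / tok_per_s

# A ~1,500-token answer would take about 10 seconds at this rate.
print(generation_time_s(1500))
```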

[Figure: GPQA Diamond benchmark comparison. Source: Mistral]

By releasing its models as open source, Mistral again shows that it wants to make AI broadly accessible rather than opting for closed models like those of OpenAI. At the same time, the company benefits from the research and development efforts of the wider AI community. The approach is paying off: with a valuation of 5.5 billion euros, Mistral can call itself Europe's most important AI company.

Mistral Small 3.1 can be downloaded via Hugging Face, accessed through Mistral's API, or used on Google's Vertex AI platform. Besides Small 3.1, the company recently launched Mistral Large 2, Pixtral, and Codestral.
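For readers who want to try the model through Mistral's API, here is a minimal sketch using only the Python standard library. The endpoint and payload shape follow Mistral's public chat-completions API, but the model alias `mistral-small-latest` and the `MISTRAL_API_KEY` environment variable are assumptions; check Mistral's documentation before relying on them.

```python
import json
import os
import urllib.request

# Mistral's chat-completions endpoint (see Mistral's API docs).
API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "mistral-small-latest") -> dict:
    """Assemble the JSON payload for a single-turn chat completion."""
    return {
        "model": model,  # assumed alias for Small 3.1; verify against the docs
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """Send the prompt; requires MISTRAL_API_KEY to be set in the environment."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same payload shape works with any HTTP client; Mistral also ships an official Python SDK that wraps this endpoint.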
