Mistral has announced the Mistral 3 family, featuring new large language models for cloud and edge environments.
The models, including Mistral Large 3 and the Ministral series, are all released as open source under the Apache 2.0 license. The family combines large models for demanding workloads with compact variants for local and edge applications. Companies and developers can customize and run the models themselves, from data centers down to laptops and embedded systems.
Open source
Mistral Large 3 is a mixture-of-experts model with 41 billion active parameters out of 675 billion in total. It was trained from scratch on approximately 3,000 Nvidia H200 GPUs and comes in three variants: base, instruct, and reasoning.
According to Mistral, Large 3 performs comparably to other open instruction-tuned models on general prompts. The model supports image input and multilingual dialogue, including languages beyond English and Chinese. On the open community leaderboard LMArena, Mistral Large 3 ranks high in the category of non-reasoning open models.
The Ministral 3 line focuses on local scenarios and consists of models with 3, 8, and 14 billion parameters. Each size comes in three variants: base, instruct, and reasoning, all with image comprehension and multilingual support.
Mistral claims that the Ministral models offer a favorable cost-performance ratio, partly because they generate fewer tokens for comparable tasks. For use cases where accuracy is central, the reasoning variants can “think” longer: as an example, Mistral cites a score of 85 percent on the AIME ’25 benchmark for the Ministral 14B reasoning variant.
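For developers who want to try one of the compact variants locally, a minimal sketch along these lines should work with the Hugging Face transformers library; note that the repository ID below is a placeholder, since the exact Ministral 3 model names on Hugging Face are not confirmed here.

    # Minimal local-inference sketch with Hugging Face transformers.
    # The repo ID is hypothetical; check Mistral's Hugging Face
    # organization for the actual Ministral 3 model names.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="mistralai/Ministral-3-8B-Instruct",  # hypothetical repo ID
        device_map="auto",   # use a GPU if one is available
        torch_dtype="auto",  # let the library pick a suitable precision
    )
    out = pipe(
        "Explain in two sentences why small language models suit edge devices.",
        max_new_tokens=128,
    )
    print(out[0]["generated_text"])

On a laptop without a large GPU, the 3B variant would be the more realistic choice of the three sizes.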
Mistral 3 is now available through various channels, including Azure Foundry, Mistral AI Studio, Hugging Face, and Amazon Bedrock.
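For hosted access, Mistral's official Python SDK (the mistralai package) uses a standard chat-completions pattern. A minimal sketch, assuming the model is reachable under an alias such as mistral-large-latest; the exact identifier for Large 3 may differ:

    # Minimal hosted-inference sketch with Mistral's Python SDK
    # (pip install mistralai). The model alias is an assumption;
    # consult Mistral's docs for the exact Large 3 identifier.
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    response = client.chat.complete(
        model="mistral-large-latest",  # assumed alias for Large 3
        messages=[{"role": "user", "content": "Summarize mixture-of-experts models in one line."}],
    )
    print(response.choices[0].message.content)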
