Mistral has updated its open-source coding model; this version is said to be twice as fast as the previous one.
Mistral has updated its generative AI model for coding, Codestral, to version 25.01. In a blog post, the company promises that it will be the “clear leader for coding in its class” and that it is twice as fast as the previous version.
Focused on coding
Codestral is optimized for low-latency, high-frequency use and supports tasks such as code enhancement and test generation. The model supports over 80 programming languages, so it can be used by a wide range of developers. According to Mistral, Codestral 25.01 scores highly on Python coding, achieving 86.6 percent on the HumanEval benchmark. That test simulates commonly encountered coding problems in the Python programming language.
This version will be available to developers in the IDE plugin partner program. They can deploy Codestral 25.01 locally through a code assistant, or they can use the model’s API through Mistral’s La Plateforme and Google Vertex AI. The model is available as a preview on Azure AI Foundry and will soon be available on Amazon Bedrock.
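For developers who take the API route, calling Codestral looks much like any other chat-completions request. The sketch below uses Python with the requests library against Mistral’s public chat completions endpoint; the endpoint URL, the “codestral-latest” model alias, and the environment variable name are assumptions based on Mistral’s general API conventions, not details from this announcement.

```python
import os
import requests

# Hypothetical example: calling Codestral via Mistral's La Plateforme API.
# Endpoint and model alias are assumptions, not taken from the announcement.
API_KEY = os.environ["MISTRAL_API_KEY"]  # assumed environment variable name

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "codestral-latest",
        "messages": [
            {
                "role": "user",
                "content": "Write a Python function that checks whether a string is a palindrome.",
            }
        ],
        "temperature": 0.2,  # low temperature for more deterministic code output
    },
    timeout=30,
)
response.raise_for_status()
# The response follows the familiar chat-completions shape
print(response.json()["choices"][0]["message"]["content"])
```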
Mistral doubles Codestral’s speed
Code writing was one of the earliest applications of AI models, including larger general-purpose models such as OpenAI’s o3 and Anthropic’s Claude. Over the past year, specialized coding models have made a big leap in capability, and they continue to evolve rapidly.