New Small Language Models from Microsoft Combine Efficiency with High Level of Mathematical and Scientific Reasoning
Microsoft launches three new AI models: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning. They add powerful reasoning capabilities to compact architectures, making them suitable for use on PCs, laptops, and even mobile devices.
Compact, yet Capable
The flagship, Phi-4-reasoning, has 14 billion parameters and is designed to perform complex reasoning tasks with an accuracy that rivals much larger models. The ‘plus’ variant adds reinforcement learning and processes up to 1.5 times more tokens at inference, which improves accuracy at the cost of longer response times and more computing power.
Phi-4-mini-reasoning, with 3.8 billion parameters, is optimized for educational purposes and runs on devices with limited computing power. The focus here is on mathematical calculations.
Built with Smart Data
“The models were trained on synthetic teaching data, generated by, among others, DeepSeek-R1 and OpenAI’s o1-mini and o3-mini,” Microsoft writes in a blog post. “Phi-4-mini-reasoning in particular was presented with more than a million mathematical problems, ranging from high-school to doctoral level, including step-by-step solutions to teach it the reasoning process.”
According to Microsoft, the new models perform better than OpenAI’s o1-mini and DeepSeek-R1-Distill-Llama-70B on Ph.D.-level benchmarks. Phi-4-reasoning-plus even outperformed the much larger DeepSeek-R1 model (671 billion parameters) on the AIME 2025 math test, according to the tech giant.
The models are now available through Azure AI Foundry and Hugging Face.