Gemini 2.5 outperformed nearly all of the 139 human teams in a competition for solving complex algorithmic problems.
Google entered its AI model Gemini 2.5 in the International Collegiate Programming Contest (ICPC), where it won a gold medal. Gemini solved ten out of twelve algorithmic puzzles. Only four out of 139 human teams managed to do the same. According to Google, this is “an important step towards general AI.”
Faster and Smarter Than Almost Everyone
During the competition, the human participants were given a 10-minute head start. After that, Gemini 2.5 Deep Think, a version capable of “thinking” for extended periods, began the five-hour session. Within 45 minutes, Gemini had already submitted eight correct solutions, and within about three hours the count stood at ten correct answers. Gemini finished second in the overall rankings.
A notable highlight was Problem C: a complex problem that no human team solved, but Gemini did. The AI found the correct solution in about thirty minutes, using dynamic programming combined with a nested search.
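Google has not published the details of that solution here, but the general pattern of "dynamic programming with a nested search" is a well-known competitive-programming technique. The sketch below is a minimal, hypothetical illustration on a textbook task (splitting an array of non-negative numbers into k contiguous segments while minimising the largest segment sum), not Gemini's actual Problem C code; the function name and the example input are assumptions for demonstration only.

```python
from itertools import accumulate

def split_min_max(values, k):
    """Split `values` (non-negative numbers) into k contiguous segments,
    minimising the largest segment sum.

    DP state: dp[i] = best answer for the first i elements with the current
    number of segments. Each transition runs a binary search over the split
    point p, because dp[p] is non-decreasing in p while the last-segment sum
    prefix[i] - prefix[p] is non-increasing, so their max is minimised near
    the crossing point."""
    n = len(values)
    prefix = [0] + list(accumulate(values))   # prefix[i] = sum of first i values

    # One segment: the answer is simply the prefix sum.
    dp = prefix[:]

    for seg in range(2, k + 1):
        new_dp = [float("inf")] * (n + 1)
        for i in range(seg, n + 1):
            # Binary search for the first split point p where the left part
            # (dp[p]) becomes at least as large as the right part.
            lo, hi = seg - 1, i - 1
            while lo < hi:
                mid = (lo + hi) // 2
                if dp[mid] >= prefix[i] - prefix[mid]:
                    hi = mid
                else:
                    lo = mid + 1
            # The optimum is at the crossing point or just before it.
            best = max(dp[lo], prefix[i] - prefix[lo])
            if lo > seg - 1:
                best = min(best, max(dp[lo - 1], prefix[i] - prefix[lo - 1]))
            new_dp[i] = best
        dp = new_dp

    return dp[n]

print(split_min_max([7, 2, 5, 10, 8], 2))   # -> 18, i.e. [7, 2, 5] | [10, 8]
```

The nested binary search cuts each DP transition from a linear scan to a logarithmic one, which is the kind of speed-up that makes otherwise intractable contest problems fit within the time limit.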
What’s Next?
According to Google, this demonstrates that AI can not only summarize texts but also handle complex logic and mathematics. The company sees applications in sectors such as semiconductor design and biotechnology, where deep reasoning is crucial. Internal tests reportedly show that Gemini 2.5 would have also won gold in previous ICPC editions (2023 and 2024).
Google doesn’t share figures on energy consumption but emphasizes that such powerful AI is expensive to run. Nevertheless, the company believes that an AI capable of cracking problems this complex can justify those costs.