Google expands access to Gemini 2.0 AI models and introduces experimental versions


The Gemini model family gains two new experimental versions, and existing models become more widely available.

In a blog post, Google announced that it is making Gemini 2.0 Flash generally available through Google AI Studio and Vertex AI, as well as introducing new experimental models.

Gemini 2.0 Pro and 2.0 Flash Thinking Experimental

In addition to the broader rollout of Gemini 2.0 Flash, Google is launching an experimental version of Gemini 2.0 Pro. This model excels at complex prompts and coding and features a context window (i.e., the amount of text the model can remember) of 2 million tokens. This allows it to process extensive documents and videos and even invoke tools such as Google Search.

In addition, Google has introduced the 2.0 Flash Thinking Experimental model. This experimental model, based on 2.0 Flash, is optimized for logic and reasoning and can work through prompts in multiple steps. It is designed to give users insight into the thought process behind the generated responses.

2.0 Flash-Lite: Small and cost-effective

Another new addition is Gemini 2.0 Flash-Lite, Google’s most cost-effective AI model. It retains the speed and price of 1.5 Flash but outperforms it on most benchmarks. With a context window of 1 million tokens and multimodal input, it is well suited to industries such as marketing and retail. Flash-Lite is now being rolled out as a public preview through Google AI Studio and Vertex AI.
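For developers, the models above are reachable through the Gemini API. The sketch below is a minimal illustration, not official Google code: the model identifier strings and the `pick_model` helper are assumptions for illustration, mapping the workloads described in this article (complex coding prompts, cost-sensitive tasks, general use) to a model choice. The actual API call, commented out at the bottom, requires an API key from Google AI Studio.

```python
# Hypothetical model identifiers matching the tiers described above
# (assumed names -- check Google AI Studio for the current IDs).
MODELS = {
    "flash": "gemini-2.0-flash",            # generally available
    "pro": "gemini-2.0-pro-exp",            # experimental, 2M-token context
    "flash-lite": "gemini-2.0-flash-lite",  # public preview, 1M-token context
}

def pick_model(task: str) -> str:
    """Illustrative helper: choose a model ID by workload.

    Complex coding or long-context work -> 2.0 Pro Experimental;
    cost-sensitive workloads -> Flash-Lite; everything else -> Flash.
    """
    if task in ("coding", "long-context"):
        return MODELS["pro"]
    if task == "cost-sensitive":
        return MODELS["flash-lite"]
    return MODELS["flash"]

# Sending a prompt (requires an API key; sketch using the
# google-generativeai Python SDK):
# import os
# import google.generativeai as genai
# genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
# model = genai.GenerativeModel(pick_model("coding"))
# print(model.generate_content("Summarize this document.").text)
```

The same model IDs can also be used from Vertex AI, where access is managed through a Google Cloud project rather than an API key.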