Google launches Gemini 2.0: the foundation for agent experiences

Google today launches its latest model, Gemini 2.0, which is intended to serve as the foundation for agent experiences. The first member of the family has also been announced: Gemini 2.0 Flash.

Google is launching its latest AI model, Gemini 2.0, which it calls its most advanced AI model to date, with multimodal capabilities such as native image and audio output and integrated tool use. It is meant to form the basis for agent experiences that can plan, remember and act under your guidance.

The first member of the Gemini 2.0 family has also already been announced: Gemini 2.0 Flash. This model is available as an experimental model to developers via the Gemini API in Google AI Studio and Vertex AI. Users worldwide can already use the chat-optimized experimental version of 2.0 Flash. In January, the model will be made generally available, with multiple model sizes.

Gemini 2.0 Flash

Google today launched its first model within the Gemini 2.0 family: Gemini 2.0 Flash. This model offers low latency and improved performance. Through the Gemini API in Google AI Studio and Vertex AI, developers can get started with this model right away.
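As a rough sketch of what getting started might look like for developers, the snippet below calls the model through the Python `google-generativeai` SDK. The model identifier `gemini-2.0-flash-exp` and the `GEMINI_API_KEY` environment variable are assumptions for illustration; check Google AI Studio for the exact model name and key setup.

```python
import os

# Assumed experimental model identifier; verify the exact name in Google AI Studio.
MODEL_NAME = "gemini-2.0-flash-exp"


def build_request(prompt: str) -> dict:
    """Bundle the assumed model name and a prompt into a simple request payload."""
    return {"model": MODEL_NAME, "contents": prompt}


if __name__ == "__main__" and os.environ.get("GEMINI_API_KEY"):
    # Requires: pip install google-generativeai
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel(MODEL_NAME)
    request = build_request("Summarize what an agentic AI model is in one sentence.")
    response = model.generate_content(request["contents"])
    print(response.text)
```

The API call only runs when an API key is present in the environment; the same model is also reachable through Vertex AI, as the article notes.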


In addition, users worldwide can already use a chat-optimized experimental version of 2.0 Flash. You can select it from the model drop-down menu on both desktop and mobile.

Gemini 2.0 Flash builds on the success of 1.5 Flash, currently Google's most popular model among developers. On key benchmarks, Gemini 2.0 Flash outperforms 1.5 Flash, while running at twice the speed.


But there is more to this new model. Beyond support for multimodal input such as images, video and audio, 2.0 Flash now also supports multimodal output, such as natively generated images mixed with text and steerable, multilingual text-to-speech (TTS) audio.

‘Agentic’ experiences

In addition to launching its latest Gemini model, Google is also emphasizing the importance of responsible AI development. “We believe responsible AI development should happen from the beginning. To test how agent experiences can work safely and practically, we are introducing a number of research prototypes and experiments to our trusted testing community,” Google said.


Those research prototypes include Project Astra, a universal AI assistant with multimodal reasoning capabilities; Project Mariner, a prototype focused on complex human interactions via Gemini 2.0; Jules, an experimental AI coding agent integrated into GitHub workflows; and domain-specific agents that support both the virtual world of video games and robotics.
