
Google Unveils Gemini 2.0 AI Models to Public


Google has announced the general availability of its Gemini 2.0 AI models, including the updated 2.0 Flash, the experimental 2.0 Pro, and the cost-effective 2.0 Flash-Lite. These models are now accessible through the Gemini API in Google AI Studio and Vertex AI, as well as via the Gemini app on both desktop and mobile platforms.
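For developers, the simplest way to reach these models is a `generateContent` request against the Gemini API's public REST endpoint. The sketch below only builds the JSON body locally and shows the endpoint URL; the model name `gemini-2.0-flash` and the payload shape follow Google's documented convention, while the API key and prompt are placeholders.

```python
import json

# Assumption: replace with a key generated in Google AI Studio.
API_KEY = "YOUR_API_KEY"
MODEL = "gemini-2.0-flash"
# Endpoint shape per Google's generateContent REST convention.
URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent?key={API_KEY}"
)

def build_payload(prompt: str) -> dict:
    """Build the JSON body for a minimal text-only generateContent call."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

payload = build_payload("Summarize the Gemini 2.0 model lineup in one sentence.")
print(json.dumps(payload, indent=2))
# Sending it would be e.g.: requests.post(URL, json=payload)
```

The same body works against both Google AI Studio keys and Vertex AI, which differ mainly in authentication and base URL rather than payload shape.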

The 2.0 Flash model is designed for developers seeking low latency and enhanced performance, supporting multimodal reasoning with a context window of up to one million tokens. This makes it suitable for high-volume, high-frequency tasks at scale.

For more complex applications, the experimental 2.0 Pro model offers improved coding performance and the ability to handle intricate prompts. It features an expanded context window of two million tokens and integrates tools such as Google Search and code execution capabilities.
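The tool integrations mentioned above are exposed in the REST schema as a `tools` array in the request body. The sketch below assumes the field names `google_search` and `code_execution` from Google's published Gemini 2.0 schema; treat it as an illustrative payload, not a guaranteed contract, and verify against current documentation.

```python
def build_tool_payload(prompt: str) -> dict:
    """Sketch of a generateContent body that enables Google Search grounding
    and code execution (field names assumed from the public REST schema)."""
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "tools": [
            {"google_search": {}},   # grounding via Google Search
            {"code_execution": {}},  # let the model run generated code
        ],
    }

print(build_tool_payload("Plot the first ten Fibonacci numbers."))
```

When both tools are enabled, the model can decide per request whether to search, execute code, or answer directly.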

Addressing cost concerns in AI development, Google has also introduced the 2.0 Flash-Lite model. This version provides a cost-efficient alternative while maintaining performance, aiming to make advanced AI more accessible to a broader range of developers.

Additionally, the Gemini app has been updated to include the 2.0 Flash Thinking Experimental model, which enhances reasoning capabilities and connects to services like YouTube, Maps, and Search. This integration enables more interactive and comprehensive AI-powered research.

These developments reflect Google’s ongoing progress in advancing AI technology and providing diverse options for developers and users, catering to various needs and budgets.