Google launches Nano Banana 2 model with faster image generation

Google today announced Nano Banana 2, the latest version of its popular image generation model. Technically known as Gemini 3.1 Flash Image, the new model can create more realistic images than its predecessor, and it now becomes the default image generation model in the Gemini app's Fast, Thinking, and Pro modes.

The company first released the original Nano Banana in August 2025; users went on to generate millions of images with it in the Gemini app, and it proved particularly popular in countries like India. In November, Google released Nano Banana Pro, a model that allows for more detailed, higher-quality image creation.

The new Nano Banana 2 retains some of the high-fidelity characteristics of the Pro model but generates images faster. Google states the model can create images at resolutions ranging from 512 pixels up to 4K, in various aspect ratios.

Nano Banana 2 can maintain character consistency for up to five characters and fidelity for up to 14 objects within a single workflow, enabling better storytelling. Users can also issue complex, nuanced requests for image generation. The model produces media with more vibrant lighting, richer textures, and sharper detail.

With this launch, Nano Banana 2 becomes the default model for image generation across all modes in the Gemini app. Google is also making it the default image generation model in Flow, its video editing tool. In Search, Nano Banana 2 will become the default for image generation via Google Lens and in AI Mode across 141 countries, on the Google app and on the web on both desktop and mobile.

For subscribers on Google’s higher-end plans, Google AI Pro and Ultra, the Nano Banana Pro model will remain available for specialized tasks through the regenerate option in the three-dot menu.

For developers, Nano Banana 2 will be available in preview through the Gemini API, the Gemini CLI, and the Vertex AI API. It will also be accessible through AI Studio and Antigravity, the company's development tool released last November.

Google confirmed that all images created with the new model will include a SynthID watermark, which is Google’s method for denoting AI-generated content. The images are also interoperable with C2PA Content Credentials, a standard created by an industry consortium including companies like Adobe, Microsoft, Google, OpenAI, and Meta. Google noted that since launching SynthID verification in the Gemini app in November, it has been used over 20 million times.