Google rolls out Gemini 3.1 Pro for complex reasoning
Google has begun rolling out Gemini 3.1 Pro, an upgraded version of its flagship AI model designed for more complex reasoning tasks, across consumer, developer and enterprise products.
The model is available in preview for developers through the Gemini API in Google AI Studio. It also appears in Gemini CLI, Google Antigravity and Android Studio. Enterprise customers can access it in Vertex AI and Gemini Enterprise, while consumers can use it through the Gemini app and NotebookLM.
Google described Gemini 3.1 Pro as a step forward in what it calls the core reasoning of the Gemini 3 series, intended for cases where a direct answer is not enough and users need multi-step reasoning and synthesis.
Benchmark claims
Google highlighted performance on ARC-AGI-2, a benchmark that tests whether a model can solve unfamiliar logic patterns. The company said Gemini 3.1 Pro achieved a verified score of 77.1% and described the result as more than double the reasoning performance of Gemini 3 Pro.
The release follows a recent upgrade to Gemini 3 Deep Think, which Google has linked to work in science, research and engineering. It described Gemini 3.1 Pro as the "upgraded core intelligence" behind those results.
Developer rollout
For software teams, the preview places the model across several parts of Google's development stack. Gemini API access via AI Studio offers a standard path for integrating it into applications. Gemini CLI provides a command-line interface, while Android Studio extends availability to mobile development workflows. Google Antigravity is listed as an agent-focused development platform where the model can be tested.
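For teams taking the AI Studio route, integration looks much like any other Gemini model call through the `google-genai` Python SDK. The sketch below is illustrative only: the model identifier `gemini-3.1-pro-preview` and the helper `build_prompt` are assumptions, not names confirmed by Google, and the API call runs only when a `GEMINI_API_KEY` environment variable is present.

```python
# Minimal sketch of calling Gemini 3.1 Pro through the google-genai SDK.
# ASSUMPTIONS: the `google-genai` package is installed, GEMINI_API_KEY is
# set, and "gemini-3.1-pro-preview" is the preview model identifier
# (the real identifier may differ).
import os

MODEL_ID = "gemini-3.1-pro-preview"  # assumed preview identifier


def build_prompt(task: str) -> str:
    """Wrap a task in a multi-step reasoning instruction (illustrative helper)."""
    return f"Think through this step by step, then answer:\n{task}"


if os.environ.get("GEMINI_API_KEY"):
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(
        model=MODEL_ID,
        contents=build_prompt("Summarise the trade-offs of event sourcing."),
    )
    print(response.text)
```

The same request shape carries over to Vertex AI for enterprise deployments; only the client configuration changes, not the prompt or model call.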
Google is shipping Gemini 3.1 Pro across its consumer and developer products, framing the broad rollout as a way to bring the same underlying model improvements into everyday software use.
Product examples
Google's examples focused on design, coding and data visualisation. One described generating animated SVG graphics from a text prompt, producing "website-ready, animated SVGs" and emphasising that the output is code-based rather than pixel-based.
Another example described connecting a public telemetry stream to build "a live aerospace dashboard" that visualised the International Space Station's orbit.
In an interactive design example, Gemini 3.1 Pro generated code for a 3D starling murmuration. Google said the experience included hand-tracking and audio that changed based on the simulated birds' movement.
Google also described a creative coding scenario based on literature, in which the model generated a modern personal portfolio inspired by Emily Brontë's Wuthering Heights. It said the interface aligned with the novel's tone.
Consumer access
In the Gemini app, Gemini 3.1 Pro is rolling out with higher usage limits for people on Google AI Pro and Ultra plans. NotebookLM access is limited to Pro and Ultra users, according to Google.
The staged rollout reflects a broader pattern in the AI market, where model upgrades often reach paying subscribers and developers first as vendors balance demand with the cost of offering advanced models at scale.
Path to availability
Google released Gemini 3.1 Pro in preview as it continues validation and development, pointing to "ambitious agentic workflows" as a focus area. It expects to make the model generally available soon.
Google also cited user feedback and the pace of progress since the release of Gemini 3 Pro in November as drivers of the update.