Google Ships Gemini 3.1 Pro With Double the Reasoning Power — Plus Computer Use, Multimodal Embeddings
Google released Gemini 3.1 Pro in February 2026, delivering more than double the reasoning performance of Gemini 3 Pro with a 1-million-token context window and a 77.1% score on ARC-AGI-2. The model reasons across text, images, audio, video, and code. Google also brought Computer Use support to Gemini for the first time, matching Anthropic's capability, alongside a new multimodal embedding model that accepts text, image, video, and audio inputs. Gemini 3 Flash became the default model in the Gemini app, providing what Google describes as PhD-level reasoning at high speed. In Workspace, Gemini now powers AI Overviews in Drive search and enhanced collaboration in Docs, Sheets, and Slides. Google is transitioning from Google Assistant to Gemini across its ecosystem throughout 2026.
Gemini 3.1 Pro's technical leap is most notable in its reasoning capabilities. The 77.1% score on ARC-AGI-2 is the highest reasoning benchmark result yet achieved by a commercial model, surpassing the previous record held by OpenAI's o3 system by a meaningful margin. Google DeepMind achieved this through a combination of improved chain-of-thought training, synthetic data augmentation, and a novel attention mechanism that allows the model to maintain coherence across extremely long contexts. The Computer Use feature enables Gemini to interact with desktop applications, web browsers, and operating system interfaces — a capability that Anthropic pioneered with Claude but that Google has now matched, with tighter integration into Chrome OS and Android devices. Early enterprise testers report that Gemini's Computer Use reduces manual workflow time by an average of 35% for repetitive digital tasks.
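Computer Use features of this kind generally follow an observe-act loop: capture the screen, ask the model for the next UI action, execute it, and repeat. The sketch below illustrates that loop only; every function here (capture_screen, propose_action, execute) is an illustrative stub operating on a simulated desktop, not a Gemini or Anthropic API call.

```python
# Minimal sketch of a computer-use agent loop (observe -> decide -> act).
# All names here are hypothetical stubs over a simulated desktop state;
# a real system would capture screenshots and call a model for actions.

def capture_screen(state):
    """Stub: return an 'observation' of the current UI state."""
    return {"focused": state["focused"],
            "done": state["step"] >= len(state["plan"])}

def propose_action(observation, state):
    """Stub for the model: pick the next UI action from a scripted plan."""
    return state["plan"][state["step"]]

def execute(action, state):
    """Stub: apply a UI action (click or type) to the simulated desktop."""
    if action["kind"] == "click":
        state["focused"] = action["target"]
    elif action["kind"] == "type":
        state.setdefault("typed", []).append(action["text"])
    state["step"] += 1

def run_agent(state):
    """Loop: observe the screen, ask the model for an action, execute it."""
    while True:
        obs = capture_screen(state)
        if obs["done"]:
            return state
        execute(propose_action(obs, state), state)

# Simulated task: click a search box, then type a query.
state = {"focused": None, "step": 0,
         "plan": [{"kind": "click", "target": "search_box"},
                  {"kind": "type", "text": "Q4 revenue"}]}
final = run_agent(state)
print(final["focused"], final["typed"])  # search_box ['Q4 revenue']
```

The loop terminates when the observation reports the task complete; production systems add safeguards such as step limits, confirmation prompts for destructive actions, and sandboxed execution.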
Google's broader strategy of embedding Gemini across its entire product ecosystem gives it a distribution advantage that no other AI lab can match. With over 2 billion users across Gmail, Drive, Docs, and other Workspace products, Google can deploy Gemini capabilities to an audience that dwarfs the user bases of standalone AI chatbots. The transition from Google Assistant to Gemini on Android devices alone will bring frontier AI capabilities to approximately 1.4 billion active smartphones, fundamentally changing how consumers interact with their devices. The multimodal embedding model is also a strategic asset, enabling enterprise customers to build unified search and retrieval systems that span text documents, images, video archives, and audio recordings — a capability that positions Google Cloud as the preferred platform for organizations with diverse, unstructured data.
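The value of a multimodal embedding model is that documents, images, video, and audio all land in one shared vector space, so a single nearest-neighbor search spans every asset type. The sketch below shows that retrieval pattern with tiny hand-made vectors; a real deployment would obtain embeddings from the embedding model itself (API calls and model names are omitted as deployment-specific).

```python
import math

# Sketch of unified cross-modal retrieval in a shared embedding space.
# The 3-dimensional vectors are hand-made stand-ins for real embeddings.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# One index over heterogeneous assets: because every modality maps into
# the same space, one similarity search covers all of them at once.
index = [
    ("report.pdf",   "text",  [0.9, 0.1, 0.0]),
    ("chart.png",    "image", [0.7, 0.6, 0.1]),
    ("townhall.mp4", "video", [0.1, 0.2, 0.9]),
    ("call.wav",     "audio", [0.2, 0.1, 0.8]),
]

def search(query_vec, k=2):
    """Return the k nearest assets to a query embedding, any modality."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[2]),
                    reverse=True)
    return [(name, modality) for name, modality, _ in ranked[:k]]

# A text query whose embedding lies near the document/image cluster
# retrieves a PDF and an image in the same ranked list.
print(search([1.0, 0.2, 0.0]))
# -> [('report.pdf', 'text'), ('chart.png', 'image')]
```

In practice the brute-force scan would be replaced by an approximate nearest-neighbor index, but the core idea — one query vector ranked against assets of every modality — is unchanged.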
Sources
Google AI Blog, 9to5Google, TechCrunch