# Gemini 4: The First Leaks Are Here

February 17, 2026 — Alessandro Caprai

---

In the world of artificial intelligence, early hints matter almost as much as official announcements. And when it comes to Google, every clue left in the code, every hallway rumor carries considerable weight. That's how we find ourselves talking about Gemini 4: a model that officially doesn't exist yet, but which is already the subject of animated discussion in the rooms where AI's future is decided.

We're not looking at a press release or a presentation on a well-lit stage. We're in the territory of leaks, source-code analysis, and insider speculation. Yet it's precisely this nebulous context that offers a fascinating glimpse into where artificial intelligence is headed, and what Mountain View's true ambitions are.

## Awaiting a New Generation

If we follow the pace at which Google has released previous generations of Gemini, a clear pattern emerges. The company has adopted a roughly annual update cycle, which suggests Gemini 4 could see the light of day between late 2026 and early 2027.

Of course, in Silicon Valley calendars are written in invisible ink, and timelines shift with a thousand variables: competitors accelerating, technical challenges slowing things down, strategic considerations that overturn plans entirely.

But beyond the dates, what really matters is the direction. And the direction emerging from the Gemini 4 rumors is unequivocal: we're moving toward an artificial intelligence that no longer just responds, but acts.

## From Conversational to Agentic

If I had to identify the most significant paradigm shift Gemini 4 seems to promise, I'd find it in one word: agent. No longer an assistant that waits for our questions to formulate answers, however sophisticated.
But a digital entity capable of taking initiative, executing complex sequences of actions, and navigating different systems to achieve goals we define only in general terms.

Booking a flight no longer means receiving a list of options to choose from. It means saying "I need to be in Milan Thursday morning" and finding yourself with a booking made, a hotel selected based on your historical preferences, and perhaps a reminder already added to your calendar. Managing email doesn't mean receiving suggestions on how to respond, but finding responses already sent, meetings already organized, priorities already filtered.

It's an enormous conceptual leap. We're moving from intelligence as a tool to intelligence as a collaborator. And this raises questions that go well beyond technology: how much control are we willing to delegate? How do we define the boundaries of that delegation? What transparency and oversight mechanisms must we build?

## Auto Browse: When AI Navigates for Us

Among the most intriguing features to emerge from code analyzed by curious developers is "Auto Browse." The name is self-explanatory, but the implications are profound. We're talking about an AI that can open Chrome, navigate between tabs, perform searches, scroll through pages, extract information, and compare content, all autonomously.

This isn't simply an extension of search capabilities. It's AI entering our daily digital workspace and inhabiting it as we would. It's an assistant that doesn't just suggest links, but follows them, evaluates their content, and proceeds with navigation until it has completed the assigned task.

Think about the practical implications: market research conducted autonomously across dozens of sites, product comparison across multiple platforms, aggregation of information scattered across different sources. But also think about the ethical implications: an AI that navigates leaves traces, consumes content, and potentially skews site metrics. How does this fit into the web ecosystem?
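To make the "follow links until the task is done" idea concrete, here is a toy sketch of such a control loop. Everything in it is hypothetical: Gemini 4's actual agent interface is unknown, so the real browser is replaced by a small dictionary of fake pages, and the model's judgment by a simple keyword check.

```python
# Toy sketch of an "auto browse" control loop. All names here are
# illustrative inventions, not a real Gemini or Chrome API.
from dataclasses import dataclass, field

@dataclass
class Page:
    text: str
    links: list = field(default_factory=list)

# A tiny fake "web": page name -> content and outgoing links.
FAKE_WEB = {
    "start":     Page("portal with flight deals", ["airline-a", "airline-b"]),
    "airline-a": Page("Milan flight, departs Wednesday night", ["checkout-a"]),
    "airline-b": Page("Milan flight, arrives Thursday morning", ["checkout-b"]),
}

def auto_browse(goal: str, start: str, max_steps: int = 10):
    """Follow links breadth-first until a page satisfies the goal."""
    queue, seen = [start], set()
    for _ in range(max_steps):
        if not queue:
            return None            # dead end: goal not reachable
        url = queue.pop(0)
        if url in seen or url not in FAKE_WEB:
            continue
        seen.add(url)
        page = FAKE_WEB[url]
        if goal in page.text:      # stand-in for the model's judgment
            return url
        queue.extend(page.links)   # keep navigating
    return None                    # step budget exhausted

print(auto_browse("Thursday morning", "start"))  # → airline-b
```

The `max_steps` budget hints at the oversight question raised above: even in a toy loop, an autonomous navigator needs an explicit bound on how far it may wander.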
## Physics as a New Cognitive Frontier

Perhaps the most fascinating aspect to emerge concerns what we might call "physical understanding." According to the rumors, Gemini 4 could integrate physical-world modeling capabilities: understanding object movement, cause-and-effect relationships in video, spatial and temporal dynamics.

This isn't just an incremental evolution. It's AI beginning to build mental models of the real world: understanding that a falling glass breaks, that a pushed ball rolls, that an object hidden behind another doesn't cease to exist. These concepts are so basic to us humans that they seem trivial, but for a machine they represent a qualitatively different form of intelligence.

Project Mariner appears to be the code name behind this capability. And if the leaks are accurate, this could open unprecedented scenarios: robotics that can finally anticipate the consequences of its own actions, automatic video editing that understands the physical narrative of a scene, security systems that flag anomalous behavior because it violates expected physical laws.

## The Mind-Boggling Numbers

And then there are the parameters, those technical specifications that in AI have become a sort of competition over who has the biggest. The speculation, unconfirmed and therefore to be taken with all due caution, speaks of over 100 trillion parameters for Gemini 4.

For context: GPT-4 is estimated to have about 1.7 trillion parameters, and Gemini Ultra, in its current iteration, probably sits in a similar or somewhat higher range. Talking about 100 trillion means imagining a leap of nearly two orders of magnitude. Is it credible? Is it necessary? Is it sustainable?

The truth is that the race for parameters is both a meaningful metric and a misleading fetish. More parameters allow a model to capture more complex patterns, memorize more knowledge, and handle more nuanced tasks.
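A quick back-of-envelope calculation puts the rumored figures in perspective. Both parameter counts are speculative, and the 2-bytes-per-parameter assumption (fp16/bf16 storage, weights only, no activations or optimizer state) is mine:

```python
# Rough scale check on the rumored parameter counts in this article.
import math

def weight_footprint_tb(params: float, bytes_per_param: int = 2) -> float:
    """Terabytes needed just to hold the weights in fp16/bf16."""
    return params * bytes_per_param / 1e12

gpt4_est   = 1.7e12   # rumored GPT-4 estimate, unconfirmed
gemini4_sp = 100e12   # pure speculation for Gemini 4

print(f"1.7T params: {weight_footprint_tb(gpt4_est):.1f} TB of weights")    # 3.4 TB
print(f"100T params: {weight_footprint_tb(gemini4_sp):.1f} TB of weights")  # 200.0 TB
print(f"Gap: {math.log10(gemini4_sp / gpt4_est):.1f} orders of magnitude")  # 1.8
```

Even ignoring training compute entirely, merely storing such a model would run to hundreds of terabytes, which is why the cost side of the ledger matters.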
But they also entail exponential computational costs, prohibitive training times, and energy consumption that raises serious environmental questions. Google, like the other AI giants, is probably exploring architectures that don't simply inflate the parameter count but use it more efficiently. Mixture of experts, sparse attention, conditional computation: all techniques that allow enormous models to activate only the parts relevant to each specific task.

## What All This Really Tells Us

Beyond the technical details, beyond speculation about release dates, a broader narrative emerges from these leaks. It's a narrative about the maturation of artificial intelligence as a technology.

We're leaving the era of AI as novelty, as a demonstration of possibilities. We're entering the era of AI as infrastructure: a system on which to build experiences, services, and products. If Gemini 4 turns out to be what the rumors suggest, it won't be so much a model that does new things as a model that does things in a new way.

The distinction is subtle but crucial. It's no longer about amazing us with unprecedented capabilities, but about integrating those capabilities into the fabric of our digital lives so smoothly that they become almost invisible. The best AI, paradoxically, might be the one we notice least, precisely because it works.

And perhaps this is the real leak we should pay attention to: not the technical details of Gemini 4, but the indication of where the sector is going. Toward an artificial intelligence that's less displayed and more integrated, less conversational and more operational, less reactive and more proactive.

## The Questions That Remain Open

Of course, between rumor and reality there's always a margin of uncertainty. Google could change direction, rename the project, or reshuffle priorities. The leaks could be accurate, partial, or completely misleading. Such is the nature of early information in a sector this competitive and fast-paced.
But even if every single detail that has emerged so far proves inaccurate, the general trajectory is clear. Artificial intelligence is evolving from a consultation tool into an action partner. And that brings technical, ethical, and social challenges we'll have to address collectively.

How do we ensure an AI agent operates according to our values? How do we maintain transparency about what it does autonomously? How do we balance efficiency and control? How do we protect privacy and security when we delegate such broad tasks?

These are questions without simple answers. And they will probably accompany not only the release of Gemini 4, but the entire next phase of artificial intelligence's evolution.

For now, we keep observing the signals, interpreting the leaks, imagining the possibilities. Because if these years of AI revolution have taught us anything, it's that the future arrives faster than we can predict it. And often in forms we hadn't even imagined.