# The creation of AGI is still far off

February 12, 2026 — Alessandro Caprai

---

In recent months, the debate on Artificial General Intelligence (AGI) has taken on almost messianic tones. Between proclamations from visionary CEOs and sensationalist headlines, it seems the technological singularity is just around the corner, ready to materialize at any moment. Yet those of us who work in artificial intelligence every day know well that reality is quite different from the narratives dominating public discourse.

AGI, that form of artificial intelligence capable of understanding, learning, and applying knowledge across any domain with the same versatility as human intelligence, remains a distant horizon. Not for lack of ambition or investment, mind you, but for reasons deeply rooted in the structural and technical limitations of the architectures we define today, perhaps too generously, as "intelligent."

## The mirage of singularity

Talking about the technological singularity has become almost a ritual in Silicon Valley. The idea that artificial intelligence could reach and surpass human intelligence, triggering a cycle of exponential self-improvement, fascinates investors and technologists alike. But this narrative, however evocative, ignores a fundamental truth: our current AIs are not intelligent in the sense we commonly attach to the term.

Large Language Models, despite their impressive linguistic capabilities, remain extraordinarily sophisticated statistical prediction systems. They don't understand the world, don't possess intentionality, and don't build mental models of reality. They process patterns in data with an efficiency that can seem magical, but they remain fundamentally anchored to their probabilistic nature.
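To make this concrete, here is a minimal sketch of the mechanism involved: estimate the probability of the next token from observed data, then sample. The toy bigram model below is of course vastly simpler than a transformer (the corpus and counting scheme are invented for illustration), but the core operation is the same kind of statistical prediction, with no model of the world behind it.

```python
# Toy bigram language model: next-token prediction from raw counts.
# Illustrative only; the corpus is made up and the model is trivially small.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Sample a next token in proportion to how often it followed `prev`."""
    counts = follows[prev]
    if not counts:  # dead end: this token was never seen with a successor
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate a "fluent" continuation with no understanding of cats or mats.
token, out = "the", ["the"]
for _ in range(8):
    token = next_token(token)
    if token is None:
        break
    out.append(token)
print(" ".join(out))
```

Scaled up by many orders of magnitude, with far richer context than a single previous word, this is still prediction over token sequences, which is exactly why fluency alone is weak evidence of understanding.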

## The structural limitations we ignore

When we talk about the limitations of current AI, we're not simply referring to matters of computational power or dataset size. The problem is deeper and concerns the very architecture of these systems.

At their core, neural networks do one thing: they perform mathematical transformations on vectors of numbers. They don't reason in any causal sense, they don't build abstract representations of the world, and they don't possess what we might call "situated understanding." When ChatGPT answers a question about physics, it isn't applying an understanding of physical principles; it's navigating a probabilistic space of token sequences that statistically correlate with correct answers.
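A minimal sketch makes this point tangible. In the toy forward pass below, all shapes and weights are invented and real models are enormously larger, but "answering" still reduces to matrix multiplications, a nonlinearity, and a softmax over scores: transformations of number vectors from start to finish.

```python
# A minimal forward pass: what "answering" means inside a neural network.
# Shapes and values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# An "embedded" input: a sentence already reduced to a vector of floats.
x = rng.normal(size=8)                        # 8-dimensional embedding

W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

h = np.maximum(0, W1 @ x + b1)                # linear map + ReLU nonlinearity
logits = W2 @ h + b2                          # linear map to vocabulary scores

# Softmax turns scores into a probability distribution over next tokens.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)  # the "answer" is just the highest-probability entry
```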

This distinction is not academic pedantry. It's the heart of the problem. Human intelligence emerges from a complex interaction between embodied cognition, sensory experience, episodic memory, the capacity for symbolic abstraction, and causal reasoning. Our AIs, however sophisticated, lack almost all of these elements.

## The question of generalization

One of the arguments most commonly used by proponents of imminent AGI is the models' growing capacity for generalization. It's true that modern systems show surprising transfer learning abilities, applying knowledge learned in one domain to apparently different situations. But this generalization remains superficial, tied to statistical similarity between patterns rather than to a true understanding of underlying principles.

A three-year-old child can grasp the concept of causality, build intuitive theories about the physics of the world, and learn the basics of language with a fraction of the data an LLM requires. This efficiency is not just a matter of better algorithms; it's the result of a radically different cognitive architecture, forged by millions of years of evolution.

## The gap between perception and reality

What makes the current debate on AGI particularly problematic is the gap between public perception of AI capabilities and their real nature. When a system like GPT-4 produces texts that seem thoughtful, it's easy to fall into anthropomorphism, attributing intentionality and understanding where only statistical correlation exists.

This confusion is not harmless. It fuels unrealistic expectations, distorts research priorities, and diverts resources and attention from more immediate, solvable problems. Worse still, it creates a sense of inevitability that paralyzes debate on the ethical and social implications of AI, as if we were helpless passengers on a runaway train hurtling toward the singularity.

## Real progress and false promises

Don't misunderstand me: the progress in artificial intelligence in recent years has been extraordinary. We've created systems capable of tasks that until recently seemed the exclusive domain of human intelligence. Machine translation, image recognition, coherent text generation—all this represents an undeniable qualitative leap.

But these advances, however impressive, are incremental with respect to the goal of AGI. We aren't simply climbing a mountain with ever-faster steps; the true peak, AGI, sits on another mountain range entirely, reachable only by paths we don't yet know.

Current architectures, based on deep learning and transformers, have shown their power but also their intrinsic limitations. Learning requires enormous amounts of data, causal understanding is limited, and robustness to scenarios unseen in the training data remains problematic. These are not bugs to be fixed in the next version; they are structural features of our current implementations.
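The robustness point in particular lends itself to a small demonstration. In the sketch below, the task, model, and numbers are all invented for illustration, and a toy regression is obviously not a language model; still, it shows the basic failure mode: a flexible model that fits its training range well can fail badly just outside it.

```python
# Out-of-distribution brittleness in miniature: fit inside one range,
# evaluate outside it. Task and model are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Training data: x in [0, 3], the only range the model ever sees.
x_train = np.linspace(0, 3, 50)
y_train = np.sin(x_train) + rng.normal(scale=0.05, size=x_train.size)

# A flexible model (degree-9 polynomial) interpolates the data nicely...
coeffs = np.polyfit(x_train, y_train, deg=9)

in_dist = abs(np.polyval(coeffs, 1.5) - np.sin(1.5))
out_dist = abs(np.polyval(coeffs, 6.0) - np.sin(6.0))

print(f"error inside training range:  {in_dist:.3f}")   # small
print(f"error outside training range: {out_dist:.3f}")  # typically huge
```

Distribution shift in deployed deep learning systems is a far richer phenomenon than polynomial extrapolation, but the lesson carries over: good performance on data resembling the training set says little about behavior beyond it.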

## The need for intellectual honesty

As a professional in the field, I feel a responsibility to bring to the public debate a dose of intellectual honesty that is too often missing. Hype about imminent AGI serves the interests of those seeking funding or sensationalist headlines, but it serves neither public understanding nor scientific progress.

We must be clear: we don't know how to build an AGI. It's not a matter of more GPUs, more data, or more parameters. We lack fundamental insights into how to replicate essential aspects of intelligence: causal reasoning, situated understanding, efficient learning, and robust generalization, to say nothing of dimensions like consciousness or intentionality.

This doesn't mean AGI is impossible; it means that any prediction about when and how we'll achieve it is premature. It may require fundamental discoveries in neuroscience, new computational paradigms, or architectures we can't even imagine today.

## Where we should focus attention

While the chimera of AGI captures the imagination, we risk neglecting more immediate and concrete challenges. Today's AI, however far from general, already has a profound impact on society, raising urgent questions about algorithmic bias, privacy, concentration of power, its impact on work, and disinformation.

These are not concerns for a hypothetical future; they are present problems requiring attention, regulation, and research. Focusing obsessively on the singularity risks making us lose sight of the concrete challenges AI already poses.

Moreover, there's an enormous amount of work to be done to make current systems more robust, interpretable, reliable, and fair. These are technical and ethical challenges that deserve our best energies, even if they are less glamorous than the promise of superhuman intelligence.

## A more balanced perspective

What's needed, today more than ever, is a balanced perspective on artificial intelligence. Neither catastrophism nor uncritical techno-optimism, but a sober assessment of what these systems can and cannot do, their benefits and their risks.

Artificial intelligence is a powerful tool, perhaps the most powerful our species has ever created. But it remains a tool: created by humans, for human purposes, with human limitations embedded in its design. Recognizing these limitations is not pessimism; it's realism.

The road to AGI, if we ever travel it, will be long and full of surprises. It will require breakthroughs we cannot predict today and overcoming obstacles we perhaps haven't even identified. Meanwhile, we have a powerful but limited AI, capable of transforming industries and societies, yet far from the form of general intelligence that populates our imaginations.

## Conclusion: the virtue of patience

In an era dominated by speed and hype, patience has become a rare virtue. Yet, facing the challenge of AGI, patience is exactly what's needed. Not the passive patience of waiting, but the active patience of rigorous research, methodical experimentation, incremental progress.

The technological singularity may come one day, or it may not. But it certainly won't arrive simply by scaling current approaches. It will require something fundamentally different: a revolution that is not only quantitative but qualitative in how we conceive and build intelligent systems.

Until then, we have a duty, as researchers and practitioners in the field, to communicate honestly both the possibilities and the limitations of this technology: to resist the temptation of hype, to anchor expectations to technical reality, and to focus on solvable problems rather than chasing futuristic chimeras.

Artificial intelligence is already extraordinary enough without cloaking it in messianic promises. Let's celebrate it for what it is, work to improve it, and regulate it to protect human values. And let's recognize, with the humility science requires, that the path to AGI has yet to be charted.