Artificial general intelligence is not arriving on the next model release. It is not arriving on the one after that. The confusion between what AI actually is and what we imagine it to be has real consequences for how organizations build with it, deploy it, and think about trust, accountability, and risk. At Fabric, we work on the design of behavior: human, organizational, and, increasingly, the behavior of AI inside human contexts. Here is our honest read on where things stand.
Intelligence isn't a free-floating thing
There is a long-running debate in cognitive science about whether intelligence is computational (something you could, in principle, run on any sufficiently powerful substrate) or whether it is inherently embodied. We sit firmly in the embodiment camp, and the evidence for that position is overwhelming.
Human intelligence did not evolve in a vacuum. It evolved in a body, moving through a physical world, navigating social relationships, feeling hunger and fear and belonging and loss. Our capacity for language, abstraction, planning, and empathy all grew out of that substrate. Intelligence is not separable from the kind of thing you are and the kind of world you live in.
The brain is not a computer running a program. It is a biological organ shaped by millions of years of solving problems that only matter if you have a body and other people who can hurt you or help you.
This is not an abstract philosophical point. It has direct implications for what current AI systems can and cannot do — and for why calling them "general" intelligence is a category error.
Intelligence tends to solve the easiest problem available
Something often gets missed in discussions about AI capability: intelligence — biological or artificial — is not primarily about solving the hardest problem. It is about solving the easiest one that gets the job done. Evolution does not design for elegance. It designs for survival, which means exploiting whatever shortcuts are available.
Current AI systems are very good at identifying and exploiting statistical shortcuts in data. That is impressive, and useful. But it also means that what looks like deep understanding is often pattern completion — sophisticated, yes, but brittle in ways that human intelligence is not. When the distribution shifts, when the edge case appears, when the genuinely novel situation arrives, the limits show up fast.
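A toy example makes the brittleness concrete. The sketch below, in Python, is purely illustrative: a curve-fitter is not a language model, but the failure shape is the same. It fits a flexible model to data drawn from a narrow slice of the world, then asks about inputs outside that slice.

    import numpy as np

    # A toy stand-in for pattern completion: fit a flexible curve to data
    # from a narrow range, then query it outside that range.
    # (Illustrative only; not a claim about any particular model.)
    rng = np.random.default_rng(0)
    x_train = rng.uniform(0.0, 1.0, 200)          # the "training distribution"
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, 200)

    coeffs = np.polyfit(x_train, y_train, deg=9)  # fits the seen range closely

    def error(x: np.ndarray) -> float:
        """Mean squared error of the fit against the true function."""
        return float(np.mean((np.polyval(coeffs, x) - np.sin(2 * np.pi * x)) ** 2))

    print(f"in-distribution MSE,      x in [0, 1]: {error(np.linspace(0.0, 1.0, 100)):.4g}")
    print(f"shifted-distribution MSE, x in [1, 2]: {error(np.linspace(1.0, 2.0, 100)):.4g}")
    # The first number sits near the noise floor; the second explodes.
    # The fit never "understood" the sine wave. It completed the pattern
    # inside the region it had seen, and nowhere else.

Nothing in the fit signals that it is about to fail. The shortcut works perfectly right up to the boundary of the data it was built from.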
Real generalization — the kind that lets you apply reasoning from one domain to a structurally different one — is rare across the entire history of intelligent systems, biological or artificial. Specialists dominate. Generalists are the exception.
Robots have bodies. That doesn't make them embodied.
One of the more seductive arguments for AGI optimism is the rise of robotics. Surely a robot that can navigate physical space, manipulate objects, and operate in the real world is acquiring the embodiment that current LLMs lack?
Not really. Or at least, not in any way that closes the gap meaningfully.
The human body is not just a locomotion platform. It is a processing system of staggering complexity. At any given moment, you are running thousands of parallel processes — proprioception, temperature regulation, social signal reading, immune response, hormonal modulation, emotional processing — none of which are conscious, and all of which shape how you think, decide, and relate to others. The sensory data you filter, compress, and act on every second represents a computational load that current robotic systems are not close to matching.
A robot that can pick up an object in a warehouse is solving a narrow version of a narrow problem. It is not experiencing the world. It is parsing it through a thin slice of sensors optimized for a specific set of tasks. The gap between that and human embodied intelligence is not a gap of degree. It is a gap of kind.
AI has no skin in the human game
This is the most important point for anyone thinking about how to deploy AI in real organizations, with real stakes.
Even the most capable agentic AI system — one that can write, analyze, decide, and act — cannot be held accountable. And accountability is not optional in human affairs. It is structural. Human interactions, at every level from interpersonal to institutional, run on a system of mutual accountability. Someone is always answerable. Someone always has something at stake.
An AI can draft a better article than most humans. But a human editor still needs to curate it, stand behind it, and defend the rhetorical choices when another human pushes back. An AI can run a financial model faster and more thoroughly than any analyst. But a human portfolio manager is still accountable when the model is wrong — and it will sometimes be wrong, because no system produces correct outcomes a hundred percent of the time. That jagged outcome edge is permanent. It requires a human in the loop, not as a formality, but as a genuine locus of judgment.
You can delegate a decision to an AI. You cannot delegate accountability for that decision.
This connects back to embodiment. Even a robot that can navigate physical space does not have a relationship with the humans around it in any meaningful sense. It cannot feel the weight of a commitment. It cannot be shamed, trusted, betrayed, or relied upon. The social fabric of human institutions — and Fabric is a company that thinks a lot about fabric, in every sense — is woven from exactly those relational threads. AI, however capable, is not part of that weave. It is a tool that operates within it.
There is also the question of explainability. Part of what it means to be accountable is being able to explain yourself — to articulate why you made the decision you made, in terms another person can interrogate, challenge, and understand. Current AI systems are getting better at generating explanations, but those explanations are often post-hoc rationalizations of opaque processes. That is a problem not just for governance, but for the trust architecture that human collaboration depends on.
So what are we actually dealing with?
Advanced narrow intelligences. That is the honest label. Not general intelligence. Not a replacement for human cognition. Very capable systems that, within well-defined domains with sufficient training data, can outperform humans on specific tasks — sometimes dramatically so.
That is not nothing. It is a remarkable set of tools. The mistake is not in using them. The mistake is in misunderstanding what they are — and consequently, misallocating trust, authority, and accountability in ways that create fragility rather than resilience.
Where things get genuinely interesting, and genuinely unpredictable, is at the intersections. When you combine multiple narrow intelligences across domains, emergent capabilities can appear that were not predictable from examining any single system on its own. Those intersections are worth watching closely and designing around carefully. The possibilities are real. They are not the same as general intelligence, and conflating the two leads to both overconfidence and misplaced fear.
What this means for how we design
At Fabric, we think the most productive frame for working with AI is not replacement but reconfiguration. Decide which human decisions actually benefit from AI augmentation. Map where the jagged edge of AI error creates unacceptable risk. Name where accountability is non-negotiable, and design the human-in-the-loop deliberately rather than as an afterthought. Build systems where the AI handles the pattern-matching and the human keeps the judgment; a sketch of what that last move can look like follows below.

These are behavior design questions, organizational design questions, and, at the deepest level, questions about what we think intelligence actually is and what we want it to do for us. The answers matter. AI may well help us reach them, but only if we shape the AI to do that work, instead of waiting for it to arrive on its own.
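To close with something concrete: here is a minimal sketch, in Python, of what a deliberately designed loop can look like. Every name in it (model, review_fn, risk_threshold) is a hypothetical placeholder rather than a real API, and a model's self-reported confidence is itself an imperfect signal, so where that score comes from is a design question of its own. The shape is the point: the model proposes, a risk threshold marks where its jagged edge becomes unacceptable, and a named human owns the outcome on both paths.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Decision:
        recommendation: str
        confidence: float
        accountable_owner: str   # always a named human; accountability is never delegated
        human_reviewed: bool

    def decide(case: str,
               model: Callable[[str], tuple[str, float]],
               review_fn: Callable[[str, str], str],
               owner: str,
               risk_threshold: float = 0.9) -> Decision:
        """Route a case through the model, escalating to active human
        review wherever model confidence falls below the risk threshold."""
        recommendation, confidence = model(case)
        if confidence >= risk_threshold:
            # High confidence: the model's pattern-matching stands,
            # but a named human still owns the outcome.
            return Decision(recommendation, confidence, owner, human_reviewed=False)
        # Low confidence: judgment reverts to the human before anything ships.
        final = review_fn(case, recommendation)
        return Decision(final, confidence, owner, human_reviewed=True)

Note what does not change between the two branches: the accountable owner. Escalation changes who exercises judgment on a given case, never who answers for it.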