Written by Ashwin Rajan · 7 min read

The Accountability Restructure

Your peers are restructuring around AI. The 55% who already regret it are telling you something — and it isn't that they moved too slowly.

If you are a leader being asked to greenlight an AI-driven restructuring, defend an AI roadmap on an earnings call, or explain to a town hall why some teams are flying with these tools and others are stalling — this essay is for you. There is a gap opening up between the AGI story being sold upward to boards and the advanced-narrow reality being lived downward by the people doing the work. That gap is where the expensive mistakes are happening right now.

We've made the philosophical case for why these systems are structurally narrow elsewhere — see Embodied Minds, Narrow Machines. Here's what that means if you're the one signing the org chart.

The story your peers are telling is already breaking

The dominant executive narrative of the last eighteen months has been some version of we are restructuring around AI. Headcount comes down, compute goes up, the business case closes on labour substitution. It is a clean story. It is also not surviving contact with operating reality.

Uber burned its full-year AI budget in four months. Gartner's read of the cohort that cut headcount for AI is that it is not outperforming the cohort that didn't. The NVIDIA VP responsible for the compute these companies are buying has said out loud that the compute now costs more than the employees it was meant to replace.

80% of companies that cut jobs for AI saw zero improvement in returns. 55% of employers already regret AI-driven cuts. Gartner expects half of them to rehire under new titles by 2027.

CommBank reversed 45 AI-driven layoffs after realising the roles weren't redundant — a quiet, public, expensive admission that the business case had been built against a system that didn't exist. None of this is an argument against AI. It is an argument against the wrong AI. The systems your peers are restructuring around are not the systems they think they are restructuring around.

What's actually shipping is advanced narrow intelligence

The honest label for current systems — frontier LLMs, agentic stacks, the lot — is advanced narrow intelligence. Extraordinary inside well-defined domains with sufficient training data. Brittle the moment the distribution shifts, the edge case appears, or the genuinely novel situation arrives. The jagged edge of error is not a temporary engineering problem to be patched out in the next release. It is a structural property of how these systems work.

What this lets you say out loud, in a board update or a town hall: we are deploying advanced narrow intelligence — exceptional inside its lane, structurally unaccountable outside it. We are designing accordingly. That sentence ages well. We are restructuring around AI does not.

Why uneven adoption is the signal, not the failure

One of the most common complaints we hear from leaders is that AI adoption is uneven — pockets of enthusiasm, pockets of refusal, no coherent picture of where these tools actually belong. The instinct is to read this as a change-management problem and respond with mandates, dashboards, and adoption targets.

It is the wrong read. Uneven adoption is exactly what you should expect from advanced narrow intelligence. These systems fit some workflows and not others. They multiply capacity in tasks with stable structure, abundant training data, and tolerable error rates; they create fragility in tasks where the distribution is shifting, the stakes are high, or accountability is non-negotiable. Even adoption is not the goal — even adoption is the warning sign. Uber's four-month budget burn is what horizontal mandates look like in production.

The leadership move is workflow-fit diagnosis, not adoption velocity. Where on your map of work do narrow systems pay off? Where do they introduce risk you can't see until the jagged edge cuts? That question gets you a defensible roadmap. Roll it out everywhere by Q3 gets you Uber's number.

Why your best people are pushing back

The second pattern leaders misread is internal resistance. When a senior engineer, a senior clinician, a senior analyst pushes back on an AI rollout, the default interpretation is fear of change — stubbornness dressed up as principle. Sometimes that is what it is. Often it is not.

More often, the resisters are noticing exactly the accountability gap an executive briefing has not yet made room for. They can feel that the system is structurally unable to carry the consequences of its own decisions in their domain, and they are the ones who will be left holding the bag when it goes wrong. They usually do not have the vocabulary for it. The CommBank reversal is what happens when leadership ships before that vocabulary catches up: the resisters were right, the roles were not redundant, and the reversal was expensive and public.

Treat senior pushback as signal, not friction. The question to ask is not how do we get them on board? but what are they seeing about the accountability surface that our business case isn't? The answer is almost always specific, useful, and free.

You cannot delegate accountability

Even the most capable agentic system — one that can write, analyse, decide, and act — cannot be held accountable. Accountability is not a feature you can ship. It is structural. Human institutions, from a two-person team to a regulated enterprise, run on a system of mutual accountability: someone is always answerable, someone always has something at stake.

An AI can draft a better memo than most humans. An editor still has to stand behind it. An AI can run a financial model faster than any analyst. A portfolio manager is still accountable when it is wrong — and it will sometimes be wrong, because no system is correct a hundred percent of the time. The jagged edge is permanent. The human in the loop is not a formality; it is the locus of judgment the institution actually runs on.

You can delegate a decision to an AI. You cannot delegate accountability for that decision.

This is the load-bearing reason the headcount-substitution business case keeps falling over. You are not removing a worker; you are removing the carrier of accountability for that workflow. The work of carrying accountability does not disappear. It moves — usually upward, usually onto people who do not yet know it has landed on them.

The story that survives contact with reality

Jensen Huang put it more usefully than most: a $500,000 engineer should be consuming $250,000 of tokens. Token budgets, in his framing, are becoming the fourth leg of compensation. That is the right shape for the business case. AI is a multiplier on people who can carry accountability, not a substitute for them. The companies that will look good in 2027 are the ones whose 2026 narrative was we are reconfiguring around advanced narrow systems and the people who direct them — not the ones rehiring under new titles to undo last year's cuts.
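The arithmetic behind the two framings is worth making explicit. The sketch below contrasts a headcount-substitution business case with a capacity-multiplier one; the $500K salary and $250K token budget come from the Huang framing above, but the team size, compute cost, and the 1.8x capacity multiplier are illustrative assumptions, not data from the essay's sources.

```python
# Illustrative only: salary and token figures follow the Huang framing
# quoted above; the multiplier, team size, and compute cost are assumed.

def substitution_case(heads_cut, salary, compute_cost):
    """Headcount-substitution framing: salaries removed minus compute added."""
    return heads_cut * salary - compute_cost

def multiplier_case(heads, salary, token_budget, capacity_multiplier):
    """Capacity-multiplier framing: extra output minus token spend,
    with every head retained as the carrier of accountability."""
    extra_output = heads * salary * (capacity_multiplier - 1)
    return extra_output - heads * token_budget

# A team of 10 engineers at $500K each.
# Substitution case: cut 5 heads, buy $3M of compute to replace them.
print(substitution_case(5, 500_000, 3_000_000))        # negative: case fails

# Multiplier case: keep all 10, give each a $250K token budget,
# assume an illustrative 1.8x output multiplier.
print(multiplier_case(10, 500_000, 250_000, 1.8))      # positive: case closes
```

The point of the sketch is not the specific numbers — it is that the substitution case only closes if the system can actually replace the worker, accountability included, while the multiplier case closes on a far weaker assumption: that narrow tools make accountable people more productive.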

What this lets you say out loud: we are using advanced narrow intelligence to multiply the capacity of the people who are accountable for outcomes. We are not using it to replace them, because the system cannot carry what they carry. That is a story a CFO recognises, a town hall can absorb, and an earnings call can defend. It also happens to be true.

What to stop doing, what to start doing

Three things worth stopping.

  • Restructure-around-AI narratives — the cohort telling them is already underperforming and already regretting the cuts.
  • Headcount-cut business cases — they are priced against a system (general, accountable AI) that is not the one you are deploying.
  • Horizontal adoption mandates — they guarantee Uber's burn rate without Uber's upside.

Three things worth starting.

  • Workflow-fit diagnosis — a deliberate map of where narrow systems multiply capacity and where they introduce fragility.
  • Accountability mapping — for every workflow you are putting AI into, name the human who carries the consequence and design the loop around them, not around the model.
  • Capacity-multiplier framing — make the business case in tokens-per-person, not heads-removed.

These are behaviour design questions and organisational design questions before they are technology questions. The leaders who treat them that way are the ones whose AI story will still be standing in eighteen months.

References

  1. Uber CTO on burning the 2026 AI budget in four months — Benzinga summarising The Information, Apr 2026.
  2. NVIDIA VP Bryan Catanzaro on compute costing more than employees — Fortune, Apr 28 2026.
  3. Gartner on AI-driven layoffs not delivering returns (around 80% of orgs piloting autonomous tools) — Gartner press release, May 5 2026.
  4. Gartner forecast that half of companies cutting customer-service staff for AI will rehire by 2027 — Gartner press release, Feb 2 2026.
  5. 55% of businesses regret AI-driven redundancies — Orgvue survey, Apr 29 2025.
  6. Commonwealth Bank reverses 45 AI-driven job cuts — ABC News, Aug 21 2025.
  7. Jensen Huang on $500K engineers consuming $250K of tokens — Business Insider, Mar 17 2026.

Want to talk about this?

If something here resonated, challenged you, or sparked a counter-argument — we'd love to hear it. We read everything.