
The Delegation Stack

agents · infrastructure · ai-native · delegation stack

There’s another take I’d like to debunk. After writing about Delegation UI, I keep seeing a similar doomer sentiment everywhere: “apps are dead,” “agents are going to replace apps.”

I don’t think that’s what’s happening. Apps aren’t dying; we’re in a transition where their role is shifting. So I think it’s worth cutting through the panic and noise and taking a closer look.

Here’s the mental model I’ve been working with. I think we’re seeing three layers take shape in what I’m calling the Delegation Stack:

1. The Orchestration Layer

This is where the agent lives. It talks to you, understands what you want, breaks it into tasks, and coordinates across applications to get it done. The human is here, at the top, giving direction.

Think of it as the part of the stack that handles intent.

You can already see it taking shape — OpenClaw, NVIDIA’s NeMo Agent, Anthropic’s Dispatch. Each one a different take on the same idea: an agent that sits above your apps and acts on your behalf across all of them.

2. The App Layer

Apps aren’t dead. But the assumption baked into almost every app is.

The assumption that the primary user of your software is a human, navigating it manually. That no longer holds.

Every traditional software surface (websites, mobile apps, SaaS tools, enterprise platforms) is now being asked to serve two kinds of users: the humans who’ve always been there, and the agents acting on their behalf. Most of them were never designed for the second kind.

This is what I mean when I say the role is shifting. The service underneath doesn’t disappear. But to stay relevant, it needs to become legible to agents, not just humans. That means structured outputs, machine-readable states, and APIs that expose intent-level actions, not just UI flows a human can click through.
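To make that concrete, here’s a minimal sketch of what an intent-level action can look like. Everything in it (RefundRequest, refundOrder, the policy thresholds) is illustrative, not any particular product’s API; the point is that the agent gets one structured call and a machine-readable result instead of replaying the click path a human would take.

```typescript
// Hypothetical intent-level action: "refund this order" as one structured call.

interface RefundRequest {
  orderId: string;
  amountCents: number;
  reason: "damaged" | "late" | "customer_request";
}

interface RefundResult {
  status: "completed" | "pending_review" | "rejected";
  refundId?: string;
  // Machine-readable state: the agent can branch on this without parsing a UI.
  nextAction?: "await_manual_review" | "notify_customer";
}

function refundOrder(req: RefundRequest): RefundResult {
  // Domain logic the app already encodes: validation, limits, policy.
  if (req.amountCents <= 0) {
    return { status: "rejected" };
  }
  if (req.amountCents > 50_000) {
    return { status: "pending_review", nextAction: "await_manual_review" };
  }
  return {
    status: "completed",
    refundId: `rf_${req.orderId}`,
    nextAction: "notify_customer",
  };
}

console.log(refundOrder({ orderId: "1042", amountCents: 2500, reason: "late" }));
```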

There’s also an economic dimension here that I don’t think gets enough attention. My intuition is that an agent using an existing app, tapping into years of encoded domain logic, validation, and workflows, is significantly cheaper in tokens than asking it to reason through the same problem from scratch. The app becomes a shortcut. That’s part of why agent skills exist: a SKILL.md tells the agent exactly what a tool does and when to use it, so it doesn’t have to figure it out through expensive trial and error. The apps that invest in becoming agent-legible aren’t just surviving; they’re becoming more valuable infrastructure in an agent-first world.
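A SKILL.md itself is a markdown file with instructions, so what follows is not its actual format; it’s just the same idea sketched as a plain descriptor an orchestration layer could hand to an agent. Every field name and value here is made up for illustration.

```typescript
// Illustrative skill descriptor: tells the agent what a tool does and when to reach for it,
// so it doesn't burn tokens rediscovering that through trial and error.

interface SkillDescriptor {
  name: string;
  whatItDoes: string; // spares the agent from inferring behavior by experiment
  whenToUse: string;  // spares tokens: the decision doesn't get re-derived every time
  entrypoint: string; // the intent-level action to call (like refundOrder in the sketch above)
}

const refundSkill: SkillDescriptor = {
  name: "refund-order",
  whatItDoes: "Issues a refund against an existing order, applying store policy and limits.",
  whenToUse: "Use when a customer explicitly asks for money back on a specific order.",
  entrypoint: "refundOrder",
};

console.log(refundSkill);
```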

Some incumbents will make this transition. Others won’t. The ones that can’t adapt will be replaced by AI-native competitors who build for agent legibility from day one, the same way Encarta, Microsoft’s CD-ROM encyclopedia, didn’t survive Wikipedia. The category survived. The product didn’t. (Shout out to all my millennials.)

This is also where the Delegation UI argument goes one level deeper. If the interface paradigm is shifting from navigation to delegation, the infrastructure underneath has to shift too.

Delegation UI is what the human sees. The App Layer becoming agent-legible is what has to exist underneath it for any of that to work.

3. The Enforcement Layer

This is the part that’s only beginning to be built seriously, and almost never inside the application itself, where it actually matters.

Oversight primitives baked into applications. Gates the agent still has to pass through before an irreversible action commits. They live at the transaction layer, not the navigation layer.

An agent can bypass a nav menu. It can skip screens, ignore flows, go straight to the action. But it can’t bypass a gate that sits directly on the action itself. That’s transaction-level enforcement the agent cannot route around.
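Here’s a rough sketch of what a gate on the action itself could look like. Everything in it (wireTransfer, ApprovalStore, the single-use approval) is hypothetical; what matters is that the only code path that commits the irreversible action is the one that demands a valid approval.

```typescript
// Hypothetical transaction-level gate: the check lives on the action, not on a screen.

type Approval = { actionId: string; approvedBy: string; expiresAt: number };

class ApprovalStore {
  private approvals = new Map<string, Approval>();

  grant(a: Approval) {
    this.approvals.set(a.actionId, a);
  }

  consume(actionId: string): Approval | undefined {
    const a = this.approvals.get(actionId);
    if (!a || a.expiresAt < Date.now()) return undefined;
    this.approvals.delete(actionId); // single-use: one approval, one action
    return a;
  }
}

const approvals = new ApprovalStore();

// The irreversible action refuses to commit without a valid approval on record.
// Since no other code path moves money, skipping screens doesn't help the agent.
function wireTransfer(actionId: string, toAccount: string, amountCents: number) {
  const approval = approvals.consume(actionId);
  if (!approval) {
    throw new Error(`Transfer ${actionId} blocked: no valid human approval on record`);
  }
  console.log(`Sending ${amountCents} cents to ${toAccount}, approved by ${approval.approvedBy}`);
}
```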

Worth being clear here: not every vertical needs this. A fully automated image pipeline doesn’t need a gate. But anywhere an agent can move money, send a communication, or modify a record that can’t be undone, that’s where the enforcement layer is the difference between useful autonomy and a liability.

The human going async changes everything

Computer-use agents like OpenClaw started this. You delegate a task and the agent goes off and executes it. Same thing with Claude’s Dispatch feature: you text your computer from your phone, put it in your pocket, and walk away.

The agent works while you’re in a meeting. While you’re asleep.

The original assumption behind any oversight model is that someone is watching. That assumption is already broken.

This means the enforcement layer can’t just be a blocking modal that waits for a click. The human isn’t there to click. The gate has to find them, push a notification, send an out-of-band request, reach them wherever they actually are. The approval has to come to the human, not the other way around.
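One way to sketch that, assuming some push channel exists (mobile push, SMS, a chat bot), none of which this toy code actually implements: the gate sends the request out-of-band, then waits for the human’s decision with a default-deny timeout.

```typescript
// Hypothetical out-of-band approval: the request goes to the human, the gate waits.

type Decision = "approved" | "denied";

const pendingDecisions = new Map<string, Decision>();

// Stand-in for a real push channel (mobile push, SMS, chat bot).
function sendPush(userId: string, message: string) {
  console.log(`[push to ${userId}] ${message}`);
}

// Called when the human taps "Approve" or "Deny" wherever they actually are.
function recordDecision(requestId: string, decision: Decision) {
  pendingDecisions.set(requestId, decision);
}

async function requestOutOfBandApproval(
  userId: string,
  requestId: string,
  summary: string,
  timeoutMs = 60_000,
): Promise<Decision> {
  sendPush(userId, `Agent wants to: ${summary}. Approve or deny?`);
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const decision = pendingDecisions.get(requestId);
    if (decision) return decision;
    await new Promise((r) => setTimeout(r, 500)); // poll; a real system would use a callback
  }
  return "denied"; // default-deny when the human never responds
}

// Usage: the agent blocks here while the human is in a meeting or asleep.
setTimeout(() => recordDecision("req_1", "approved"), 2_000);
requestOutOfBandApproval("user_42", "req_1", "wire $5,000 to vendor").then(console.log);
```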

Why the enforcement layer gets more important as agents get better

The instinct is to think: smarter agents, less oversight needed. I think that’s exactly backwards.

More capable agents can initiate more consequential actions. The scope of what they can do, and what they can get wrong, scales with their capability. The Orchestration Layer gets better, the App Layer adapts, and the Enforcement Layer is the only part of the stack that stays accountable to human intent at the moment it actually matters.

If it doesn’t exist, nothing else in the stack compensates for it.

Anthropic hints at this in the safety guidance around Dispatch-style workflows, warning that mistakes can cascade into actions that are difficult or impossible to undo. I read that less as a generic disclaimer and more as a sign that autonomous execution has outpaced application-level enforcement in sensitive domains.

The irreversibility problem doesn’t go away when the human walks away from the screen… you guessed it, it gets worse!

So yeah, the stack is already here. The question is whether the enforcement layer gets built before the market learns why it was necessary.