
Delegation UI

ui · agents · agentic experience

"Frontend/UI is dead" is a sentiment that's been growing with every new AI coding tool that ships.

I think that framing misses what's actually happening.

What's actually "dying" is navigation-first UI, the assumption that an interface exists to help a user drive software manually. That assumption held for thirty years. It no longer holds with AI agents.

Two UI paradigms have defined how humans interact with software, but we're watching a third emerge in real time.

Command line UI (1960s–1984): You told the machine exactly what to do. The interface was literally just a prompt.

Navigation UI (1984–present): You drove software through menus, flows, and screens. The interface was a vehicle. This is the paradigm that's actually "dying".

Delegation UI (2023–): You describe an outcome and an agent pursues it. The interface is no longer a vehicle you drive; it's more like a cockpit you monitor. Instead of clicking through steps, you're setting intent, reviewing decisions, and maintaining oversight.

Think about booking a flight. Under Navigation UI, you open a browser, go to a site, enter dates, filter results, compare options, click through a checkout flow. The interface is a little vehicle you are driving through the whole process.

Under Delegation UI, you tell an agent: "Find me the cheapest flight to London next Thursday that gets in before noon, and book it if it's under $600." The agent searches, compares, and acts. Your job is no longer to drive. Your job is to make sure it doesn't do something you didn't intend.

That's a completely different relationship with software. And it needs a completely different kind of interface.

Let's go back to that cockpit analogy real quick! From what I understand, a cockpit doesn't give you direct control over every system; instead, it gives you instrumentation.

You're not out there moving the flaps manually lol

But you have readings that tell you what the plane is doing and controls that let you intervene when something's wrong. The point is to give the pilot the right information at the right moment to catch what the autopilot can't.

So I think that's exactly what Delegation UI needs to be: A monitored transfer of execution, with the human still accountable for the outcome. Not a handoff.

Every component library currently out there has been built for the navigation world. Delegation UI has none of that: no equivalent vocabulary, no shared answer to what "a thing that lets you trust an agent" even looks like.

And that gap has real costs. Right now, developers building agentic products mostly wing it: they wire together basic modals and confirmation dialogs, UI patterns designed for human-initiated actions, and hope they work for agent-initiated ones. If they think about oversight at all.

At the time of writing this, there's no standard way to show a user what an agent is about to do before it does it. No established pattern for letting someone pause a multi-step run mid-execution. No component that distinguishes between "this agent action is reversible" and "this one isn't." Developers are making up bespoke solutions to problems that every agentic product is going to face.
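To make that last gap concrete, here's a minimal sketch of what an agent-action preview could carry so a UI can tell reversible steps from irreversible ones and gate the latter behind explicit approval. Every name here (`ProposedAction`, `needsApproval`, the example actions) is hypothetical, not an API from Depute or any existing library:

```typescript
// Hypothetical shapes -- not from any existing library.
// An action the agent proposes before executing it.
interface ProposedAction {
  description: string; // human-readable summary shown to the user
  tool: string;        // which tool the agent wants to invoke
  reversible: boolean; // can this be undone after the fact?
}

// Decide whether the UI must block and ask before execution.
// Reversible actions can run immediately; irreversible ones
// (payments, deletions, sends) need an explicit approval prompt.
function needsApproval(action: ProposedAction): boolean {
  return !action.reversible;
}

const draftEmail: ProposedAction = {
  description: "Draft a reply to the airline",
  tool: "compose",
  reversible: true,
};

const bookFlight: ProposedAction = {
  description: "Book the LHR flight for $580",
  tool: "payments",
  reversible: false,
};

console.log(needsApproval(draftEmail)); // false -- safe to auto-run
console.log(needsApproval(bookFlight)); // true  -- surface an approval prompt
```

The point isn't the two-line function; it's that the reversibility distinction has to exist in the data model before any component can render it.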

That's what a missing vocabulary looks like in practice.

Sooo I decided to tackle this by building an open-source React component library for delegation UI called Depute (a verb meaning to appoint or delegate).

Things like a way to approve or reject an agent's next move before it happens. A way to see what tools it's using and why. A way to stop it mid-run. A way to watch multiple agents coordinate in real time and know which ones are doing what. None of that exists in any component library today.
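As one illustration of the "stop it mid-run" piece, here's a rough sketch (hypothetical, not Depute's actual API) of a multi-step run that checks a cancellation signal between steps, which is the hook a pause/stop control in the UI would need:

```typescript
// Hypothetical sketch of a cancellable multi-step agent run.
// A "Stop" button in the UI would call controller.abort().
async function runSteps(
  steps: Array<() => Promise<void>>,
  signal: AbortSignal,
): Promise<string[]> {
  const completed: string[] = [];
  for (const [i, step] of steps.entries()) {
    // Check between steps so a stop takes effect at the next
    // boundary instead of tearing down work mid-action.
    if (signal.aborted) return completed;
    await step();
    completed.push(`step-${i}`);
  }
  return completed;
}

const controller = new AbortController();
const steps = [
  async () => { /* search flights */ },
  async () => { controller.abort(); /* user hits Stop here */ },
  async () => { /* book flight -- never reached */ },
];

runSteps(steps, controller.signal).then((done) => {
  console.log(done); // ["step-0", "step-1"] -- run halted before step 2
});
```

Stopping at step boundaries is a design choice: an agent mid-payment is exactly the moment you don't want a hard kill, so cancellation lands between actions, not inside one.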

The agentic experience field is early and the components don't exist yet.

depute.dev: open source, zero dependencies.