Vol. I · Issue 1 · May 2026 · Singapore · By Boon Kgim Khur
ZENAI

Expert in the Loop: the only AI operating model that compounds.

The thesis behind every workshop and every project on this site. Why the junior-employee model fails — and what replaces it.

The default mental model for AI deployment in 2026 is wrong. Most companies treat agents like junior employees — delegate a task, audit the output, fix it, repeat. The model is intuitive because it maps to how we already manage humans. It also fails, predictably, in three ways.

Failure 1 — juniors escalate; agents guess. A junior employee asks when they’re not sure. An AI agent guesses. The cost of that guess shows up downstream: brand voice drift, the wrong customer pulled from a CRM, a hallucinated stat in a board deck. By the time a senior catches it, the cost has already compounded.

Failure 2 — the audit eats the leverage. If a senior reviews every output, the agent isn’t saving time — it’s shifting work from drafting to reviewing. For most knowledge work, reviewing is harder than drafting because you’re context-switching, not flowing. Companies report hours saved while their seniors quietly drown.

Failure 3 — the model doesn’t scale. You can’t 10× your output by 10×ing the agents, because each agent needs senior oversight at the same rate. The cost curve goes up faster than the throughput.

The alternative — the model I’ve used to ship at Hashmeta, nanogent, and Learn Parrot — is Expert in the Loop.

The expert stays in the loop. The agent handles the surface area. Drafting, scanning, ranking, reformatting, escalating — these are surface area. Judgment about which draft to ship, which customer fits, what to write next — that’s the loop.

There are three mechanics:

  1. Stop-points are designed in. Every workflow has 1–3 stop-points where the expert is required. Not optional.
  2. Skills are auditable. Each agent skill is a function with a contract — input, output, success condition. You can read it; you can change it; you can replay it.
  3. The expert’s judgment shapes the next loop. At each stop-point the expert either approves the output or feeds a correction back into the skill. Over time the skill compounds. The expert’s judgment compounds with it.
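A skill-as-contract can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the names (`Skill`, `draft_brief`) and the lambda standing in for an agent call are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    """A hypothetical agent skill: a function with an explicit contract."""
    name: str
    run: Callable[[str], str]        # input -> output (stand-in for an agent call)
    success: Callable[[str], bool]   # success condition the output must satisfy
    log: list = field(default_factory=list)  # every run recorded, so it's replayable

    def __call__(self, prompt: str) -> str:
        output = self.run(prompt)
        self.log.append((prompt, output))  # auditable: you can read and replay each run
        if not self.success(output):
            # contract violated: this is a stop-point, the expert is required
            raise ValueError(f"{self.name}: stop-point, expert review required")
        return output

# Illustrative contract: a brief must name the (hypothetical) client "Acme".
draft_brief = Skill(
    name="draft_brief",
    run=lambda topic: f"Brief: {topic} for Acme",
    success=lambda out: "Acme" in out,
)
print(draft_brief("Q3 launch"))  # passes the contract and is logged
```

You can read it, change the `success` condition, and replay the `log` entries, which is the whole point of the contract.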

What it looks like in practice:

  • Hashmeta SEO. Strategist agent runs. I review the brief at stop-point 1 — 90 seconds. Writer agents run. I review the final draft at stop-point 2 — 5 minutes. The system publishes. Throughput: 10 articles/week. Senior time: ~1 hour/week.
  • nanogent customer support. Triage agent runs. I review the escalation queue at stop-point 1 — 10 minutes/day. Resolution agents run for green-flagged tickets. Stop-point 2 only fires on novel categories. Throughput: 100+ tickets/day. Senior time: ~30 minutes/day.
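The nanogent routing logic above reduces to a few lines: green-flagged (known) categories go straight to resolution agents, and stop-point 2 fires only on novel categories. A sketch, with illustrative category names:

```python
# Categories the triage agent has already been trained/reviewed on (illustrative).
KNOWN_CATEGORIES = {"billing", "password_reset", "shipping"}

def route(ticket_category: str) -> str:
    """Route a ticket: known categories skip the expert entirely;
    a novel category fires stop-point 2 and escalates."""
    if ticket_category in KNOWN_CATEGORIES:
        return "resolution_agent"   # green-flagged, no senior time spent
    return "expert_review"          # stop-point 2: expert in the loop

print(route("billing"))               # routine ticket, handled by the agent
print(route("gdpr_erasure_request"))  # novel category, escalates to the expert
```

This is why senior time stays flat as volume grows: the expert only sees tickets the system has never categorised before.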

The key word is compounds. The junior-employee model is linear: every doubling of output requires roughly a doubling of audit. Expert in the Loop is non-linear: as the skills mature, the expert’s time per output drops. By month 6, the expert is shaping new skills, not auditing old ones.

This is also why AI is an amplifier, not an equaliser. Expert in the Loop only works if there’s an expert. A novice running the same setup doesn’t get leverage — they get speed-to-error. The thesis cuts both ways: companies with expertise compound; companies without it churn through tools.

What to do this week:

  • If you’re a domain expert running a junior-employee-style deployment, redesign one workflow with stop-points and skill contracts.
  • If you’re hiring before you’ve tried Expert in the Loop, you’re paying a tax.
  • If you’re a non-technical operator, CC-1 — Foundations of Claude Code is the four-hour version of this article with your hands on the keyboard.

The model isn’t new; the thesis isn’t new. What’s new is that the agent stack is finally good enough to make the loop close in a single afternoon. So close it.