
Deliberation-in-the-loop

Collective mandates
for agentic AI.

Deliberate helps groups articulate, contest, and assent to a shared mandate an AI agent can actually follow. The result is an agent that acts on a decision, not an assumption.

v0.4 protocol · April 2026 · Research preview
Nine voices → one mandate · live

The Ridley family trip

Mandate

Coastal, 5 nights, Jun 12–17. Budget ≤ £2,400. Mom prefers rail, Theo needs step-free access. Split cabins over shared villa.

Assented · 4 / 4

01 / Manifesto

Why it exists

AI agents are increasingly asked to act on decisions that belong to groups.

A family's trip. A team's hire. A neighbourhood's plan. Today, agents often act on one person's prompt and guess at everyone else's preferences. One voice ends up speaking for many who never got to agree.

Deliberate gives the group a way to produce an assented, legible mandate first. The agent then acts on what people actually chose together, not on what one of them assumed on everyone's behalf.

“An agent is only ever as accurate, safe, and legitimate as the mandate it acts on.”

02 / Positioning

What we are

A deliberation-in-the-loop layer between groups and AI agents.

The output is a collective mandate. Not a summary or a transcript — a structured artefact an agent can act on and a group can read and change.
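To make "a structured artefact an agent can act on" concrete, here is one hypothetical shape such a mandate could take, using the Ridley trip as the example. The field names, and the two family members beyond Mom and Theo, are illustrative placeholders, not Deliberate's actual schema.

```python
# Hypothetical sketch of a collective mandate as a structured artefact.
# Field names are illustrative; this is not Deliberate's actual schema.
from dataclasses import dataclass

@dataclass
class Mandate:
    constraints: dict    # hard limits the agent must never break
    tradeoffs: list      # ordered preferences for when values conflict
    escalate_when: list  # conditions that return the decision to the group
    assented_by: list    # who actually agreed to this mandate

# The Ridley trip from the example above, in structured form.
# Names beyond Mom and Theo are placeholders.
ridley_trip = Mandate(
    constraints={"nights": 5, "budget_gbp": 2400, "access": "step-free"},
    tradeoffs=["split cabins over shared villa", "rail over flying"],
    escalate_when=["budget cap at risk", "step-free access at risk"],
    assented_by=["Mom", "Theo", "parent_2", "sibling_2"],
)
```

Unlike a transcript, every field here is something the group can read, contest, and change, and something the agent can check against before it acts.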

What changes in practice

The point is not to stuff more detail into a prompt. It is to give the agent shared rules for acting, trading off, and asking, so that when the world changes it knows what belongs to its discretion and what belongs back with the group.

Simplified trace

Plan a five-night Lisbon trip for two: £1,700 total, flights already booked.

Clearer delegation

Brief-only agent

A thin brief gives the agent taste and direction, but not much guidance for what to do when the plan stops being straightforward.

Thin brief
01

Starts from a broad brief: relaxed pace, good food, a comfortable stay, not luxury.

brief received

02

Builds a plan that seems to fit the vibe and still looks affordable.

first plan works

03

Books the core stay and a few anchor moments for the trip.

trip assembled

04

The booked stay falls through shortly before departure.

same disruption

05

A replacement exists, but it costs more and changes the shape of the trip.

authority unclear

06

Stops and asks, because the brief never clearly said whether this kind of upgrade was allowed.

decision returns to people

Gets most of the way there, then hits an authority gap.

The problem is not that the agent is useless. It can plan. The problem is that when conditions change, the group has not clearly said what it may trade off or decide on its own.

Shared-mandate agent

The fuller mandate does not micromanage the trip. It gives the agent shared rules for acting, trading off, and asking only when it should.

Shared mandate
01

Starts from shared rules: stay inside budget, keep a buffer, and favor a homely stay over pure centrality.

rules locked

02

Builds the same kind of plan, but only inside the rails the group already agreed on.

within bounds

03

Keeps a meaningful slice of the budget untouched for food, flexibility, and late surprises.

buffer protected

04

The booked stay falls through shortly before departure.

same disruption

05

Compares the alternatives against the agreed tradeoff: calm and character beat a busier fallback if the buffer still holds.

tradeoff applied

06

Swaps the stay without overreaching, because the mandate already says what it can protect and what it can trade.

plan recovered within budget

Acts without overreaching.

A better mandate does not script every move. It tells the agent what to protect, how to choose when values pull apart, and when a real human check-in is still required.
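The recovery step above can be sketched as a simple check. This is a hypothetical sketch under assumed numbers loosely following the Lisbon example (a £1,700 cap with a protected buffer), not Deliberate's actual decision logic:

```python
# Hypothetical sketch: how an agent might decide whether it may swap a
# fallen-through stay on its own, or must return the decision to the group.
# Budget and buffer figures are illustrative, not from a real mandate.

BUDGET_GBP = 1700  # total the group agreed to
BUFFER_GBP = 300   # slice kept untouched for food and late surprises

def may_swap_alone(spent: int, replacement_cost: int, homely: bool) -> bool:
    """The agent may act alone only if the hard budget holds, the buffer
    survives, and the replacement matches the agreed tradeoff (calm and
    character over pure centrality)."""
    within_budget = spent + replacement_cost <= BUDGET_GBP - BUFFER_GBP
    return within_budget and homely

# A homely replacement that leaves the buffer intact: swap without asking.
print(may_swap_alone(spent=600, replacement_cost=700, homely=True))   # True
# A pricier option that would eat into the buffer: decision returns to people.
print(may_swap_alone(spent=600, replacement_cost=900, homely=True))   # False
```

The point of the sketch is the shape, not the numbers: the mandate pre-answers "may I trade this off alone?" so the agent only escalates when the answer is genuinely no.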


Based on a real Lisbon pilot trace, simplified for public readability.

A good mandate does not script the whole plan. It makes the agent's discretion legible.

03 / Theory

Why it should work

Agent performance is downstream of the mandate.

Most AI work focuses on making agents better at evaluating options and carrying them out. Less attention goes to where the options come from in the first place. On complex, contested, or collective questions, this is where the biggest gains are waiting.

A biological analogy makes the point.

Moss

Random diffusion

Climbs eventually, but slowly. Its proposals are unbiased — it tries everywhere with equal weight.

Unbiased proposals

Climbing plant

Directed tendrils

Grows upward by default, remembers where it's been, senses light, and locks in on support. Its proposals are already well targeted.

Biased proposals

An agent acting on a raw prompt is closer to moss. A structured, assented mandate is closer to tendrils. The facilitator biases the group toward the productive region of the preference space, the structure keeps resolved ground from being retread, and the mandate itself becomes the stable surface the agent can extend from.

Read the essay

Research preview

Start with a real trip, or open the mandate pilot.

Deliberate Travel is the live front door today. The mandate pilot is where we are testing shared instruction-setting for agents more directly.