When a CFO asks an agent to assemble the board packet, reconcile the final numbers, and call back for approval before releasing the presentation, they are not specifying an access control list. They are expressing intent.

Something has to turn that intent into a form a control plane and its enforcement points can evaluate. That conversion is the Mission shaping problem.

In the vocabulary of this series, human intent becomes a Mission Proposal. Human approval turns that into an Approved Mission. The control plane materializes that as a Mission Authority Model. Enforcement points see the resulting authority artifact.

Mission shaping does not replace policy engines, PDPs, PEPs, token services, or authorization servers. It sits upstream of all of them and produces the task-bounded authority artifact they need to make coherent decisions for agentic work.

Many current deployments do not have a disciplined Mission shaping step. The agent gets a token with broad scopes, infers its own boundaries from context, and the system trusts it to stay within them. That is not governance. That is optimism.

This article focuses on LLM-driven, tool-using agents executing multi-step tasks. This is the deployment class where Mission shaping is hardest, the blast radius is largest, and the governance gap between current practice and what safety requires is widest.

But there is a harder conclusion visible even within that argument. Even if deployments had a rigorous Mission shaping step, shaped authority alone would still not be enough for open-world agents.

Mission shaping is the semantic anchor. Containment is the operational resilience layer that has to carry the safety margin when the semantic model is incomplete. Containment means designing the agent’s operating environment so that a failure, compromise, or misaligned action has bounded blast radius through narrow credentials, mediated tools, trusted observation points, and explicit release gates.

So the real question is not just how to shape a Mission from approved intent. It is where the semantic layer is strong enough to bear the weight, and where containment has to carry more of it.

The gap between what a user approves and what a system can govern is not a configuration problem. It is a semantic one. But semantic problems do not disappear just because you shape them into a more structured artifact.

Mission shaping still matters. It gives the system a declared purpose, a bounded authority artifact, a reviewable approval object, and an auditable explanation of what the task was supposed to be. In structured domains, that shaping can take the form of compilation. But containment is what keeps the system safe when the semantic model is incomplete, the runtime is partially observable, or the agent’s behavior drifts inside nominally legitimate authority.

This essay picks up a question left open by Part 4 of the Mission-Bound OAuth series.

A Historical Parallel

The browser world navigated a structurally similar problem a generation earlier. Its failure modes are the closest historical precedent for where agent authorization is now.

In the late 1990s, browsers became the first mainstream open-world agents. A browser could navigate anywhere: internal intranet resources, trusted enterprise applications, and the open internet. Each environment carried different risk. Organizations needed a way to let employees use the browser productively without giving the open internet the same authority as internal systems.

Internet Explorer’s answer was security zones. Local Intranet, Trusted Sites, Internet, and Restricted Sites each had a default capability envelope: what scripts could run, what plugins could load, what downloads were permitted. Administrators could assign URLs to zones via Group Policy and configure zone-level policy centrally.

The mapping to agent authorization is direct:

  • Security zones are authority envelopes. The Internet zone is the minimum-authority default for unknown intent. Trusted Sites is the enterprise template Mission for known, pre-approved task classes.
  • Zone assignment is the classification step. Deciding that a URL belongs in the Intranet zone is the same problem as deciding that “assemble the board packet” belongs in the “structured enterprise workflow” authority class.
  • Group Policy zone configuration is organizational Mission shaping governance. The admin defining what the Internet zone can and cannot do is the admin defining a purpose taxonomy.
  • The default Internet zone posture is the minimum authority principle. Unknown origin, most restricted envelope. Elevate by organizational exception with documented justification.

The browser world also learned the failure modes:

  • Everything ends up in Trusted Sites. Every enterprise application that did not work in the Internet zone got added to Trusted Sites because that was easier than fixing the application or narrowing the zone policy. Templates accumulate exceptions for the same reason.
  • Zone classification alone was insufficient. Modern browsers do not rely primarily on zones. Process-level sandboxing, Content Security Policy, CORS, and same-origin policy are the actual safety mechanisms. Classification is still there, but containment became the primary layer.
  • The escalation mechanism is the attack surface. Users who could manually add sites to Trusted Sites were a persistent governance failure. Agents that can quietly self-promote their authority class are the same failure.

The analogy also shows where agent authorization is harder. URL classification is syntactic: string matching against a domain or pattern. Intent classification is semantic: natural language, open-ended, context-dependent. That difference is not incidental. It is the source of most of what makes agent authorization hard in ways that URL-based zone models never had to confront.

Authority is also time-varying in a way zones never were. A URL stayed in its zone. A Mission Authority Model changes as the task progresses from discovery to execution to external release. The staged, versioned Mission shaping model has no zone equivalent. That is one reason the zone model, however well adapted, cannot be sufficient by itself.

The threat model is inverted. IE’s zone model protected the user from external content attacking through the browser. The agent’s context integrity problem is the agent using its own legitimate authority in service of external adversarial content it encountered mid-execution. Containment therefore needs to operate differently: not just at the boundary between the agent and external systems, but also on what the agent can do after it has processed external content.

Organizations need some mechanism to manage risk while gaining the benefits of open-world agents. The zone model is the right structural intuition: classify intent into authority classes, apply default-restrictive policy, elevate by governed exception. The browser world shows that this is a necessary first step and an insufficient final answer. Containment is what made browsers survivable at scale, and containment is what will make open-world agents survivable too.

What Mission Shaping Means Here

The compiler analogy is useful for describing one version of the job, less useful for describing the full problem. In software, a compiler transforms a formal language with defined semantics into machine-executable instructions. Mission shaping is broader. It is the work of taking a high-level approval and producing something the Mission control plane and its enforcement points can apply mechanically across API calls, tool invocations, and delegation boundaries.

In the structured case, Mission shaping can take the form of explicit compilation. In the more open case, the shaping layer is less formal: a bounded purpose statement, a set of allowed tool classes, and an operating envelope rather than an enumerable action list. That coarser output still provides the governance anchor (the approved authority artifact that the control plane and containment layer can reference) even when it cannot enumerate every permitted action. What changes across cases is precision, not the requirement for an artifact.

The structured-case pipeline looks like this:

Human intent
    ↓
Mission Proposal
    ↓
Approved Mission
    ↓
Mission Authority Model
    ↓
Enforcement

Each step is a transformation. Each transformation is a place where meaning can be lost, distorted, or never captured.

That is the core Mission shaping problem.
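The pipeline above can be sketched as a chain of typed artifacts. This is an illustrative Python sketch, not a proposed schema: the type names mirror the series vocabulary, but every field name and the toy shape() mapping are assumptions made here for concreteness.

```python
from dataclasses import dataclass

# Illustrative types only: the series does not define a wire format,
# so the field names below are assumptions for the sake of the sketch.

@dataclass(frozen=True)
class MissionProposal:
    intent_text: str          # the human's natural-language request
    proposed_purpose: str     # shaped purpose statement

@dataclass(frozen=True)
class ApprovedMission:
    proposal: MissionProposal
    approver: str             # who approved, in human terms
    approved_purpose: str     # the purpose as actually approved

@dataclass(frozen=True)
class MissionAuthorityModel:
    mission_version: int
    allowed_operations: tuple
    forbidden_operations: tuple

def shape(approved: ApprovedMission) -> MissionAuthorityModel:
    """Each transformation is a point where meaning can be lost: the
    gap between approved_purpose and allowed_operations is exactly
    the Mission shaping problem."""
    # Placeholder mapping; a real system needs a purpose taxonomy here.
    if "board packet" in approved.approved_purpose:
        return MissionAuthorityModel(
            mission_version=1,
            allowed_operations=("finance.read", "board.read_internal"),
            forbidden_operations=("board.publish_external",),
        )
    return MissionAuthorityModel(1, (), ())
```

The interesting part is what shape() hides: the mapping from an approved purpose to concrete operations is exactly where the purpose taxonomy discussed later has to live.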

A Four-Layer Model

Agent authorization is easier to reason about if four distinct concerns are kept separate. From an IAM perspective, the separation is straightforward. Identity answers who is acting. Mission shaping answers what authority should exist for the approved task. Authorization answers whether a requested action is inside that authority. Workflow and runtime governance answer whether execution should continue under current conditions.

In the vocabulary of the XACML and NIST policy model, the Mission state owner corresponds to the Policy Administration Point (PAP). It authors, versions, and lifecycles the authority artifact. Enforcement points at API and tool boundaries are Policy Enforcement Points (PEPs). The authorization layer is the Policy Decision Point (PDP). Runtime alignment has no direct equivalent in the traditional PAP/PDP/PEP decomposition and belongs to the usage-control tradition discussed below.

Each layer, its question, its output, and its owner:

  • Mission shaping. Question: What authority should exist for this approved task? Output: an authority artifact (Mission Proposal, Approved Mission, and Mission Authority Model in structured domains; a bounded purpose record and operating envelope in open-world cases). Who: the Mission state owner.
  • Authorization. Question: Is this specific action inside that authority? Output: permit, deny, step-up, or suspend. Who: enforcement points at API and tool boundaries.
  • Containment. Question: If the authority model is wrong, incomplete, or bypassed, how much damage is possible? Output: bounded blast radius through narrow credentials, mediated tools, runtime isolation, and explicit release gates. Who: the enforcement architecture, gateways, trusted adapters, and runtime boundaries.
  • Runtime alignment. Question: Even if the action is allowed, is the agent still acting in service of the approved intent? Output: continue, pause, re-confirm, or terminate. Who: the Mission state owner governing lifecycle decisions, fed by distributed observation points such as gateways, tool adapters, and trusted telemetry sources.

Mission shaping creates the authority artifact.

Authorization checks whether a call is inside its bounds.

Containment bounds the damage if the artifact is wrong, incomplete, or bypassed.

Runtime alignment detects the residual risk that remains even when each individual call is technically permitted.

That last layer matters because LLM-driven agents can be corrupted by prompt injection or context taint while still operating inside legitimate authority. Mission shaping defines the ceiling. It does not guarantee the agent is still acting in service of the user’s intent.
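The four layers compose on every single call. A minimal sketch of that composition, assuming a boolean `aligned` stands in for a runtime-alignment signal that in reality cannot be computed from the call alone:

```python
# How the four layers interact for one tool call, as a sketch.
# `authority` is the shaped envelope, `containment_blocklist` is what
# the operating environment forbids regardless of authority, and
# `aligned` is a placeholder for a much harder runtime signal.

def govern_call(action: str, authority: set,
                containment_blocklist: set, aligned: bool) -> str:
    if action not in authority:
        return "deny"        # authorization: outside the shaped bounds
    if action in containment_blocklist:
        return "blocked"     # containment: stopped regardless of authority
    if not aligned:
        return "pause"       # runtime alignment: authorized but suspect
    return "permit"
```

The ordering matters: containment and alignment fire even when authorization succeeds, which is precisely the case the paragraph above describes.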

The semantic challenge of getting that layer right is the Mission shaping problem. Compilation is one disciplined form of Mission shaping, strongest in structured domains and insufficient in most open ones. The more open the environment becomes, the more the architecture has to lean on containment. The goal is not a system that is always semantically correct. It is a system that remains governable when the semantic model is wrong: survivable incorrectness rather than an unachievable semantic guarantee.

The family resemblance to continuity-of-use or usage-control thinking is intentional, including the UCON tradition (Park & Sandhu, 2004). Authority should not be treated as a one-time admission decision, but as something that remains subject to ongoing conditions while the task continues. Agents make that continuity problem harder. The system is not only asking whether a session should continue, but whether an adaptive, tool-using, partially observable actor should continue under the same authority envelope. The trust scores and budgets introduced in Part 2 are one operational form of that continuity principle.

UCON was technically coherent but never achieved deployment at scale. The enforcement infrastructure required to act on continuous conditions simply did not exist in 2004. What is different now is that LLMs make natural-language intent tractable as an input to authorization systems, the zero trust and workload identity infrastructure exists as a practical enforcement layer, and regulatory pressure to govern autonomous system behavior is pushing organizations to take runtime authorization seriously in a way that has no parallel in the UCON era.

Why Mission Shaping Is Hard

Semantic Ambiguity

Natural language intent is inherently ambiguous. “Assemble the board packet and call me back before release” could authorize pulling draft forecasts, reading prior board materials, reconciling finance data, contacting the FP&A system, or notifying the legal team that a release is pending. Or it could authorize only gathering materials and presenting a summary for review.

What gets approved is rarely just what the user asked for. In enterprise systems it is usually a constrained intersection of requested purpose, system design, business role, and organizational policy.

The shaping step has to convert that ambiguous intent into something a resource server can evaluate deterministically. In the structured case, that means a compilation step; in the open case, a coarser bounding step. Either way, it requires a purpose taxonomy: a shared vocabulary for what “board packet preparation” means in terms of actual API operations and data access.

No such shared taxonomy exists. Every implementation invents its own.

The failure pattern this creates is familiar from RBAC role explosion and ABAC attribute normalization. Both required organizations to maintain shared vocabulary mappings that accumulated exceptions, became inconsistently applied, and expanded past what any single team could govern. A purpose taxonomy for agent tasks follows the same organizational dynamics.

The taxonomy is also a governance artifact: it encodes organizational policy into machine-evaluable terms and is subject to its own scope creep, organizational drift, and capture by convenience. Who governs the purpose taxonomy itself is a question the architecture must answer.

OpenID AuthZEN is an approved OIDF standard for the PEP-to-PDP query interface. It standardizes how enforcement points ask for authorization decisions once subject, resource, action, and context are already structured enough to evaluate. Mission shaping is the step that produces those stable inputs. For open-world LLM-driven agents, the resources may not be enumerable at policy-writing time, the attributes may not map cleanly to natural-language intent, and the policy for a novel task class may not exist at all. AuthZEN and downstream policy engines still require that upstream work to be done.
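For concreteness, here is roughly what a PEP-to-PDP evaluation looks like once shaping has produced stable inputs. The subject/resource/action/context shape follows the AuthZEN pattern, but carrying mission_version and mission_stage in the context is this article's assumption, not part of the standard, and pdp_stub is a toy stand-in for a real policy engine.

```python
# An AuthZEN-style PEP-to-PDP evaluation request. Putting Mission state
# in `context` is an assumption of this sketch, not the specification.
evaluation_request = {
    "subject": {"type": "agent", "id": "agent-7f3"},
    "resource": {"type": "finance.report", "id": "2026-Q1"},
    "action": {"name": "finance.read"},
    "context": {"mission_version": 1, "mission_stage": "discovery"},
}

def pdp_stub(request: dict) -> dict:
    """Toy PDP: permits only when the action is inside a fixed envelope
    and the Mission stage matches. A real PDP evaluates policy; the
    point here is that the inputs had to be structured first."""
    allowed = {"finance.read", "finance.compare", "board.read_internal"}
    ok = (request["action"]["name"] in allowed
          and request["context"]["mission_stage"] == "discovery")
    return {"decision": ok}
```

Everything in evaluation_request is downstream of Mission shaping: if the resource type, action name, or stage cannot be named, there is nothing for the PDP to evaluate.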

The Verification Problem

Even if you have a Mission shaping step, how do you verify that the resulting authority model represents what the user actually approved?

With code, a compiler translates a formal language with defined semantics. You can inspect the output. You can write tests. You can formally verify properties of the compiled artifact.

With intent, you are translating natural language. The input has no formal grammar. The compiler has no type system to catch semantic errors. The authorizing user cannot inspect the Mission Authority Model in any meaningful way. They approved something in human terms. The compiled artifact is in machine terms. The mapping between them is opaque.

A CFO who approved “assemble the board packet and call back before release” has no way to verify that the compiled authority model does not also permit the agent to access unrelated HR records, treasury operations, or investor communications, if the purpose taxonomy the system uses is broader than they understood.

The verification gap is not theoretical. It is the difference between informed consent and a terms-of-service checkbox.

You can reduce that gap with better UX:

  • a natural language summary generated from the compiled artifact rather than the original proposal
  • progressive disclosure of the effective permission envelope
  • callback points for the highest-consequence transitions
  • a consent receipt that records what the user actually approved

But complex enterprise workflows quickly exceed what an approving human can meaningfully reason about from a summary. So the real goal is not perfect human comprehension. It is better bounded comprehension.
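A consent receipt, the last item on that list, is the most mechanically tractable of the four. A minimal sketch, assuming a SHA-256 hash over a canonicalized artifact is an acceptable binding; the field names are illustrative, not a standard receipt format:

```python
from dataclasses import dataclass
import hashlib
import json
import time

@dataclass(frozen=True)
class ConsentReceipt:
    approver: str
    summary_shown: str   # the natural-language summary the human saw
    artifact_hash: str   # hash of the compiled authority artifact
    approved_at: float

def issue_receipt(approver: str, summary: str, artifact: dict) -> ConsentReceipt:
    # Canonicalize before hashing so key order cannot change the hash.
    canonical = json.dumps(artifact, sort_keys=True).encode()
    return ConsentReceipt(
        approver=approver,
        summary_shown=summary,
        artifact_hash=hashlib.sha256(canonical).hexdigest(),
        approved_at=time.time(),
    )

def receipt_matches(receipt: ConsentReceipt, artifact: dict) -> bool:
    """Later verification: does the artifact in force still match what
    the receipt was issued against?"""
    canonical = json.dumps(artifact, sort_keys=True).encode()
    return receipt.artifact_hash == hashlib.sha256(canonical).hexdigest()
```

The receipt does not close the verification gap; it only makes drift between what was approved and what is in force detectable after the fact.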

The gap also worsens as adoption scales. The first task templates are reviewable by a careful operator. After a purpose taxonomy grows to hundreds of templates with layered pre-approved expansion paths, verifying that any specific compiled output correctly represents the original approval intent becomes progressively less tractable. Organizations that use the architecture most heavily face the worst verification fidelity, exactly backwards from what a governance model should produce.

The LLM Trust Problem

Many deployments that do have a shaping step use an LLM to perform it. That is a practical approach. It is also a governance problem. The first point of adversarial influence is wherever the Mission Proposal is generated: whether that is the agent itself, a form-to-proposal translation layer, or an orchestration system.

An LLM-based compiler is not deterministic. The same intent statement can produce different authority models on different runs even in the absence of adversarial input. It can also be influenced through the prompt. It has no formal semantics. And its output is the input to your enforcement system.

That matters even before you get to prompt injection. If two benign runs produce materially different authority envelopes, reproducibility, operator review, and audit all become unstable. Prompt injection makes that worse. For high-assurance governance, LLM-first Mission shaping should therefore remain a proposal step wrapped by bounded review and approval, not the sole authoritative step.

One architectural response is to narrow the input surface of the shaping LLM to trusted sources only. IBAC takes this approach: the intent parser operates exclusively on the user’s message and a pre-resolved trusted contact store, explicitly excluding any external data sources the agent has already touched. That does not make the parser deterministic, but it removes the adversarial content path from the shaping step. Parser errors can then over-scope or under-scope, but they cannot be driven by injected content encountered during execution.
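A sketch of that input narrowing, with call_llm as a placeholder for a real model call; the trusted contact store and the purpose class string are invented here for illustration:

```python
# Sketch of narrowing the shaping LLM's input surface, in the spirit
# of the IBAC approach described above. The point is what the shaping
# step is NOT given: nothing the agent retrieved from external sources
# reaches it.

TRUSTED_CONTACTS = {"fp&a": "fpa@example.internal"}  # pre-resolved store

def call_llm(prompt: str) -> str:
    # Placeholder: a real deployment calls a model here.
    return "board_packet_preparation"

def shape_purpose(user_message: str, agent_context: list) -> str:
    """Shapes a purpose class from the user's message ONLY.

    `agent_context` (content the agent has already read) is accepted
    so callers cannot forget it exists, then deliberately discarded:
    that exclusion is the isolation guarantee, not an oversight."""
    del agent_context  # explicitly excluded from the shaping input
    prompt = (f"Known contacts: {sorted(TRUSTED_CONTACTS)}\n"
              f"Classify this request into a purpose class: {user_message}")
    return call_llm(prompt)
```

The parser can still be wrong, but it can only be wrong about the user's message and the trusted store, never about injected content.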

The Context Integrity Problem

Even with a perfectly shaped Mission Authority Model, an agent operating in an adversarial environment can be redirected: not by exceeding its authority, but by using its legitimate authority in service of someone else’s intent.

Prompt injection is the concrete case. The agent is processing a document, an email, or an API response. That content contains adversarial instructions designed to redirect behavior. The enforcement point sees calls that fall within the shaped authority envelope. Nothing is flagged. The user’s approved intent has been subverted without a single authorization failure.

This is not a problem that shaped authority models solve. It is a problem they cannot observe.

Several partial defenses are possible. Isolating the shaping step before the agent reads external content prevents adversarial inputs from influencing the authority model, but only for tasks whose execution path is knowable upfront. Intent anchoring cryptographically binds the approved purpose to the authority artifact and creates a reference point for post-hoc analysis. It does not prevent context taint inside that envelope. Call provenance logging helps reconstruct what happened after the fact. Human confirmation gates on irreversible actions limit the damage an injected instruction can accomplish even when it successfully redirects the agent.

But these are not enough on their own. Prompt injection is where containment has to carry more weight than Mission shaping. There is no semantic layer that can observe what happens inside the agent’s context window.

Context integrity failures compound the runtime alignment problem. A semantically intact Mission Authority Model offers no signal that the agent has been redirected. Every call looks authorized, every action is within bounds, and the alignment signal remains flat. The only layer that can interrupt is containment: mediated tool calls that block categories of action regardless of the instruction that triggered them, and human confirmation gates that fire on irreversible operations before they complete. An agent that cannot exfiltrate data without going through a logged, rate-limited outbound adapter is less useful to an injected instruction even if the instruction successfully redirected the agent’s behavior.
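A mediated outbound adapter of the kind described above might look like the following sketch; the rate limit, field names, and log shape are illustrative assumptions:

```python
import time
from collections import deque

class OutboundAdapter:
    """Containment sketch: every outbound send goes through one
    mediated, logged, rate-limited chokepoint. Logging happens for
    every attempt, allowed or not, because the adapter doubles as a
    trusted observation point."""

    def __init__(self, max_per_minute: int = 3):
        self.max_per_minute = max_per_minute
        self.sent_times = deque()
        self.log = []

    def send(self, destination: str, payload: str, mission_version: int) -> bool:
        now = time.monotonic()
        # Drop send timestamps older than the sliding one-minute window.
        while self.sent_times and now - self.sent_times[0] > 60:
            self.sent_times.popleft()
        allowed = len(self.sent_times) < self.max_per_minute
        self.log.append({
            "destination": destination,
            "mission_version": mission_version,
            "allowed": allowed,
        })
        if allowed:
            self.sent_times.append(now)
        return allowed
```

An injected instruction that redirects the agent still has to exit through this adapter, so its blast radius is bounded by the adapter's policy rather than by the agent's judgment.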

Consider a less structured case than the CFO workflow: an agent asked to research vendors, summarize options, and begin outreach where appropriate. That task crosses external content, third-party tools, evolving selection criteria, and ambiguous action boundaries. A Mission shaped for that task can still provide a governance record and some outer bounds. But the real safety properties are more likely to come from mediated outreach, short-lived credentials, sandboxed browsing, and explicit release gates before any external communication is sent. This is the kind of open-world case where containment is not a complement to Mission shaping so much as the more reliable layer.

Staged Mission Shaping

Isolating the shaping step is the right principle, but it cannot be applied as a single upfront step for most real enterprise tasks.

+------------------+
| Human intent     |
+------------------+
        |
        v
+------------------+
| Mission Proposal |
+------------------+
        |
        v
+------------------+
| Approved Mission |
+------------------+
        |
        v
+------------------+
| Mission state    |
| owner            |
+------------------+
    |
    +--> Stage 1: discovery
    |     |
    |     +--> enforcement
    |     |
    |     +--> discovered facts
    |             |
    +<------------+
    |
    +--> Stage 2: execution
          |
          +--> enforcement
          |
          +--> irreversible action
                    |
                    v
              +------------------+
              | Callback /       |
              | re-approval      |
              +------------------+
                    |
                    v
              +------------------+
              | Mission state    |
              | owner            |
              +------------------+

Many tasks are not fully knowable at the moment of initial approval. An enterprise agent may need to inspect draft spreadsheets, read finance-system metadata, compare version history, or discover which supporting systems are implicated before it can propose the next safe action.

If the entire Mission Authority Model must be shaped before any external content is seen, the result will often be too narrow to be useful. If the agent is allowed to inspect arbitrary external content before shaping, the isolation guarantee collapses.

That tension is why structured enterprise workflows need a staged model rather than a single compile-once event. In the structured case, staged Mission shaping can include staged compilation.

A staged model looks like this:

  1. Shape a narrow discovery envelope. Enough authority to inspect the immediate problem space: read access to relevant systems, no write operations, no external communications.
  2. Shape a fuller execution envelope from discovered facts. Still bounded by the original approved purpose, but now informed by what the agent found in Stage 1.
  3. Require explicit expansion, callback approval, or re-confirmation for anything outside the execution envelope. Novel actions, high-risk operations, and scope expansions all require returning to the Mission state owner.

Each stage transition produces a new versioned authority artifact. The Mission state owner, the authoritative lifecycle service for Mission state, is the only party that can issue a new version. In some deployments that is an AS-resident Mission service. In others it is a separate authority service. The architectural requirement is not placement. It is that one system remains authoritative for Mission versions, stage transitions, and lifecycle state. That centralization makes the Mission state owner both the governance anchor and a single point of failure.

The Mission state owner is the authoritative lifecycle system for Mission state: the component that owns Mission versions, stage transitions, runtime budget state, and suspension or termination decisions.

If it is unavailable during an active Mission, stage transitions stall, trust budget exhaustion cannot be acted on, and governed resumption cannot proceed. Availability and resilience requirements for this component are not an afterthought.

The staged model also has to handle re-planning when discovery changes the task materially, downgrade paths when later facts narrow rather than expand the task, and attenuation as well as expansion.
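Expansion, attenuation, and version monotonicity can be sketched together. The transition rules, operation names, and pre-approved expansion set below are illustrative assumptions, not a protocol:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AuthorityModel:
    mission_version: int
    stage: str
    allowed_operations: frozenset

class MissionStateOwner:
    """Authoritative for Mission versions and stage transitions;
    nothing else may mint a new version."""

    def __init__(self):
        self.current = AuthorityModel(
            mission_version=1, stage="discovery",
            allowed_operations=frozenset({"finance.read", "finance.compare"}),
        )

    def advance_to_execution(self, discovered_ops: frozenset) -> AuthorityModel:
        # Expansion is informed by discovery but still bounded: only the
        # pre-approved expansion set can be unlocked, never arbitrary ops.
        approved = frozenset({"board.assemble_packet",
                              "board.write_internal_draft"})
        self.current = replace(
            self.current,
            mission_version=self.current.mission_version + 1,
            stage="execution",
            allowed_operations=self.current.allowed_operations
                               | (discovered_ops & approved),
        )
        return self.current

    def attenuate(self, remove_ops: frozenset) -> AuthorityModel:
        # Downgrade path: later facts can narrow the envelope too,
        # and narrowing also produces a new version.
        self.current = replace(
            self.current,
            mission_version=self.current.mission_version + 1,
            allowed_operations=self.current.allowed_operations - remove_ops,
        )
        return self.current
```

Every transition, expanding or narrowing, increments the version, which is what lets enforcement points reject stale authority.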

This clarifies where Mission shaping is strongest: structured workflows with clear stages, observable transitions, and legible approval points. In those domains, shaping can take the form of staged compilation. It is conceptual framing, not a full shaping protocol, but it is enough to see where the model holds and where it starts to fray.

Recent work on semantic task-to-scope matching in delegated authorization can be read as one concrete instance of this structured case: an authorization server semantically constrains scopes to the task at hand rather than trusting the agent’s requested scope set. That is Mission shaping in a narrower, more enumerable form. A complementary approach enforces at the tool-invocation boundary rather than at token issuance: IBAC and CaMeL parse intent into structured permission tuples before execution begins and check every tool call against them deterministically. Token issuance enforcement constrains what authority is granted before the agent runs; tool-invocation enforcement constrains what the agent can do with that authority at every step. Both are instances of the structured case. Neither addresses what happens when authority is shaped correctly but the agent operates across multiple organizations, long-running tasks, or delegation hops where the original approval is long behind the execution.

In the most repeatable cases within that structured domain, shaping can go one step further and emit a governed task artifact: a reviewed script, workflow, or executable routine tied to a specific Mission template and task class. That does not mean the artifact carries standing elevated privilege. It means future Missions can invoke a versioned, signed, pre-approved behavior instead of re-deriving the same logic from natural language every time. The authority still comes from the current Mission context. Reusing the artifact is safe only if its inputs, side effects, and invocation conditions remain narrowly bounded.

Seeing where staged Mission shaping works makes the gap with what most deployments actually do more visible.

What Most Deployments Do Instead

Relatively few current deployments perform Mission shaping as a first-class step. In practice, many systems fall into one of five patterns, including some that look like compilation but are not.

Each pattern, what it does, and why it falls short:

  • Scope enumeration. Pre-enumerates allowed scopes or APIs at authorization time. Works only for closed-world tasks where the full execution path is known in advance.
  • App-local policy mapping. Maps recognized intents to internal permission bundles. Not portable, usually opaque to users, and breaks at organizational boundaries.
  • LLM-generated structured request. Uses an LLM to draft authority inputs. Useful as a proposal step, but probabilistic, non-deterministic, and vulnerable to tainted inputs.
  • Broad credential plus prompt discipline. Gives the agent wide authority and relies on the prompt to constrain it. Self-governance, not governance; the agent’s own judgment becomes the enforcement boundary.
  • Policy languages (XACML, OPA, Cedar). Express access control policy in a formal language. Designed for enumerable resources and actions; adapting them to open-world intent requires exactly the semantic work that makes Mission shaping hard.

The most common real-world deployments land in the third or fourth pattern. The third pattern has a more disciplined form worth noting separately: systems like IBAC and ASTRA use an LLM to extract intent but enforce the result deterministically, either at the tool boundary or at token issuance. They reduce the taint vulnerability.

They do not address whether the extracted intent correctly represents what the user approved; that is the shaping problem this article is about. They are structured enforcement architectures that still depend on an upstream intent-extraction step, not substitutes for solving the shaping problem itself.
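The deterministic half of that split is simple to state in code. A sketch, assuming intent has already been parsed into (tool, resource) tuples; this tuple shape is an assumption for illustration, not the IBAC or CaMeL format:

```python
# Deterministic tool-boundary enforcement: intent is parsed once into
# structured permission tuples, then every tool call is checked with
# ordinary code. No model is in the loop at enforcement time.

def is_permitted(call_tool: str, call_resource: str,
                 permissions: list) -> bool:
    """Each permission is a (tool, resource_pattern) pair, where the
    pattern '*' matches any resource for that tool."""
    for tool, pattern in permissions:
        if call_tool == tool and (pattern == "*" or call_resource == pattern):
            return True
    return False

# Example envelope for the board-packet discovery stage.
permissions = [("finance.read", "2026-Q1"), ("board.read_internal", "*")]
```

The determinism is the point: two identical calls always get the same answer, which is exactly what the probabilistic shaping step upstream cannot promise.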

The majority let the model infer its own boundaries or give the runtime broad authority and rely on local discipline not to misuse it.

When the system never explicitly converts approved intent into a bounded authority artifact, the agent’s own judgment becomes the effective policy engine. That is not a governance model. It is the absence of one.

In effect, the agent becomes the place where requested purpose, system design, business role, and organizational policy are reconciled. That reconciliation should be a governance function, not a runtime improvisation.

None of these patterns include containment as an intentional layer either. Containment is usually present as an accident of deployment (the agent happens to be running in a restricted environment) rather than as a first-class design decision.

The concrete example that follows shows the staged model working. The failure modes that arise even within it are the subject of Part 2. In deployments running these patterns, they arrive faster and without a governance boundary to limit the damage.

A Concrete Example

The following example assumes a single organization, known systems, and a structured approval path: the conditions under which staged Mission shaping is most tractable. The interoperability complications that arise at organizational boundaries do not apply here.

The CFO says:

Assemble the board packet, reconcile the final numbers, and call me back before releasing anything externally.

That is not yet authority. It is intent. It contains at least four different governance questions:

  • what systems the agent may inspect
  • what changes, if any, it may make while reconciling numbers
  • what must wait for callback approval
  • what counts as “releasing anything externally”

A plausible Stage 1 Mission Authority Model might look like:

{
  "mission_version": 1,
  "allowed_resources": [
    { "type": "finance.report", "selector": "period == 2026-Q1" },
    { "type": "finance.forecast_version", "selector": "period == 2026-Q1" },
    { "type": "board.material", "selector": "classification in ['draft', 'internal']" }
  ],
  "allowed_operations": [
    "finance.read",
    "finance.compare",
    "board.read_internal"
  ],
  "constraints": [
    { "type": "callback_required_before_release", "value": true },
    { "type": "mission_stage", "value": "discovery" }
  ],
  "forbidden_operations": [
    "board.publish_external",
    "notification.send_external",
    "finance.approve_final"
  ]
}

If discovery finds only expected variance, the Mission state owner issues version 2. The delta is deliberate. mission_stage advances from discovery to execution, board.packet_draft is added as an allowed resource, and write operations (board.write_internal_draft, board.assemble_packet) are unlocked. The version number increments so enforcement points holding a version 1 token reject it as stale. The CFO’s callback requirement remains in force. board.publish_external stays forbidden until the CFO confirms. When the callback succeeds, the Mission state owner issues version 3 with external publication unlocked. That is an approval event, not a local agent decision.
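A version 2 artifact consistent with that delta might look like the following; as with the version 1 example, the field names are illustrative rather than a proposed standard:

```json
{
  "mission_version": 2,
  "allowed_resources": [
    { "type": "finance.report", "selector": "period == 2026-Q1" },
    { "type": "finance.forecast_version", "selector": "period == 2026-Q1" },
    { "type": "board.material", "selector": "classification in ['draft', 'internal']" },
    { "type": "board.packet_draft", "selector": "period == 2026-Q1" }
  ],
  "allowed_operations": [
    "finance.read",
    "finance.compare",
    "board.read_internal",
    "board.write_internal_draft",
    "board.assemble_packet"
  ],
  "constraints": [
    { "type": "callback_required_before_release", "value": true },
    { "type": "mission_stage", "value": "execution" }
  ],
  "forbidden_operations": [
    "board.publish_external",
    "notification.send_external",
    "finance.approve_final"
  ]
}
```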

This is the domain where Mission shaping is tractable: one organizational trust boundary, known systems, a structured approval path, and a control plane that can tolerate transition friction. Even here, the hidden complexity is real. Selector correctness, stage propagation to distributed enforcement points, token freshness, and sequencing for partially-completed write operations all have to be solved.

One design decision every implementation will face is whether the token carries the Mission Authority Model inline or carries only a mission_ref that enforcement points resolve at call time. Inline tokens enable offline enforcement and reduce latency, but they become stale immediately when the Mission state owner issues a new version. Reference-based tokens stay current but create an availability dependency on the Mission state owner at every enforcement point. Neither choice is obviously correct. The right answer depends on revocation latency requirements and the acceptable blast radius if the Mission state owner is temporarily unavailable. Enforcement points resolving a reference token are performing a form of token introspection (RFC 7662) against the Mission state owner. The Mission Proposal can also be submitted to the authorization server with integrity guarantees using Pushed Authorization Requests (RFC 9126) before the interactive approval flow begins.
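The reference-based option can be softened with a short TTL cache at the enforcement point, trading bounded staleness for reduced availability coupling. A sketch, with fetch standing in for the introspection-style call to the Mission state owner; the endpoint behavior and the TTL value are illustrative assumptions:

```python
import time

class MissionResolver:
    """Enforcement-point side of a mission_ref token: resolve the
    reference against the Mission state owner, caching the result for
    a short TTL so a brief owner outage does not stall every call."""

    def __init__(self, fetch, ttl_seconds: float = 5.0):
        self.fetch = fetch   # call to the Mission state owner
        self.ttl = ttl_seconds
        self.cache = {}      # mission_ref -> (fetched_at, model)

    def resolve(self, mission_ref: str) -> dict:
        now = time.monotonic()
        hit = self.cache.get(mission_ref)
        if hit and now - hit[0] < self.ttl:
            return hit[1]    # fresh enough: no call to the owner
        model = self.fetch(mission_ref)
        self.cache[mission_ref] = (now, model)
        return model

# Stand-in for the Mission state owner, recording how often it is hit.
calls = []
def fake_owner(ref: str) -> dict:
    calls.append(ref)
    return {"mission_version": 2, "mission_stage": "execution"}
```

The TTL is the knob: it is an upper bound on how long an enforcement point can act on a revoked or superseded Mission version, which is the revocation-latency trade-off described above.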

That tractability does not generalize to open-world environments.

Part 1 stops here because this is the strongest case for Mission shaping: one enterprise, known systems, legible stages, and a control plane that can absorb governance friction.

The harder question starts after that. A well-shaped Mission can still fail under quiet scope expansion, delegation, headless execution, stale state, and open-world runtime redirection. That is where containment, runtime alignment, and survivable incorrectness become the more important story.

Part 2: Mission Shaping Is Not Enough picks up from there.