External executor bridge (HTTP)
Status: Contract and integration guidance. The http path uses the existing HTTP adapter; the optional bridge adapter (Phase 3) is off by default and only active when the host enables it — it does not change default core-only runs or IR semantics for programs that do not use bridge.Post.
See also: docs/operations/EXTERNAL_ORCHESTRATION_GUIDE.md describes orchestrator → AINL (discover, submit, run). This file describes AINL → external workers when the host uses a generic HTTP boundary.
Integration preference (read this first)
For OpenClaw / NemoClaw / ZeroClaw agents, prefer the existing MCP server (ainl-mcp).
That path is purpose-built for workflow-level integration with MCP-compatible hosts.
- Entrypoint: scripts/ainl_mcp_server.py (CLI: ainl-mcp).
- Exposure profiles: tooling/mcp_exposure_profiles.json.
- OpenClaw-oriented quickstart: AI_AGENT_QUICKSTART_OPENCLAW.md.
- OpenClaw skill + ainl install-openclaw (~/.openclaw/openclaw.json): docs/OPENCLAW_INTEGRATION.md.
- ZeroClaw skill + ~/.zeroclaw/ bootstrap: docs/ZEROCLAW_INTEGRATION.md.
Use this HTTP bridge pattern for generic external executors (Zapier-style webhooks, internal microservices, bespoke fan-out gateways, CI callbacks, or any worker that is not exposed as MCP). It is the secondary integration style relative to MCP for OpenClaw-family stacks (including ZeroClaw when using MCP).
1. Purpose
AINL’s runtime already delegates I/O to allowlisted adapters. The http adapter is the stable, canonical way to call out from a graph without adding new R syntax: build a JSON payload in the frame, then R http.Post … to a configured URL.
This document defines a small JSON contract so many backends (plugins, agents-as-a-service, internal tools) can sit behind one or more HTTP endpoints while AINL programs stay portable and deterministic at the graph level.
2. When to use the HTTP bridge
| Situation | Prefer |
|-----------|--------|
| OpenClaw / NemoClaw / ZeroClaw agent driving AINL tools from an MCP host | MCP (ainl-mcp) |
| Third-party SaaS, legacy REST service, internal queue worker | HTTP bridge (http.Post + contract below) |
| One gateway that fans out to N executor types | HTTP bridge (single URL; gateway routes by executor id) |
3. Request envelope (recommended)
Machine-readable schema: schemas/executor_bridge_request.schema.json (JSON Schema 2020-12). Python helper: schemas/executor_bridge_validate.validate_executor_bridge_request (call from gateways or tests when you want a shared check).
AINL reuse: compile-time include modules/common/executor_bridge_request.ainl — set ainl_bridge_request_json to the JSON text, then Call …/bridge_req/ENTRY to parse once (see module header).
Hosts should accept a JSON body shaped like:
```json
{
  "run_id": "opaque-correlation-id",
  "step_id": "graph-node-or-label-hint",
  "executor": "string-executor-id",
  "payload": {},
  "timeout_s": 30
}
```
| Field | Meaning |
|-------|--------|
| run_id | Correlates logs across AINL runner, bridge, and worker. |
| step_id | Optional hint tying the call to a label or node in the IR (for tracing). |
| executor | Logical name the bridge maps to a concrete plugin/worker (fan-out routers use this). |
| payload | Executor-specific JSON; keep it serializable and bounded. |
| timeout_s | Hint for the worker; AINL-side http timeouts should still be set explicitly on the R step. |
Programs may omit fields the bridge does not need, but stable names help shared tooling.
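For gateway tests or ad-hoc clients outside AINL, the envelope above is easy to assemble and sanity-check in plain Python. This is an illustrative sketch: the field names come from the contract in §3, but the `build_request` helper and its defaults are assumptions, not part of the project API.

```python
import json

def build_request(executor, payload, run_id, step_id=None, timeout_s=30):
    """Assemble the bridge request envelope from §3 (illustrative helper)."""
    req = {
        "run_id": run_id,
        "executor": executor,
        "payload": payload,
        "timeout_s": timeout_s,
    }
    if step_id is not None:
        req["step_id"] = step_id  # optional tracing hint, omitted when unused
    return req

# Serialize exactly as a gateway would receive it on POST /v1/execute.
body = json.dumps(build_request("plugin.alpha", {"text": "hi"}, run_id="run-123"))
```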
4. Response envelope (align with http adapter)
Align with the monitoring-oriented HTTP result envelope described in docs/reference/ADAPTER_REGISTRY.md (§ HTTP adapter — result envelope): executor-specific data lives in the decoded body; transport success/failure uses ok, status_code, error, etc., as returned by the runtime’s http adapter for that call.
Bridge implementations should return normal HTTP status codes (e.g. 2xx for handled requests, 4xx/5xx for client/server errors) so AINL graphs can branch on resp without special cases.
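A consumer outside AINL can branch on the same envelope. The sketch below assumes the `ok`/`status_code`/`error` fields described in docs/reference/ADAPTER_REGISTRY.md; the `classify_result` helper and its category names are illustrative, not project API.

```python
def classify_result(resp):
    """Coarse branch on the http-adapter result envelope (fields assumed
    from the ADAPTER_REGISTRY envelope: ok, status_code, error)."""
    if not resp.get("ok"):
        return "transport_error"   # timeout, DNS failure, refused connection
    status = resp.get("status_code", 0)
    if 200 <= status < 300:
        return "handled"
    if 400 <= status < 500:
        return "client_error"      # e.g. unknown executor id rejected by the bridge
    return "server_error"

classify_result({"ok": True, "status_code": 200})  # -> "handled"
```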
5. Configuration and security
- Base URLs, API keys, and mTLS belong in host / runner configuration, not in public example repos.
- Grant `http` (and the target host) only via capability allowlists on programs that need outbound calls (`capabilities.allow` in IR / runner policy).
- Treat bridge endpoints as privileged: authenticate, rate-limit, and validate `executor` against an allowlist on the server.
- Request shape: prefer validating inbound JSON against `schemas/executor_bridge_request.schema.json` (or call `schemas.executor_bridge_validate.validate_executor_bridge_request` from Python). A permissive pattern is to validate only when the decoded body is a dict and includes `executor` (so `llm.classify`-style bodies without that field stay loose).
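The permissive gate described above can be sketched in a few lines. This is not the shipped schema helper; `maybe_validate` and `require_fields` are illustrative stand-ins showing only the dispatch logic (strict check for dict bodies carrying `executor`, loose pass-through otherwise).

```python
def maybe_validate(body, validate):
    """Permissive gate from §5: run strict validation only for dict bodies
    that carry an `executor` field; everything else stays loose."""
    if isinstance(body, dict) and "executor" in body:
        validate(body)   # e.g. the shared schema helper, or jsonschema
        return True      # strict path taken
    return False         # loose path: body not shaped like a bridge request

def require_fields(body):
    # Stand-in for the real schema check (illustrative only).
    for field in ("run_id", "executor", "payload"):
        if field not in body:
            raise ValueError(f"missing {field}")
```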
6. Multi-backend support on the bridge
A single bridge HTTP service (one base URL that AINL calls with http.Post or bridge.Post) can fan out to many concrete workers. The contract field executor (or, with the optional bridge adapter, the configured executor key) is the stable routing key; the bridge maps it to an internal queue, RPC, second HTTP hop, or plugin process.
Recommended properties of the bridge:
- Maintain an allowlisted map `executor_id → backend` (reject unknown ids with 4xx).
- Keep audit logs keyed by `run_id` and `step_id`/`node_id` from the request envelope.
- Return responses that still follow the HTTP result envelope expected by the AINL `http` stack (§4).
Example routing pseudocode (illustrative):
```
on POST /v1/execute:
    body = read_json()
    ex = body.executor
    if ex not in ROUTES: return 400
    backend = ROUTES[ex]          # e.g. URL, queue name, or handler id
    result = dispatch(backend, body.payload, deadline=body.timeout_s)
    return 200 with JSON body suitable for the client (and normal status codes on failure)
```
Flask-shaped sketch (illustrative): the same routing idea maps cleanly onto a small route table. This is not production-complete (auth, body size limits, structured errors, and real dispatch are omitted).
```python
# Illustrative Flask-shaped routing only — not production-complete.
from flask import Flask, request, jsonify

app = Flask(__name__)

# executor_id -> backend handle (downstream URL, queue name, plugin id, etc.)
ROUTES = {
    "plugin.alpha": "https://worker.internal/alpha",
    "plugin.beta": "queue:beta-jobs",
}

def dispatch(backend, payload, timeout_s):
    # Enqueue, second HTTP hop, or in-process handler; cap concurrency per route here.
    return {"echo": payload, "via": str(backend)}

@app.post("/v1/execute")
def bridge_execute():
    body = request.get_json(force=True, silent=False)
    if not isinstance(body, dict):
        return jsonify({"error": "expected JSON object"}), 400
    ex = body.get("executor")
    if ex not in ROUTES:
        return jsonify({"error": "unknown executor"}), 400
    out = dispatch(ROUTES[ex], body.get("payload"), body.get("timeout_s"))
    return jsonify(out), 200
```
AINL remains unaware of how many backends exist; it only sees one outbound call per step.
7. Resource contention & capacity
On the bridge (operator responsibility):
- Use a queue or job system when workers are slower than request arrival; cap max concurrency per executor and globally so a burst of AINL runs cannot exhaust workers or downstream APIs.
- Enforce timeouts on each backend call; surface failures as HTTP 5xx or 4xx consistently so graphs can branch or fail predictably.
- Apply rate limits at the bridge (and per-executor) to protect shared infrastructure.
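One common way to implement the per-executor and global caps above is non-blocking semaphores, so a saturated route signals back-pressure instead of queueing forever. The sketch below assumes a threaded bridge; the cap numbers and helper names are illustrative, not recommended defaults.

```python
import threading

# Illustrative concurrency caps; the numbers are assumptions, not defaults.
CAPS = {"plugin.alpha": 4, "plugin.beta": 2}
GLOBAL_CAP = threading.BoundedSemaphore(8)
PER_EXECUTOR = {ex: threading.BoundedSemaphore(n) for ex, n in CAPS.items()}

def with_capacity(executor, work):
    """Run `work` only if both the global and per-executor slots are free;
    return None on saturation so the HTTP layer can answer 429/503."""
    sem = PER_EXECUTOR.get(executor)
    if sem is None:
        raise KeyError(f"unknown executor: {executor}")
    if not GLOBAL_CAP.acquire(blocking=False):
        return None  # global capacity exhausted
    try:
        if not sem.acquire(blocking=False):
            return None  # this executor is saturated
        try:
            return work()
        finally:
            sem.release()
    finally:
        GLOBAL_CAP.release()
```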
On the AINL side (built-in limits):
- `http`/`bridge` timeouts — For `ainl run`, `--http-timeout-s` sets the client-side wait for each `http.Post`/`bridge.Post` (default 5 seconds). Executor JSON may carry a larger `timeout_s` for the gateway’s downstream call; if the AINL client gives up first, you still see a transport timeout. LLM-heavy graphs (e.g. OpenRouter classify) typically need 60–120+ seconds. The reference `apollo-x-bot` scripts use 120 and `AINL_HTTP_TIMEOUT_S`; see `docs/reference/ADAPTER_REGISTRY.md` §2.4.3 and `apollo-x-bot/README.md` (troubleshooting). Per-call `timeout_s` on `R http.Post …` (§2.1 slot schema) overrides the adapter default when provided.
- `llm.classify` envelope vs legacy (gateways) — If a worker implements both legacy classify (server builds prompts from `tweets[]`) and envelope classify (OpenAI-style `messages[]`), treat envelope mode as active only when `messages` is a non-empty list. A bare `classify_response: "raw"` (or similar) without `messages` should not force envelope handling or spurious `envelope_missing_messages` errors; fall back to legacy. Implemented in `apollo-x-bot/gateway_server.py` (`_classify_wants_envelope`).
- Graph resource ceilings — `RuntimeEngine` in `runtime/engine.py` enforces limits such as `max_steps`, `max_depth`, `max_adapter_calls`, `max_time_ms`, `max_frame_bytes` when set on the engine or via runner/MCP policy (see `docs/operations/CAPABILITY_GRANT_MODEL.md` for how grants merge limits, and the root `README.md` security overview for default conservative ceilings on runner/MCP surfaces).
Together, bridge-side queuing and AINL-side limits prevent a single workflow from spawning unbounded outbound work or tying up the runtime.
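The envelope-vs-legacy classify rule from §7 reduces to a one-line check. This is a simplified restatement for gateway authors, not the code from apollo-x-bot/gateway_server.py.

```python
def wants_envelope(body):
    """Simplified restatement of the §7 rule: envelope classify is active
    only when `messages` is a non-empty list; everything else (including a
    bare classify_response marker) falls back to legacy handling."""
    messages = body.get("messages")
    return isinstance(messages, list) and len(messages) > 0
```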
8. Progressive implementation (no breaking changes)
- Docs (this file) — contract + MCP-first positioning.
- Examples (Phase 1) — `examples/integrations/executor_bridge_min.ainl` posts a minimal envelope to `http://127.0.0.1:17300/v1/execute` (change for your environment). Local mock: `python3 scripts/mock_executor_bridge.py`. See `examples/integrations/README.md`. The sample branches with `X http_status get resp status` then `If http_status=200 -> …` so the condition matches graph-friendly `If` semantics (avoid raw `(core.ne resp.status …)` in the condition slot for graph-preferred runs).
- Tests (Phase 2) — `tests/test_executor_bridge_integration.py` runs the Phase 1 example against an in-process HTTP mock (no live network, no manual `mock_executor_bridge.py`). `tests/test_executor_bridge_envelope.py` covers the schema helper. Optional: add an app-local integration test that drives your bridge gateway + main graph in dry-run mode.
- Optional `bridge` adapter (Phase 3) — `R bridge.Post <executor_key> <body_var> -> resp` resolves `<executor_key>` to a URL via host config (`ainl run --enable-adapter bridge --bridge-endpoint key=URL`, or runner `adapters.bridge.endpoints`). Same response envelope as `http.Post`. See `docs/reference/ADAPTER_REGISTRY.md` §2.4 and `examples/integrations/executor_bridge_adapter_min.ainl`.
- Phase 4 (this document, §6–§7) — multi-backend fan-out, Flask-shaped routing sketch, capacity guidance, and explicit pointer to `runtime/engine.py` for limit fields; no new language or runtime behavior.
None of these steps require changing the default core-only capability profile for existing programs.
9. Related links
- Production-style layout: App-local trees typically pair an `ExecutorBridgeAdapter` (or equivalent) with a small HTTP gateway, gateway-adjacent `req_*` / `main.ainl` graphs that compose `modules/common/executor_request_builder.ainl` or `modules/common/executor_bridge_request.ainl`, and `.txt` prompts beside the gateway. Optional reusable LLM `include` in `modules/llm/`. Naming and boundaries: `docs/language/AINL_CORE_AND_MODULES.md` §8.
- Schema & validation: `schemas/executor_bridge_request.schema.json`, `schemas/executor_bridge_validate.py`
- Graph execution note: dotted `R core.*` steps pass the first operand through the frame resolver for `target` (so `core.PARSE` can consume a variable holding JSON text under graph-preferred execution); see `runtime/engine.py` (`_exec_r_call`).
- MCP server: `scripts/ainl_mcp_server.py`, `tooling/mcp_exposure_profiles.json`
- OpenClaw quickstart: `AI_AGENT_QUICKSTART_OPENCLAW.md`
- External orchestration (host → AINL): `docs/operations/EXTERNAL_ORCHESTRATION_GUIDE.md`
- Adapter catalog: `docs/reference/ADAPTER_REGISTRY.md`, `tooling/adapter_manifest.json`
- Integration narrative: `docs/INTEGRATION_STORY.md`
