How to Connect AINL to Claude (Anthropic API)
AINL programs call external APIs through the `http` adapter, and Claude is exposed as a plain HTTP API. Connecting AINL to Claude therefore means writing an AINL workflow that:
- Builds a request payload (headers, body)
- Calls `http.Post` against Anthropic's API endpoint
- Branches or returns based on the response
This guide shows you exactly how to do that, using real adapter syntax and real Anthropic API shapes.
Prerequisites
- AINL installed and working (`ainl-validate --version` passes)
- An Anthropic API key from console.anthropic.com
- The `http` adapter enabled at runtime (it is disabled by default for security; you must pass `--enable-adapter http` to `ainl run`)
How the http adapter works
The http adapter is part of AINL's canonical core. Its Post verb signature is:
```
R http.Post <url> <body_var> [headers_var] [timeout_s] ->resp
```
- `url`: string URL (quoted literal or frame variable)
- `body_var`: a frame variable holding a JSON-serializable body
- `headers_var`: optional frame variable holding a string-to-string dict of HTTP headers
- `timeout_s`: optional request timeout in seconds
- `resp`: frame variable that receives the normalized response envelope
The response envelope has these fields:
| Field | Type | Meaning |
|---|---|---|
| ok | bool | true if the HTTP call completed with a 2xx status |
| status_code | int or null | HTTP status code returned |
| error | str or null | Transport error message, if any |
| body | any | Decoded response body (JSON or string) |
| url | str | URL that was called |
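To make the envelope concrete, here is a minimal Python sketch of how a consumer might interpret those fields. The `summarize` helper and the sample envelope are illustrative, not part of AINL; only the field names and types come from the table above.

```python
def summarize(envelope: dict) -> str:
    # 2xx responses set ok=True; pure transport failures set error
    # and leave status_code as None.
    if envelope["ok"]:
        return f"HTTP {envelope['status_code']}: success"
    if envelope["error"] is not None:
        return f"transport error: {envelope['error']}"
    return f"HTTP {envelope['status_code']}: failed"

sample = {
    "ok": True,
    "status_code": 200,
    "error": None,
    "body": {"id": "msg_123"},
    "url": "https://api.anthropic.com/v1/messages",
}
print(summarize(sample))  # HTTP 200: success
```

Note that `ok` being false covers two distinct cases: a non-2xx HTTP status (check `status_code`) and a transport failure that never produced a response (check `error`).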
Security note: the `http` adapter requires an explicit allowlist. At runtime, pass `--enable-adapter http --http-allow-host api.anthropic.com`. Never widen the host allowlist beyond what your workflow actually needs.
The workflow
Here is a complete AINL program that calls Claude's Messages API:
```
S core web /api
E /ask P ->L_ask ->resp

L_ask:
  Set headers {"x-api-key": "YOUR_ANTHROPIC_API_KEY", "anthropic-version": "2023-06-01", "content-type": "application/json"}
  Set body {"model": "claude-opus-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
  J resp
```
Breaking it down:
- `S core web /api`: declare a web service, path prefix `/api`
- `E /ask P ->L_ask ->resp`: POST endpoint `/api/ask`, handled by label `L_ask`, returns variable `resp`
- `Set headers {...}`: build the required Anthropic headers as a frame dict
- `Set body {...}`: build the messages payload
- `R http.Post ... ->resp`: POST to Claude, store the full response envelope in `resp`
- `J resp`: return `resp` as JSON
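For comparison, the same request the workflow sends can be sketched in plain Python with the standard library. The headers and payload mirror the AINL example exactly; the `ask_claude` helper and the `urllib` plumbing are this sketch's own, not anything AINL generates.

```python
import json
import urllib.request

API_KEY = "YOUR_ANTHROPIC_API_KEY"  # placeholder; substitute a real key

# The same frame dicts the workflow builds with Set headers / Set body.
headers = {
    "x-api-key": API_KEY,
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
body = {
    "model": "claude-opus-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "What is AINL?"}],
}

def ask_claude() -> dict:
    # Equivalent of: R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
    req = urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

If the AINL version and this sketch produce different results, diff the headers first: the `anthropic-version` and `content-type` headers are both required by the Messages API.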
Keep the API key out of source
Hard-coding credentials in AINL source is fine for local experiments, but don't commit them. The better pattern is to pass the key in at runtime via the initial frame:
```
ainl run ask.ainl --json \
  --enable-adapter http \
  --http-allow-host api.anthropic.com \
  --http-timeout-s 30 \
  --frame '{"ANTHROPIC_KEY": "sk-ant-..."}'
```
Then reference it in the workflow as a variable:
```
S core web /api
E /ask P ->L_ask ->resp

L_ask:
  Set headers {"x-api-key": ANTHROPIC_KEY, "anthropic-version": "2023-06-01", "content-type": "application/json"}
  Set body {"model": "claude-opus-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
  J resp
```
In strict mode, bare identifiers in read positions are treated as variable references, so `ANTHROPIC_KEY` (no quotes) is read from the frame rather than being treated as a literal string.
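If you generate the `--frame` argument from a script, letting a JSON serializer do the quoting avoids shell-escaping mistakes. A small sketch, assuming the key lives in an `ANTHROPIC_API_KEY` environment variable (the fallback value here is a placeholder):

```python
import json
import os

# Serialize the initial frame; json.dumps handles quoting/escaping,
# so the key never needs manual shell escaping.
key = os.environ.get("ANTHROPIC_API_KEY", "sk-ant-placeholder")
frame = json.dumps({"ANTHROPIC_KEY": key})
print(frame)
```

The resulting string is what you pass verbatim to `--frame`.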
Branching on the response
Because AINL gives you explicit control flow, you can branch on whether Claude's response succeeded:
```
L_ask:
  Set headers {"x-api-key": ANTHROPIC_KEY, "anthropic-version": "2023-06-01", "content-type": "application/json"}
  Set body {"model": "claude-opus-4-5", "max_tokens": 1024, "messages": [{"role": "user", "content": "What is AINL?"}]}
  R http.Post "https://api.anthropic.com/v1/messages" body headers ->resp
  If resp.ok ->L_success ->L_error

L_success:
  J resp.body

L_error:
  Set out {"error": "claude_call_failed", "status": resp.status_code}
  J out
```
This is the key advantage of AINL over prompt-loop agents: the branching logic is compiled into the graph before execution — the model doesn't decide whether to check for errors, the workflow enforces it.
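The success/error split above can be mirrored in ordinary Python to make the routing explicit. The `route` helper is illustrative only; the field names come from the envelope table earlier in this guide.

```python
def route(resp: dict) -> dict:
    # Mirrors: If resp.ok ->L_success ->L_error
    if resp["ok"]:
        return resp["body"]  # L_success: return Claude's reply as-is
    # L_error: return a structured failure instead of the raw envelope
    return {"error": "claude_call_failed", "status": resp["status_code"]}

print(route({"ok": False, "status_code": 429, "error": None, "body": None, "url": ""}))
# {'error': 'claude_call_failed', 'status': 429}
```

The difference is that in AINL this branch is a fixed edge in the compiled graph, whereas in imperative code nothing stops a later edit from deleting the check.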
Validate and run
```
# Validate
ainl-validate ask.ainl --strict

# Run with http adapter + host allowlist + API key from env
ainl run ask.ainl --json \
  --enable-adapter http \
  --http-allow-host api.anthropic.com \
  --http-timeout-s 30 \
  --frame "{\"ANTHROPIC_KEY\": \"$ANTHROPIC_API_KEY\"}"
```
On success, the output is a JSON envelope whose `resp.body` field contains Claude's Messages API response.
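To pull the reply text out of that body, you can walk the Messages API response shape: the assistant's reply lives in a list of content blocks, with text blocks carrying a `text` field. A minimal sketch (`extract_text` and the sample body are illustrative):

```python
def extract_text(message: dict) -> str:
    # Concatenate every text block in the assistant message.
    return "".join(
        block["text"] for block in message["content"] if block["type"] == "text"
    )

sample_body = {
    "id": "msg_01",
    "type": "message",
    "role": "assistant",
    "content": [{"type": "text", "text": "AINL is a workflow language."}],
    "stop_reason": "end_turn",
}
print(extract_text(sample_body))  # AINL is a workflow language.
```

Joining over all text blocks (rather than grabbing `content[0]`) is deliberate: responses can contain more than one content block.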
Using the runner service (HTTP API)
If you're running the AINL runner service (`scripts/runtime_runner_service.py`), submit the workflow via the `/run` endpoint:
```
uvicorn scripts.runtime_runner_service:app --port 8000
```
Then POST your workflow:
```
curl -X POST http://localhost:8000/run \
  -H "Content-Type: application/json" \
  -d '{
    "code": "S core web /api\nE /ask P ->L_ask ->resp\n\nL_ask:\n Set headers {\"x-api-key\": \"sk-ant-...\", \"anthropic-version\": \"2023-06-01\", \"content-type\": \"application/json\"}\n Set body {\"model\": \"claude-opus-4-5\", \"max_tokens\": 1024, \"messages\": [{\"role\": \"user\", \"content\": \"What is AINL?\"}]}\n R http.Post \"https://api.anthropic.com/v1/messages\" body headers ->resp\n J resp",
    "strict": true,
    "allowed_adapters": ["core", "http"],
    "adapters": {
      "http": {
        "allow_hosts": ["api.anthropic.com"],
        "timeout_s": 30
      }
    }
  }'
```
The runner enforces its own security floor — even if you pass a broader allowlist, the server-level grant caps what adapters can do.
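If you'd rather submit from Python than curl, the same `/run` payload can be posted with the standard library. The `submit` helper and the short `WORKFLOW` string here are placeholders of this sketch's own; paste your real workflow source and adjust the URL if your runner isn't on the default port.

```python
import json
import urllib.request

# Placeholder workflow; replace with your full AINL source.
WORKFLOW = "S core web /api\nE /ask P ->L_ask ->resp\n\nL_ask:\n J resp"

# Mirrors the curl example: code + strict flag + adapter grants.
payload = {
    "code": WORKFLOW,
    "strict": True,
    "allowed_adapters": ["core", "http"],
    "adapters": {"http": {"allow_hosts": ["api.anthropic.com"], "timeout_s": 30}},
}

def submit(url: str = "http://localhost:8000/run") -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keep the adapter grants in this payload as narrow as the CLI ones: the runner's own security floor still applies, but there's no reason to request more than `api.anthropic.com`.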
What's next
- How to connect AINL to OpenAI — same pattern, different endpoint and headers
- How to use AINL with Cursor or Claude Code (MCP) — let your AI coding agent call AINL tools directly
- Your first AINL workflow — if you haven't done the basics yet
