AI Native Lang

Product

A graph-native substrate for AI workers.

AI Native Lang (AINL) is a compact workflow language that compiles into a canonical graph IR. A deterministic runtime executes that graph through adapters — HTTP, databases, queues, tools, and models — so your agents behave like real infrastructure, not opaque prompt loops.

  • Deterministic execution
  • Compile once, run many
  • Capability-bound adapters
  • Graph introspection
  • Operator-ready security

Architecture

From AINL source to deterministic execution.

The repo's core pipeline — compiler_v2.py, runtime/engine.py, and adapters — is built around a single idea: compile workflows into a graph once, then run that graph cheaply as many times as you like.

01

AINL program

compact, line-oriented source

02

Compiler

parses + type- and capability-checks

03

Canonical graph IR

nodes + edges + adapters

04

Runtime engine

deterministic step execution

05

Adapters

HTTP, DB, queue, cache, tools, LLM
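The five steps above boil down to one loop: an expensive compile into a graph IR, then cheap repeated runs of that IR. A minimal sketch of that shape, with hypothetical names (`compile_source`, `run`) that are not AINL's actual API:

```python
# Illustrative compile-once / run-many sketch; names are hypothetical,
# not the real compiler_v2.py / runtime API.

def compile_source(source: str) -> dict:
    """Parse a tiny line-oriented program into a graph IR (nodes + edges)."""
    nodes = [{"id": i, "op": line.strip()}
             for i, line in enumerate(l for l in source.splitlines() if l.strip())]
    edges = [(n["id"], n["id"] + 1) for n in nodes[:-1]]   # simple linear flow
    return {"nodes": nodes, "edges": edges}

def run(graph: dict, env: dict) -> dict:
    """Execute the graph deterministically; the compile step is never repeated."""
    for node in graph["nodes"]:
        env[f"step_{node['id']}"] = node["op"]
    return env

graph = compile_source("a = 1\nb = 2")   # compile once (the expensive part)
for _ in range(3):                        # run many times (the cheap part)
    result = run(graph, {})
```

The point of the sketch is the separation of concerns: everything costly (parsing, checking) happens before the first run, and every run after that is a plain graph walk.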

Example

A real AINL monitor program.

This is adapted directly from the AINL demo programs. It checks for new emails since the last run, updates state in a cache adapter, and can trigger downstream actions when a threshold is exceeded. Each line becomes one or more nodes in the compiled graph IR.

  • R cache → adapter reads/writes external state
  • X → computes values via pure operators
  • If → explicit control flow with labeled jumps
Compact syntax (recommended):
# Compact syntax (recommended) — 66% fewer tokens
email_monitor:
  in: inbox_name

  last_check = cache.GET state "last_email_check"
  emails = email.G inbox_name
  email_count = core.len emails

  if email_count > 5:
    out "notify"

  out "ok"
Opcode syntax (advanced):
# Opcode syntax (equivalent low-level format)
L1:
  R cache.GET state "last_email_check" ->last_check
  R email.G inbox_name ->emails
  Filt new_emails emails ts > last_check
  X email_count core.len new_emails
  X over_threshold core.gt email_count 5
  If over_threshold ->L7 ->L8
L7:
  J "notify"
L8:
  J "ok"
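One way to picture the opcode form is as labeled blocks dispatched over graph nodes: pure `X` computes write into an environment, `If` picks the next label, and `J` terminates with an output. This toy interpreter is a hypothetical sketch, not the repo's runtime:

```python
# Toy interpreter for an opcode-style block graph; illustrative only,
# not runtime/engine.py.
def execute(blocks: dict, start: str, env: dict):
    label = start
    while True:
        for op, *args in blocks[label]:
            if op == "X":                      # X dest fn src... (pure compute)
                dest, fn, *srcs = args
                env[dest] = fn(*[env[s] for s in srcs])
            elif op == "If":                   # If cond ->then ->else
                cond, then_label, else_label = args
                label = then_label if env[cond] else else_label
                break                          # jump to the chosen block
            elif op == "J":                    # terminal output
                return args[0]
        else:
            return None                        # fell off the end of a block

blocks = {
    "L1": [
        ("X", "email_count", len, "emails"),
        ("X", "over_threshold", lambda n: n > 5, "email_count"),
        ("If", "over_threshold", "L7", "L8"),
    ],
    "L7": [("J", "notify")],
    "L8": [("J", "ok")],
}

execute(blocks, "L1", {"emails": list(range(9))})   # takes the "notify" branch
```

With nine emails in the environment, control flows L1 → L7; with two, L1 → L8. The trace of visited labels is exactly the path through the compiled graph.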

Components

What you actually get from the repo.

Compiler

Parses AINL source into a canonical graph IR, validating types and capabilities. Owns the language grammar and static analysis, and emits runtime-ready artifacts, with additional targets (e.g., TypeScript helpers or OpenAPI surfaces) planned over time.

See compiler_v2.py in the GitHub repo.
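The capability-validation half of that job can be pictured as a static pass over the compiled graph. This sketch assumes an IR shape (`{"nodes": [...]}` with an `adapter` field) purely for illustration; it is not how compiler_v2.py is structured:

```python
# Sketch of a static capability check over a compiled graph IR.
# The IR shape and the "adapter" field are assumptions, not the real schema.

ALLOWED = {"cache", "email", "core"}   # capabilities granted to this program

def check_capabilities(graph: dict, allowed: set) -> list:
    """Return violations: adapter-bound nodes not covered by the grants."""
    errors = []
    for node in graph["nodes"]:
        adapter = node.get("adapter")
        if adapter is not None and adapter not in allowed:
            errors.append(f"node {node['id']}: adapter '{adapter}' not granted")
    return errors

graph = {"nodes": [
    {"id": 0, "adapter": "cache"},   # granted
    {"id": 1, "adapter": "http"},    # not granted -> flagged before any run
    {"id": 2, "adapter": None},      # pure compute, needs no capability
]}

errors = check_capabilities(graph, ALLOWED)
```

Because the check runs at compile time, a program that would touch an unauthorized adapter is rejected before it executes a single node.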

Runtime

A step-by-step graph engine that executes nodes deterministically, manages variables, and coordinates adapters. It can run locally, inside services, or behind a /run API, and supports capability discovery via /capabilities.

See runtime/engine.py in the GitHub repo.
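In spirit, the engine steps through nodes one at a time, keeping a variable environment and a trace keyed by node id. The sketch below is illustrative; the node shape and adapter signatures are assumptions, not the actual engine internals:

```python
# Illustrative deterministic step engine; not the actual runtime/engine.py.
def step_run(nodes, adapters, env):
    """Run nodes in order; each step is recorded in a trace keyed by node id."""
    trace = []
    for node in nodes:
        handler = adapters[node["adapter"]]
        env[node["out"]] = handler(env, *node["args"])
        trace.append({"node": node["id"], "out": node["out"],
                      "value": env[node["out"]]})
    return env, trace

# Stub adapters standing in for real cache / core capabilities.
adapters = {
    "cache.GET": lambda env, key: {"last_email_check": 1700000000}.get(key),
    "core.len":  lambda env, var: len(env[var]),
}
nodes = [
    {"id": 0, "adapter": "cache.GET", "args": ["last_email_check"], "out": "last_check"},
    {"id": 1, "adapter": "core.len",  "args": ["emails"],           "out": "email_count"},
]

env, trace = step_run(nodes, adapters, {"emails": ["a", "b", "c"]})
```

Because every step lands in the trace with its node id, an execution log reads back directly against the graph rather than against a prompt transcript.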

Adapters

Pluggable adapters expose effectful capabilities — HTTP, SQLite, filesystem, queues, cache, memory, email, and more — each with explicit contracts and privilege tiers. Operators choose which adapters are enabled per deployment.

Default profiles like local_minimal and sandbox_network_restricted are defined in tooling/security_profiles.json.
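Per-deployment adapter gating could look like the following. The profile keys and schema here are assumed from the filename for illustration, not the actual tooling/security_profiles.json format:

```python
import json

# Hypothetical profile data shaped roughly like a security_profiles.json
# might be; the keys below are assumptions for illustration.
PROFILES = json.loads("""{
  "local_minimal":              {"adapters": ["cache", "core", "filesystem"]},
  "sandbox_network_restricted": {"adapters": ["cache", "core"]}
}""")

def build_registry(profile_name: str, available: dict) -> dict:
    """Expose only the adapters the chosen security profile enables."""
    enabled = set(PROFILES[profile_name]["adapters"])
    return {name: impl for name, impl in available.items() if name in enabled}

available = {"cache": object(), "core": object(), "http": object()}
registry = build_registry("sandbox_network_restricted", available)
# "http" never enters the registry, so no graph node can reach the network.
```

The design choice this illustrates: the runtime never consults a policy at call time; adapters outside the profile simply do not exist in the registry handed to the engine.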

Why it matters

From clever demos to infrastructure-grade agents.

Compile once, run many

Complex AINL workflows pay a one-time compile cost (tens of thousands of tokens) and then run without re-invoking the model unless you explicitly add LLM calls. In internal benchmarks, that translates to roughly 3–5× lower recurring token spend on non-trivial monitors and agents.

Deterministic, inspectable behavior

Because AINL is a graph, not a conversation log, you can diff, version, and reason about workflows like code. Execution traces map directly back to graph nodes instead of opaque prompt history.
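Because the unit of change is a graph node, a workflow diff can be as simple as comparing node maps. A small sketch over a hypothetical IR shape:

```python
# Diff two graph IRs by node id; the {"nodes": [...]} shape is assumed.
def diff_graphs(old: dict, new: dict) -> dict:
    old_nodes = {n["id"]: n["op"] for n in old["nodes"]}
    new_nodes = {n["id"]: n["op"] for n in new["nodes"]}
    return {
        "added":   sorted(set(new_nodes) - set(old_nodes)),
        "removed": sorted(set(old_nodes) - set(new_nodes)),
        "changed": sorted(i for i in old_nodes.keys() & new_nodes.keys()
                          if old_nodes[i] != new_nodes[i]),
    }

v1 = {"nodes": [{"id": 0, "op": "cache.GET"}, {"id": 1, "op": "core.len"}]}
v2 = {"nodes": [{"id": 0, "op": "cache.GET"}, {"id": 1, "op": "core.gt"},
                {"id": 2, "op": "If"}]}

delta = diff_graphs(v1, v2)
```

A review of a workflow change then reads as "node 1 changed, node 2 added" instead of a wall of regenerated prompt text.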

Fits real stacks

Use AINL alongside existing agent frameworks and orchestration layers. Let your LLM choose or refine AINL programs, but keep the actual execution, side effects, and state transitions under a deterministic runtime with clear adapter contracts.

For a deeper dive into how this plays out in practice, see the Graph-Native Agents vs Prompt-Loop Agents and Apollo + AINL case studies.