From conversational AI to networked intelligence
Large language models are incredible at conversation, but building reliable, multi-step AI systems with them often feels like herding cats: endless prompt loops, flaky state management, and zero guarantees of repeatability.
Enter AINL (AI Native Language) — a compact, graph-canonical DSL that compiles AI workflows into a deterministic intermediate representation (IR). No more “vibe-based” agents. You get strict validation, state/memory/tool adapters, compile-once-run-many execution, and seamless emission to multiple backends.
Now pair it with Hyperspace — the peer-to-peer network described in the recent Proof of Intelligence (PoI) paper. Here, “mining” isn’t wasteful hashing or staking capital. It’s running real AI experiments inside a sandboxed Agent Virtual Machine (AVM), proving execution with zkWASM (Groth16 on BN254), committing results to a shared Research DAG, and letting collective intelligence emerge as a network property.
The beautiful part? AINL integrates with Hyperspace at a deep, architectural level — not as an afterthought, but as the natural high-level programming layer the PoI ecosystem was waiting for.
What is AINL?
AINL turns vague LLM prompts into structured, auditable workflows:
- Graph-first design: Every `.ainl` file compiles to a canonical IR of nodes and edges.
- Deterministic runtime: Executes the IR with adapters for memory, tools, HTTP effects, and more — no repeated LLM calls for control flow.
- Strict validation: Enforces single-exit discipline, reachability, adapter contracts, and capability grants.
- Multi-target emission: Generate code for LangGraph, Temporal, FastAPI, React, Prisma… and, crucially, Hyperspace.
- Sandbox ready: First-class support for sandboxes like the Hyperspace AVM.
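To make the validation idea concrete, here is a minimal sketch of single-exit and reachability checks over a graph IR. The node/edge schema below is an illustrative assumption for this post, not AINL's actual IR format:

```python
from collections import deque

# Hypothetical IR shape (assumed for illustration): one entry node,
# nodes keyed by id, and a list of directed edges.
ir = {
    "entry": "L1",
    "nodes": {"L1": {"op": "Set"}, "vm": {"op": "R"}, "out": {"op": "J"}},
    "edges": [("L1", "vm"), ("vm", "out")],
}

def validate(ir):
    """Check reachability and single-exit discipline on a toy graph IR."""
    succ = {}
    for src, dst in ir["edges"]:
        succ.setdefault(src, []).append(dst)
    # Reachability: BFS from the entry node.
    seen, queue = set(), deque([ir["entry"]])
    while queue:
        n = queue.popleft()
        if n in seen:
            continue
        seen.add(n)
        queue.extend(succ.get(n, []))
    unreachable = set(ir["nodes"]) - seen
    # Single exit: exactly one node with no outgoing edges.
    exits = [n for n in ir["nodes"] if not succ.get(n)]
    return {"unreachable": unreachable, "exits": exits,
            "ok": not unreachable and len(exits) == 1}

print(validate(ir)["ok"])  # True for this toy graph
```

A real validator would also enforce adapter contracts and capability grants, but the graph-shape checks are the part that makes "strict validation" a mechanical property rather than a code-review habit.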
Recent updates (as of March 2026) ship a Hyperspace-oriented emitter (a standalone Python agent with embedded IR), trajectory JSONL logging, and local `vector_memory` / `tool_registry` adapters — see the implementation links below.
Where this shows up in the AINL repo
These are first-class paths in sbhooley/ainativelang, not hand-wavy claims:
- Hyperspace agent emitter: `compiler_v2.py` (`AICodeCompiler.emit_hyperspace_agent` — embeds compiled IR, wires `RuntimeEngine`, registers adapters).
- Demo workflow: `examples/hyperspace_demo.ainl`.
- Adapters: `adapters/vector_memory.py`, `adapters/tool_registry.py`.
- Trajectory behavior: `docs/trajectory.md` and `docs/emitters/README.md`.
- AVM sandbox config helper (not WASM emit): `ainl generate-sandbox-config <file.ainl> --target avm` — documented in the repo README and `docs/getting_started/README.md`; the IR can carry `execution_requirements` / `avm_policy_fragment` for policy handoff (see `docs/RELEASE_NOTES.md`).
Installation is straightforward:
```shell
git clone https://github.com/sbhooley/ainativelang.git
# Bootstrap Python 3.10 venv + install
ainl-validate examples/hello.ainl --strict
```
Hyperspace & Proof of Intelligence: a quick primer
The bullets below are the PoI paper’s model of the network, not something AINL’s repository implements end-to-end. Treat them as the target architecture; AINL today focuses on deterministic graphs, emit, and local adapters.
In the PoI model, each node runs a continuous experiment loop, one pass per 10-minute epoch:
- Execute an AI task (training step, inference batch, research trial) inside the AVM.
- Prove it with zkWASM — producing a tiny, verifiable Groth16 proof.
- Share the result as a content-addressed node in the Research DAG via GossipSub/libp2p.
- Adopt successful experiments from the DAG, compounding network intelligence.
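The loop above can be sketched in plain Python. The content addressing mirrors the paper's description (hash-identified DAG nodes), but every function here is a local stand-in, not the Hyperspace protocol:

```python
import hashlib
import json

def content_address(node: dict) -> str:
    """Content-address a result: the node id is the SHA-256 of its canonical JSON."""
    blob = json.dumps(node, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def epoch_step(dag: dict, parents: list, run_experiment) -> str:
    """One illustrative epoch: run, address, link to parents, store in a local DAG.
    In the real network the node would be proven and gossiped, not kept locally."""
    node = {"parents": parents, "result": run_experiment()}
    node_id = content_address(node)
    dag[node_id] = node
    return node_id

dag = {}
root = epoch_step(dag, [], lambda: {"loss": 0.9})
child = epoch_step(dag, [root], lambda: {"loss": 0.7})  # adopts the root as parent
print(dag[child]["parents"] == [root])
```

The useful property this sketch demonstrates is that identical results always get identical ids, so two honest nodes that run the same deterministic experiment converge on the same DAG node without coordination.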
The AVM runs WASM-compiled code deterministically. Proofs commit on-chain via the `AGENTCOMMIT` opcode. Rewards flow to nodes based on contribution quality and adoption rate — not just raw compute.
Security comes from the Intelligence Opportunity Cost Bound: attacking the network costs more in lost AI progress than any potential gain.
How AINL fits into the Hyperspace stack
AINL’s graph IR maps cleanly onto what you need for PoI-style provenance: deterministic execution traces, strict validation, and packaged agents. The zkWASM / on-chain proof / DAG gossip path is defined by Hyperspace and the PoI paper; the open-source AINL repo today ships the Python Hyperspace emitter (embedded IR + `RuntimeEngine`) and optional `hyperspace_sdk` import scaffolding — see `docs/RELEASE_NOTES.md` for the exact status.
- AVM alignment — Use `ainl generate-sandbox-config <file.ainl> --target avm` for sandbox config fragments and IR metadata (`execution_requirements`, `avm_policy_fragment`). That is policy/config handoff, not “AINL compiles every workflow to WASM” in the current emitter.
- Hyperspace agent emit (today) — `--emit hyperspace` produces a single-file Python module with base64-embedded IR, `RuntimeEngine`, and the `vector_memory` / `tool_registry` adapters — the same code path as `emit_hyperspace_agent` in the compiler.
- Research DAG (conceptual) — A validated AINL program plus trajectory logs (`AINL_LOG_TRAJECTORY`, `docs/trajectory.md`) is the kind of artifact you would commit to a DAG for provenance; wiring that to the live network is outside this repo.
- Experiment loop — Compile once, then deterministic runtime execution with trajectory logging; the prove → share → adopt loop is the PoI network design.
- P2P agent economy — The emitter gives you a deployable agent module; OpenClaw skills and economic rewards are ecosystem layers on top of Hyperspace, not fully specified in the AINL tree alone.
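The single-file packaging trick mentioned above (base64-embedded IR decoded at startup) is easy to illustrate generically. The variable names and payload shape below are assumptions for this sketch, not the emitter's actual output:

```python
import base64
import json

# An emitted agent module might carry its compiled IR as a constant like this
# (toy payload; a real emit would embed the full compiled graph).
ir = {"entry": "L1", "nodes": {"L1": {"op": "J"}}, "edges": []}
EMBEDDED_IR = base64.b64encode(json.dumps(ir).encode()).decode()

def load_embedded_ir(blob: str) -> dict:
    """Decode the base64 JSON payload back into an IR dict at startup."""
    return json.loads(base64.b64decode(blob))

loaded = load_embedded_ir(EMBEDDED_IR)
print(loaded == ir)  # round-trips losslessly
```

The point of embedding rather than referencing a file is that the agent module becomes a single self-contained artifact: copy one `.py` file and the compiled workflow travels with it.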
In practice:
```shell
ainl-validate examples/hyperspace_demo.ainl \
  --strict \
  --emit hyperspace \
  -o demo_agent.py
```
From the repo root (so `runtime/` and `adapters/` resolve), run `python3 demo_agent.py`. That exercises the AINL runtime path the emitter is built for; connecting it to a Hyperspace node with zk proving is the next integration step once the official SDK/runtime is available.
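Trajectory logging in JSONL is worth demystifying: it is simply one JSON object per executed step, appended as a line, which makes a run auditable with a line-by-line read. The record fields below are illustrative, not the schema from `docs/trajectory.md`:

```python
import io
import json

# Hypothetical trajectory: one JSON record per executed node.
# io.StringIO stands in for an append-mode log file.
log = io.StringIO()
for step, record in enumerate([{"node": "L1", "op": "Set"},
                               {"node": "out", "op": "J"}]):
    log.write(json.dumps({"step": step, **record}) + "\n")

# Auditing after the run is a line-by-line parse:
records = [json.loads(line) for line in log.getvalue().splitlines()]
print([r["node"] for r in records])  # ['L1', 'out']
```

Because each line is independent JSON, a partially written log is still parseable up to the last complete line — a useful property for provenance records that may be produced inside a sandbox.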
Hyperspace ↔ AINL at a glance
| Hyperspace component | AINL integration level | Benefit |
| --- | --- | --- |
| AVM + zkWASM | `generate-sandbox-config --target avm` + IR policy; Hyperspace emit is Python+IR today | Deterministic execution today; zk path on Hyperspace side |
| Research DAG | Graph IR + trajectory JSONL as provenance inputs | Auditable runs; DAG wiring is network-side |
| Experiment loop | Compile-once runtime + adapters | Efficient local/agent loops; epoch timing is PoI-network |
| Agent economy / OpenClaw | `--emit hyperspace` + skill packaging story | Reusable modules; rewards via Hyperspace |
| Security & auditability | Strict validation + capability model | Aligns with policy you attach to sandboxes |
End-to-end zk-proof generation from every AINL compile is still an integration milestone; the emitter and IR are the parts that exist in-tree today.
Example: hyperspace_demo.ainl
The repo ships a strict-safe demo that exercises vector memory and tool registry adapters:
```
# examples/hyperspace_demo.ainl
# Demo: common modules + local vector_memory / tool_registry (strict-safe dotted verbs).
#
# ainl-validate examples/hyperspace_demo.ainl --strict
# ainl run examples/hyperspace_demo.ainl --enable-adapter vector_memory --enable-adapter tool_registry --log-trajectory --json
# ainl-validate examples/hyperspace_demo.ainl --strict --emit hyperspace -o demo_agent.py
include "modules/common/guard.ainl" as guard
include "modules/common/session_budget.ainl" as sb
L1:
Set guard_skip false
Set guard_tokens_used 0
Set guard_max_tokens 500
Set guard_elapsed_sec 0
Set guard_max_duration_sec 3600
Set guard_condition_ok true
Set guard_audit false
Call guard/ENTRY ->g_out
Set budget_tokens_start 0
Set budget_tokens_delta 1
Set budget_max_tokens 10000
Set budget_time_start 0
Set budget_time_delta_sec 0
Set budget_max_duration_sec 86400
Set budget_log_memory false
Call sb/ENTRY ->b_out
R vector_memory.UPSERT "demo" "note" "n1" "hyperspace demo vector text" "{}" ->vm_u
R vector_memory.SEARCH "vector" 3 ->vm_hits
R tool_registry.REGISTER "demo_tool" "Hyperspace demo capability" "{}" ->tr_reg
R tool_registry.LIST "." ->tr_list
R tool_registry.DISCOVER "." ->tr_disc
R core.STRINGIFY vm_hits ->s_hits
R core.STRINGIFY tr_list ->s_list
R core.STRINGIFY tr_disc ->s_disc
R core.concat g_out " | " ->p1
R core.concat p1 b_out ->p2
R core.concat p2 " | " ->p3
R core.concat p3 s_hits ->p4
R core.concat p4 " | " ->p5
R core.concat p5 s_list ->p6
R core.concat p6 " | " ->p7
R core.concat p7 s_disc ->out
J out
```
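The dotted verbs in the demo (e.g. `vector_memory.SEARCH`) imply an adapter contract: a named object exposing verbs the runtime can dispatch to by string. A minimal, hypothetical version of that contract might look like the following; the class, method signatures, and naive substring "similarity" are all illustrative assumptions, not the code in `adapters/vector_memory.py`:

```python
class VectorMemoryAdapter:
    """Toy stand-in for a vector-memory adapter: stores text, matches by substring."""

    def __init__(self):
        self._store = {}  # (namespace, kind, key) -> text

    def UPSERT(self, namespace, kind, key, text, metadata):
        self._store[(namespace, kind, key)] = text
        return "ok"

    def SEARCH(self, query, top_k):
        hits = [t for t in self._store.values() if query in t]
        return hits[:top_k]

adapters = {"vector_memory": VectorMemoryAdapter()}

def dispatch(verb, *args):
    """Resolve a dotted verb the way a runtime might, then call the adapter method."""
    name, method = verb.split(".")
    return getattr(adapters[name], method)(*args)

dispatch("vector_memory.UPSERT", "demo", "note", "n1",
         "hyperspace demo vector text", "{}")
print(dispatch("vector_memory.SEARCH", "vector", 3))  # ['hyperspace demo vector text']
```

Keeping the contract this narrow is what lets a validator check adapter usage statically: every `R adapter.VERB …` line in a workflow resolves to a known name and arity before anything runs.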
Why this combination matters
Most decentralized AI projects focus on either raw model serving or token incentives. Hyperspace + AINL does something rarer: it gives developers a productive, high-level language to author the very experiments that secure and grow the network.
- You write clean, validated workflows in AINL.
- You emit agents and trajectories that Hyperspace-class runtimes can treat as first-class experiment artifacts (full on-chain proving follows the PoI stack).
- The network design rewards useful intelligence — not just uptime or stake.
This is how we move from isolated “smart chatbots” to a living, evolving intelligence economy.
Try it today
- Clone AINL on GitHub and explore `examples/hyperspace_demo.ainl`.
- Read the full PoI paper: Proof of Intelligence (PDF).
- Spin up a local Hyperspace node and emit your first AINL workflow.
- Join the conversation — follow @sbhooley and @AINativeLang; the AINL repo is actively evolving (latest release 1.2.8 as of March 2026).
The future of AI isn’t just bigger models. It’s structured, verifiable, and collectively owned.
What workflow will you build first on this stack?
