
Sandboxed AINL Runtimes: Profiles That Don’t Leak

Practical guidance for running AI Native Lang in no-network, controlled-egress, and operator-full modes without surprising your security team.

March 17, 2026 · 7 min read
#sandbox #runtime #security-profiles

TL;DR

If you give an AI runtime full network and filesystem access, it will eventually do something your security team hates.

AINL’s answer is sandboxed runtimes backed by named execution profiles:

  • local_minimal: no I/O, no network — perfect for local dev and dry-runs;
  • sandbox_compute_and_store: local compute + storage, no outbound network;
  • sandbox_network_restricted: controlled HTTP egress to an allowlisted set of hosts;
  • operator_full: everything enabled, but only in tightly controlled, operator-run environments.

In this post we’ll look at how these profiles work, how they map to real container and network settings, and how they keep AINL from becoming yet another “god process” in your stack.


Why sandboxed AI runtimes matter

As AI agents move into production, many early systems make two risky choices:

  1. They run everything in a single, broadly privileged process.
  2. They trust prompts more than they trust isolation.

That leads to:

  • runtimes with unfettered network access;
  • agents that can accidentally or maliciously call dangerous tools;
  • difficulties proving that “this environment cannot exfiltrate X or delete Y”.

AINL’s philosophy is:

Treat the runtime as a workflow engine, not your primary security boundary. Lock it into restricted sandboxes and make its privileges explicit.

The sandbox profiles in tooling/security_profiles.json encode this for you.
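As a rough illustration of that contract, a profile lookup and adapter check might look like the sketch below. The dictionary mirrors the profiles described in this post; the actual schema and field names of security_profiles.json may differ.

```python
# Hypothetical, simplified mirror of tooling/security_profiles.json --
# the real file's schema and field names may differ.
PROFILES = {
    "local_minimal": {
        "allowed_adapters": ["core"],
        "forbidden_tiers": ["local_state", "network", "operator_sensitive"],
    },
    "sandbox_compute_and_store": {
        "allowed_adapters": ["core", "sqlite", "fs", "wasm", "memory", "cache"],
        "forbidden_tiers": ["network", "operator_sensitive"],
    },
}

def adapter_allowed(profile_name: str, adapter: str) -> bool:
    """Deny by default: an adapter runs only if the profile allowlists it."""
    profile = PROFILES[profile_name]
    return adapter in profile["allowed_adapters"]

print(adapter_allowed("local_minimal", "http"))              # → False
print(adapter_allowed("sandbox_compute_and_store", "fs"))    # → True
```

The deny-by-default shape matters: a new adapter added to the runtime stays unusable in every environment until a profile explicitly names it.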


Four profiles, four deployment stories

AINL ships with four main security profiles that double as sandbox recipes.

1. local_minimal — safe local dev

Intent: give developers and agents a way to validate graphs without any side effects.

Characteristics:

  • adapters: core only;
  • forbidden privilege tiers: local_state, network, operator_sensitive;
  • runtime limits: small, conservative caps on steps, depth, adapter calls, and time.

Container / host expectations:

  • can run as a simple process on a laptop;
  • no special network or filesystem privileges needed;
  • useful for:
    • graph introspection,
    • training data generation,
    • spec conformance tests.
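Those "small, conservative caps" can be enforced mechanically. A minimal sketch of the idea follows; the cap names and default values here are assumptions for illustration, not AINL's actual configuration:

```python
class RunLimits:
    """Conservative execution caps of the kind local_minimal implies.
    Default values are illustrative, not AINL's real defaults."""

    def __init__(self, max_steps=100, max_adapter_calls=0):
        self.max_steps = max_steps
        self.max_adapter_calls = max_adapter_calls
        self.steps = 0
        self.adapter_calls = 0

    def tick(self):
        """Count one workflow step; abort the run once the cap is exceeded."""
        self.steps += 1
        if self.steps > self.max_steps:
            raise RuntimeError("step limit exceeded")

    def call_adapter(self):
        """With max_adapter_calls=0, any adapter call aborts immediately."""
        self.adapter_calls += 1
        if self.adapter_calls > self.max_adapter_calls:
            raise RuntimeError("adapter call limit exceeded")

limits = RunLimits(max_steps=3)
for _ in range(3):
    limits.tick()   # within the cap
# a fourth tick() would raise RuntimeError
```

Caps like these turn "this dry-run cannot have side effects" from a convention into something the runtime can prove by construction.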

2. sandbox_compute_and_store — local compute, no network

Intent: allow complex workflows with local state, but absolutely no outbound network.

Characteristics:

  • adapters: core, sqlite, fs, wasm, memory, cache;
  • forbidden adapters: http, agent, svc, email, calendar, social, db, etc.;
  • forbidden privilege tiers: network, operator_sensitive.

Container / host expectations:

  • block outbound network at the container or VM level:
    • no default route, or
    • strict firewall rules denying egress;
  • mount a dedicated sandbox root for filesystem access;
  • set CPU/memory/time limits per container.

This is ideal for:

  • air-gapped or offline environments;
  • local-only stateful workflows;
  • sanitizing and transforming data before it ever touches external systems.
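It is worth verifying the isolation from inside the container rather than trusting configuration alone. A small probe along these lines (the address is a placeholder) should fail in a correctly locked-down sandbox_compute_and_store environment:

```python
import socket

def has_egress(host: str, port: int = 443, timeout: float = 2.0) -> bool:
    """Return True if an outbound TCP connection succeeds.
    In a correctly configured no-network container this must be False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 192.0.2.1 (TEST-NET-1) is reserved and unroutable, so this should
# report False in any environment -- useful as a sanity check of the probe.
print(has_egress("192.0.2.1", timeout=1.0))
```

Running a probe like this against a real external host as part of container startup gives you a fail-fast signal if someone accidentally deploys a stateful workflow with a default route still in place.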

3. sandbox_network_restricted — controlled egress

Intent: allow an AINL runtime to talk to a small set of approved services while keeping everything else dark.

Characteristics:

  • adapters: core, sqlite, fs, wasm, memory, cache, http, tools, queue;
  • forbidden adapters: high-risk or operator-sensitive surfaces (email, calendar, broad “db” surfaces, social APIs);
  • forbidden privilege tiers: operator_sensitive.

Container / host expectations:

  • enforce HTTP host allowlists at:
    • firewall / security group level,
    • or through egress proxies and DNS rules;
  • keep secrets scoped to the exact targets needed;
  • log outbound connections for audit.

Use this for:

  • calling a limited set of internal APIs;
  • work that needs some network I/O but should never talk to the public internet;
  • environments where you want strong guardrails but not complete isolation.
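Even with firewalls doing the real enforcement, an application-layer allowlist check before any outbound call is cheap insurance and produces better error messages. A sketch, with placeholder hostnames:

```python
from urllib.parse import urlparse

# Placeholder allowlist -- in practice this would come from the
# profile / deployment configuration, not a hardcoded set.
ALLOWED_HOSTS = {"api.internal.example", "artifacts.internal.example"}

def check_egress(url: str) -> str:
    """Raise unless the URL's host is explicitly allowlisted."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"egress to {host!r} is not allowlisted")
    return url

check_egress("https://api.internal.example/v1/runs")   # passes
# check_egress("https://example.com/") would raise PermissionError
```

The firewall remains the actual boundary; this layer just ensures misconfigured workflows fail loudly inside the runtime instead of silently timing out at the network edge.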

4. operator_full — trusted operator deployments

Intent: let experienced operators assemble powerful environments while keeping responsibility clear.

Characteristics:

  • adapters: operator-defined allowlist (everything is available in principle);
  • no default forbidden privilege tiers;
  • runtime limits are generous but still present.

Container / host expectations:

  • strong network segmentation and egress policies;
  • mature secrets management and rotation;
  • separate policy/approval engines on top of AINL (e.g. business logic gates).

This profile is not meant for:

  • unmanaged developer laptops;
  • raw multi-tenant SaaS;
  • “let’s see what happens” experiments.

Use it only where you already have serious controls in place.


From profiles to real sandboxes

The docs on SANDBOX_EXECUTION_PROFILE and RUNTIME_CONTAINER_GUIDE map these profiles into concrete environment guidance. In practice, this looks like:

  • Docker / container runtimes:

    • non-root user with minimal Linux capabilities;
    • read-only root filesystem where possible;
    • explicit volume mounts for sandboxed data;
    • network modes and firewall rules that match the chosen profile.
  • OS-level sandboxes:

    • sandbox-exec / seatbelt on macOS,
    • bubblewrap / nsjail on Linux,
    • or equivalent host-provided isolation.
  • Egress controls:

    • firewall rules with host allowlists;
    • proxy / gateway layers that see and log all traffic;
    • in-process, tethered-style socket-guard libraries for language-level enforcement.

AINL doesn’t try to reinvent OS-level isolation — it assumes you have it and gives you a profile contract to wire into it.
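The socket-guard idea mentioned above can be sketched in a few lines: intercept connection attempts in-process and deny anything off the allowlist. This is purely illustrative — a production guard must also cover DNS resolution, redirects, raw sockets, and subprocesses, which is exactly why the firewall layer stays mandatory:

```python
import socket

# Placeholder allowlist of (host, port) pairs.
ALLOWED = {("api.internal.example", 443)}

_real_create_connection = socket.create_connection

def guarded_create_connection(address, *args, **kwargs):
    """Refuse outbound connections that are not explicitly allowlisted."""
    if tuple(address) not in ALLOWED:
        raise PermissionError(f"blocked egress to {address}")
    return _real_create_connection(address, *args, **kwargs)

socket.create_connection = guarded_create_connection

try:
    socket.create_connection(("example.com", 80))
except PermissionError as exc:
    print(exc)   # → blocked egress to ('example.com', 80)
```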


Threat model: what sandboxing defends against

The SAFE_USE_AND_THREAT_MODEL docs outline several risk categories AINL is designed to mitigate when used with sandbox profiles:

  • accidental data exfiltration via misconfigured adapters;
  • prompt injection that tells an agent to call dangerous tools;
  • “confused deputy” issues where a low-trust request rides on a high-privilege runtime;
  • runaway workflows that eat CPU, memory, or time.

Sandboxed profiles address these by:

  • blocking whole classes of adapters in some environments;
  • forcing network egress decisions into infra, not the runtime code;
  • enforcing hard limits on steps, depth, adapter calls, and time;
  • making it cheap to spin up multiple specialized runtimes instead of one all-powerful one.

Observability and audit logging

Sandboxing is only half the story; you also need to see what’s happening.

AINL’s audit logging guidance recommends:

  • structured JSON logs for each run, including:
    • which profile and capability grant were in effect;
    • which adapters were invoked and with what high-level parameters;
    • whether any policy violations occurred;
  • shipping logs to your existing observability stack (e.g. ELK, Datadog, Grafana);
  • correlating AINL runs with upstream/downstream services via trace IDs.
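A structured run record along those lines might be emitted like this. The field names are illustrative, not AINL's actual log schema:

```python
import json
import time
import uuid

def audit_record(profile, adapters_invoked, violations, trace_id=None):
    """Build one structured JSON log line for a completed run.
    Field names are hypothetical -- adapt to your log pipeline's schema."""
    return json.dumps({
        "run_id": str(uuid.uuid4()),
        "trace_id": trace_id,                 # correlate with other services
        "profile": profile,                   # which sandbox profile applied
        "adapters_invoked": adapters_invoked, # high-level names, no raw prompts
        "policy_violations": violations,
        "ts": int(time.time()),
    })

line = audit_record("sandbox_network_restricted",
                    ["core", "http"], [], trace_id="abc-123")
print(line)
```

Keeping prompt contents out of this record is deliberate: security teams can answer "what did this runtime touch?" from the adapter list alone.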

This allows security and SRE teams to:

  • answer “what did this runtime do?” without reading raw prompts;
  • spot misuse or drift in how workflows are being used;
  • tighten profiles over time based on real-world usage.

When to use which profile

As a rule of thumb:

  • local_minimal: tests, training data generation, examples, education.
  • sandbox_compute_and_store: offline analytics, sensitive data processing, air-gapped workflows.
  • sandbox_network_restricted: production-ish systems that need to talk to a few well-known APIs.
  • operator_full: highly curated, operator-owned deployments behind strong infra.

You can also run multiple profiles side by side:

  • one runtime for “validate and compile only”;
  • another for “restricted egress read-only workflows”;
  • another for “operator-only, high-privilege jobs”.

The more you separate concerns, the less you rely on a single, overpowered runtime process.


Getting started with sandboxed AINL runtimes

To start using sandbox profiles with AINL:

  1. Pick an appropriate profile from tooling/security_profiles.json.
  2. Configure your runner:
    • AINL_SECURITY_PROFILE=<profile> for the HTTP runner, and/or
    • AINL_MCP_PROFILE=<profile> for the MCP server.
  3. Align your container / host configuration with the profile’s expectations:
    • network rules,
    • filesystem mounts,
    • resource limits.
  4. Turn on structured audit logging and send it to your observability stack.
  5. Iterate: tighten profiles and policies as you see how workflows behave.
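Step 2 can include a pre-flight check so that a typo'd profile name fails fast at startup instead of silently falling back. A sketch, where the set of known names mirrors this post (not necessarily the full contents of security_profiles.json):

```python
import os

# Profile names as described in this post; the canonical list lives
# in tooling/security_profiles.json.
KNOWN_PROFILES = {
    "local_minimal",
    "sandbox_compute_and_store",
    "sandbox_network_restricted",
    "operator_full",
}

def resolve_profile(default="local_minimal"):
    """Read AINL_SECURITY_PROFILE and fail fast on unknown names."""
    name = os.environ.get("AINL_SECURITY_PROFILE", default)
    if name not in KNOWN_PROFILES:
        raise ValueError(f"unknown security profile: {name!r}")
    return name

os.environ["AINL_SECURITY_PROFILE"] = "sandbox_compute_and_store"
print(resolve_profile())   # → sandbox_compute_and_store
```

Defaulting to the most restrictive profile means a missing environment variable degrades toward safety, not toward privilege.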

Sandboxed runtimes won’t make sloppy code safe by themselves — but with AINL’s profiles and capability grants, they give you a clear, enforceable contract between AI workflows and the infrastructure they run on.

AI Native Lang Team

The team behind AI Native Lang — building deterministic AI workflow infrastructure.
