Competitive & comparisons
One line: If you are weighing AINL against LangGraph, Temporal, CrewAI, or prompt-loop orchestration, start here — we separate reproducible benchmark methodology from hype, keep .ainl as the authoring source of truth, and show how multi-target emit (including hybrid wrappers) fits into token and runtime economics.
Current PyPI release: ainativelang v1.3.3 (guides below reference features introduced in v1.2.5+).
Comparative framing and AINL vs X materials, grounded in shipped compiler/runtime/emitters and reproducible benchmarks.
Guides
- From LangGraph to AINL in 15 minutes
- AINL + Temporal: best of both worlds
- OpenClaw production savings (worksheet) — fill with anonymized numbers only.
- Versus LangGraph / Temporal: benchmark methodology
- Comparison tables — benchmark-backed cells from committed tooling/benchmark_*.json and BENCHMARK.md, with TBD rows where data is not in-repo yet.
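The "TBD rows" policy above can be sketched in code. This is a hypothetical illustration, not the repo's actual tooling: the JSON schema (a "results" list with "framework", "metric", and "value" keys) is an assumption, but it shows the intended behavior — cells come only from committed benchmark files, and anything not measured renders as TBD rather than being invented.

```python
# Hypothetical sketch: build benchmark-backed comparison cells from committed
# tooling/benchmark_*.json files. The file glob matches the docs; the JSON
# keys below are assumptions about the schema, for illustration only.
import glob
import json


def load_benchmark_cells(pattern="tooling/benchmark_*.json"):
    """Collect (framework, metric) -> value from all committed benchmark files."""
    cells = {}
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            data = json.load(f)
        for row in data.get("results", []):
            cells[(row["framework"], row["metric"])] = row["value"]
    return cells


def render_row(framework, metrics, cells):
    """Render one markdown table row; missing measurements stay visibly 'TBD'."""
    values = [str(cells.get((framework, m), "TBD")) for m in metrics]
    return "| " + framework + " | " + " | ".join(values) + " |"
```

The point of the TBD fallback is that comparison tables never show a number that is not backed by an in-repo benchmark artifact.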
Maintainer note
The folder's README.md is the same index, kept for GitHub navigation; this OVERVIEW.md is synced to ainativelang.com/docs (the sync script skips README.md).
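The sync behavior described above — copy everything except README.md so the folder index stays GitHub-only — can be sketched as follows. This is a minimal hypothetical stand-in, not the project's actual sync script; the destination path and the single-file skip set are assumptions taken from the note.

```python
# Hypothetical sketch of the docs sync step described in the maintainer note:
# copy doc files to the site docs directory, skipping README.md so the folder
# index remains GitHub-only. Paths and the skip set are assumptions.
import shutil
from pathlib import Path


def sync_docs(src: Path, dst: Path, skip=frozenset({"README.md"})):
    """Copy files from src to dst, skipping names in `skip`; return copied names."""
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for path in src.iterdir():
        if path.is_file() and path.name not in skip:
            shutil.copy2(path, dst / path.name)
            copied.append(path.name)
    return sorted(copied)
```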
