How I Built a Production X/Twitter Bot in 100 Lines of AINL (and Saved 5–90× on Costs)
The headline is about surface area: one strict, auditable AINL graph (plus small shared modules/includes) versus hundreds of lines of imperative glue. The shipped ainl-x-promoter.ainl stays compact next to a typical Tweepy + LangChain/LangGraph loop—especially once you count retries, dedupe, cursor safety, and policy as real code.
Intro
I wanted a smart X (Twitter) promoter that searches, classifies, replies intelligently, respects rate limits, and never duplicates posts—without burning thousands of LLM tokens or writing fragile Python loops. AINL plus apollo-x-bot turned out to be the right split: compile-once, run-many orchestration in the graph; OAuth, HTTP, SQLite, and LLM I/O in a small gateway_server.py bridge.
New to the stack? Skim What is AINL?, Why AINL, and Easy install — OpenClaw or ZeroClaw before you wire cron.
What is AINL?
AINL (AI Native Lang) is a compact, graph-canonical DSL that turns AI from open-ended chat into a deterministic, structured worker. You (or an LLM) describe a workflow once; the compiler validates it into a clean graph IR; the runtime executes it deterministically—without re-invoking a model to “re-plan” every poll.
Highlights:
- Strict mode — catches many errors at compile time
- Bridge / adapter system — keeps orchestration separate from I/O, secrets, LLMs, DBs
- Modular includes — reusable patterns across programs
- Visualization — e.g. Mermaid from ainl visualize
- Multi-target emission — for bots you mostly run the graph as-is
The project is open-core Apache 2.0 at github.com/sbhooley/ainativelang. It ships CLI (ainl run, ainl-validate, ainl visualize), an MCP server for agents, and skills for OpenClaw and ZeroClaw.
Deeper dives: Your first AINL workflow · Install & run AINL · AINL + Cursor / Claude Code (MCP)
The apollo-x-bot in action
apollo-x-bot is a production-style X/Twitter awareness + promotion bot implemented as a single strict AINL graph: apollo-x-bot/ainl-x-promoter.ainl.
It covers:
- Incremental search — x.search with a since_id cursor in SQLite
- LLM classification — plus fast heuristic / keyword paths when you want cheaper runs
- Gating — score floor, user cooldowns, daily reply cap, dedupe by tweet id
- Smart replies — promoter.process_tweet (draft + post) with dry-run support
- Optional daily original post — promoter.maybe_daily_post
- Audit trail — record_decision into the memory adapter; crash-safe cursor commit after the loop
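The gating step described above is plain policy code, no LLM involved. A minimal sketch, with invented names and thresholds (the real checks live in the AINL graph plus gateway state):

```python
# Hypothetical sketch of the gating policy: score floor, dedupe,
# daily cap, and per-user cooldown. Field and state names are invented.
import time

def should_reply(tweet, state, *, min_score=5, max_per_day=5,
                 cooldown_s=24 * 3600):
    """Return True only if every gate passes."""
    if tweet["score"] < min_score:                  # score floor
        return False
    if tweet["id"] in state["replied_ids"]:         # dedupe by tweet id
        return False
    if state["replies_today"] >= max_per_day:       # daily reply cap
        return False
    last = state["last_reply_by_user"].get(tweet["author_id"], 0)
    if time.time() - last < cooldown_s:             # per-user cooldown
        return False
    return True
```

Each gate is cheap and runs before any LLM call, which is part of why the token bill stays low.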
Heavy lifting (X OAuth 1.0a, LLM HTTP, SQLite) lives in the gateway. The AINL program declares dataflow, loops, retries, and policy—readable and reviewable. See the apollo-x-bot README and executor bridge docs for how R bridge.POST maps to HTTP.
X (Twitter) developer app and API keys
The gateway reads five X-related secrets plus your LLM key. All of them come from the X Developer Portal (except the LLM key). Portal labels change over time; this is the mapping apollo-x-bot expects—also documented in the environment table in the repo README.
1. Open the developer portal and create an app
- Sign in at the X Developer Portal (older bookmarks may use developer.twitter.com—both should reach the same dashboard).
- If prompted, apply for developer access and accept the developer terms. Access levels and pricing for search and posting change over time—confirm your account can use v2 Recent search and tweet creation under your current plan; see X API documentation for the latest.
- Under Projects & Apps, create a Project, then an App inside it. You can reuse one app for both read (search) and write (replies) if permissions allow.
2. App permissions (read + post)
apollo-x-bot searches recent tweets (Bearer / v2) and posts replies (OAuth 1.0a in user context). For live replies—not dry-run—you typically need:
- App permission level Read and write (or the current portal equivalent that allows creating tweets / replies).
- OAuth 1.0a enabled for the app where the portal asks for user authentication settings.
If you raise permissions from “Read” to “Read and write”, regenerate the user Access Token and Secret afterward—old tokens often keep the previous scope until replaced.
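For context on why all four OAuth 1.0a values matter: posting requires signing each request with both the consumer pair and the user token pair. A stdlib-only sketch of RFC 5849 HMAC-SHA1 signing, of the kind the gateway performs (illustrative, not apollo-x-bot's actual code):

```python
# Sketch of OAuth 1.0a HMAC-SHA1 signing per RFC 5849. Parameter names
# follow the spec; the function itself is illustrative.
import base64, hashlib, hmac, secrets, time
from urllib.parse import quote

def _enc(s):
    return quote(str(s), safe="~")  # RFC 3986 percent-encoding

def oauth1_header(method, url, consumer_key, consumer_secret,
                  token, token_secret, extra_params=None):
    params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": secrets.token_hex(16),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_token": token,
        "oauth_version": "1.0",
    }
    params.update(extra_params or {})
    # Signature base string: METHOD & encoded URL & sorted encoded params
    param_str = "&".join(f"{_enc(k)}={_enc(v)}" for k, v in sorted(params.items()))
    base = "&".join([method.upper(), _enc(url), _enc(param_str)])
    # Signing key: consumer secret and token secret, joined by "&"
    key = f"{_enc(consumer_secret)}&{_enc(token_secret)}".encode()
    sig = base64.b64encode(hmac.new(key, base.encode(), hashlib.sha1).digest())
    params["oauth_signature"] = sig.decode()
    header = ", ".join(f'{_enc(k)}="{_enc(v)}"'
                       for k, v in sorted(params.items())
                       if k.startswith("oauth_"))
    return "OAuth " + header
```

Note that the signing key concatenates the consumer secret and the token secret—drop either pair and writes fail, which is exactly the 403 the table below warns about.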
3. Copy credentials into env vars
In the portal, open your app → Keys and tokens (or equivalent). Copy values into apollo-x-bot/.env or your apollo-x-promoter.env file—never commit these to git.
| What you see in the portal | Set this env var | Role in apollo-x-bot |
|----------------------------|------------------|----------------------|
| Bearer Token | X_BEARER_TOKEN | v2 recent search (app-only GET requests); required for live search. |
| API Key (Consumer Key) | X_API_KEY | OAuth 1.0a consumer key (alias: X_CONSUMER_KEY). |
| API Key Secret (Consumer Secret) | X_API_SECRET | OAuth 1.0a consumer secret (alias: X_CONSUMER_SECRET). |
| Access Token | X_ACCESS_TOKEN | OAuth 1.0a user access token for the account that will post. |
| Access Token Secret | X_ACCESS_TOKEN_SECRET | Matches the access token; required with the above for posting. Bearer alone is usually not enough to create tweets—X returns 403 if you skip user OAuth for writes. |
The repo README spells out the same naming: “API Key under Consumer Keys → X_API_KEY; API Key Secret → X_API_SECRET.”
4. LLM key (not from X)
- Set OPENAI_API_KEY (or LLM_API_KEY) for classification and reply drafting. Optional: OPENAI_BASE_URL, LLM_MODEL (defaults are OpenAI-compatible—see README).
5. Quick verification checklist
- [ ] Bearer present → search path can call X v2 recent search (subject to your plan limits).
- [ ] Consumer key + secret + access token + secret present → gateway can sign OAuth 1.0a requests to post replies.
- [ ] Tokens created under Read and write if you turned off PROMOTER_DRY_RUN for real posts.
- [ ] PROMOTER_DRY_RUN=1 for first runs—no live X writes; a safe way to validate the graph and gateway.
If something still fails, enable PROMOTER_GATEWAY_DEBUG=1 and check the gateway’s startup: lines for bearer_set and env loading (see README Troubleshooting).
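To automate that checklist, a small preflight script can confirm the variables are set before the first run. The alias fallbacks mirror the README naming; the script itself is illustrative, not part of apollo-x-bot:

```python
# Preflight check for the env vars in the table above. Primary names
# come first; aliases (X_CONSUMER_KEY etc.) are accepted as fallbacks.
import os

REQUIRED = {
    "X_BEARER_TOKEN": [],
    "X_API_KEY": ["X_CONSUMER_KEY"],
    "X_API_SECRET": ["X_CONSUMER_SECRET"],
    "X_ACCESS_TOKEN": [],
    "X_ACCESS_TOKEN_SECRET": [],
    "OPENAI_API_KEY": ["LLM_API_KEY"],
}

def missing_vars(env=os.environ):
    """Return names whose primary and alias variables are all unset."""
    return [name for name, aliases in REQUIRED.items()
            if not any(env.get(v) for v in [name, *aliases])]

if __name__ == "__main__":
    gaps = missing_vars()
    print("env OK" if not gaps else f"missing: {', '.join(gaps)}")
```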
Setup in five minutes (step-by-step)
These patterns match the repo’s apollo-x-bot/README.md, OPENCLAW_DEPLOY.md, and root README.
1. Core AINL install (one-time)
git clone https://github.com/sbhooley/ainativelang.git
cd ainativelang
PYTHON_BIN=python3.10 VENV_DIR=.venv-py310 bash scripts/bootstrap.sh
source .venv-py310/bin/activate
pip install -e ".[dev,web,mcp]"
2. Standalone quick test (gateway + one poll)
Use the X and LLM keys from X developer app and API keys above.
From the repo root:
cd apollo-x-bot
# Create .env (or export the same vars in your shell)
cat > .env <<'EOF'
X_BEARER_TOKEN=...
X_API_KEY=...
X_API_SECRET=...
X_ACCESS_TOKEN=...
X_ACCESS_TOKEN_SECRET=...
OPENAI_API_KEY=...
PROMOTER_STATE_PATH=./data/promoter_state.sqlite
AINL_MEMORY_DB=./data/promoter_memory.sqlite
PROMOTER_DRY_RUN=1
PROMOTER_MAX_REPLIES_PER_DAY=5
PROMOTER_CLASSIFY_MIN_SCORE=5
EOF
Easiest path: run-with-gateway.sh starts the gateway, waits, then runs ainl run with the full --bridge-endpoint map and --http-timeout-s 120 (important for batched llm.classify):
cd .. # back to repo root
PROMOTER_DRY_RUN=1 bash apollo-x-bot/run-with-gateway.sh
To run the gateway yourself in one terminal and poll in another, see gateway_server.py (default http://127.0.0.1:17301) and mirror the --enable-adapter / --bridge-endpoint flags from openclaw-poll.sh—or invoke that script with PROMOTER_GATEWAY_URL pointing at your gateway.
Visualize the graph:
python3 -m cli.main visualize apollo-x-bot/ainl-x-promoter.ainl --output apollo-x-bot/promoter.mmd
3. OpenClaw (recommended for production scheduling)
OpenClaw gives you cron, workspace memory, channels, and the rest of the “full agent” stack. AINL is added via the OpenClaw install guide or ainl install-mcp --host openclaw — see also OpenClaw integration (docs).
Production shape (from OPENCLAW_DEPLOY.md):
- Supervise gateway_server.py (systemd, Docker, launchd) with Restart=always.
- Put secrets in ~/.openclaw/apollo-x-promoter.env (or set APOLLO_PROMOTER_ENV).
- Schedule polls with OpenClaw cron using apollo-x-bot/openclaw-poll.sh (gateway must already be up).
Example cron add (adjust --session-key and paths to match your install):
export AINL_WORKSPACE=/path/to/ainativelang
export APOLLO_PROMOTER_ENV="$HOME/.openclaw/apollo-x-promoter.env"
openclaw cron add \
--name apollo-x-promoter-poll \
--cron "*/45 * * * *" \
--session-key "agent:default:ainl-advocate" \
--message 'cd $AINL_WORKSPACE && APOLLO_PROMOTER_ENV='"$HOME"'/.openclaw/apollo-x-promoter.env bash apollo-x-bot/openclaw-poll.sh'
Align */45 * * * * with the graph’s S core cron in ainl-x-promoter.ainl, or change both together.
4. ZeroClaw (lightweight alternative)
ZeroClaw is a Rust-native host with a small footprint—a strong fit for always-on bots on a cheap VPS or Pi. Install the skill with zeroclaw skills install … or ainl install-mcp --host zeroclaw — ZeroClaw setup guide · ZeroClaw integration (docs) · MCP host integrations.
Use the same openclaw-poll.sh pattern (or your host’s JSON CLI wrapper) once PYTHON, PROMOTER_GATEWAY_URL, and env files point at the toolchain and gateway.
5. Hermes-style agents (Nous Research ecosystem)
Hermes-class agents (self-improving workflows, skill libraries) don’t ship a first-party AINL skill yet, but integration is straightforward:
- Shell out to the same python -m cli.main run … + supervised gateway, or
- Expose AINL via MCP (ainl_validate, ainl_compile, ainl_run) for tool-calling agents.
Pair agent memory with AINL’s memory adapter and record_decision audit rows for traceability. See Nous Research for Hermes-related projects and announcements.
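The shell-out option can be as small as one subprocess wrapper. The program path and extra arguments below are placeholders—check python -m cli.main run --help in your checkout for the real interface:

```python
# Illustrative wrapper an agent could use to run an AINL program out of
# process. Assumes the AINL checkout is the working directory.
import subprocess
import sys

def run_ainl_graph(workspace, program, extra_args=()):
    """Run an AINL program in a subprocess; return (ok, combined output)."""
    cmd = [sys.executable, "-m", "cli.main", "run", program, *extra_args]
    proc = subprocess.run(cmd, cwd=workspace, capture_output=True,
                          text=True, timeout=600)
    return proc.returncode == 0, proc.stdout + proc.stderr
```

An agent can branch on the boolean and feed the output back into its own context, while the graph itself stays deterministic.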
How to use and customize the bot
- Tune with env — PROMOTER_CLASSIFY_MIN_SCORE, caps, cooldowns, dry-run (no .ainl edits for many policy tweaks).
- Inspect decisions — SQLite / memory rows with kind promoter.decision (see README).
- Go live — set PROMOTER_DRY_RUN=0 only after you trust classify + gate behavior.
- Extend — add include modules (e.g. a shared retry.ainl) or gateway routes; keep secrets out of the graph.
- Heuristic fallback — reduce or eliminate LLM spend on easy passes.
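The heuristic fallback is worth a sketch: a crude keyword scorer can settle obvious accepts and rejects and reserve the LLM for the uncertain middle band. Keywords, weights, and the band are invented here; the shipped scorer may differ:

```python
# Hypothetical keyword heuristic: cheap scoring first, LLM only when
# the heuristic is uncertain.
def heuristic_score(text, keywords=("ainl", "workflow", "agent", "dsl")):
    """Crude relevance score: 3 points per keyword hit, capped at 10."""
    hits = sum(1 for kw in keywords if kw in text.lower())
    return min(hits * 3, 10)

def needs_llm(text, uncertain_band=(3, 7)):
    """Only pay for the LLM when the heuristic lands in the middle band."""
    lo, hi = uncertain_band
    return lo <= heuristic_score(text) <= hi
```

Tweets scoring 0 are dropped and high scorers pass straight to gating, so only the ambiguous middle costs tokens.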
Production deployments (tested patterns)
| Host | Role |
|------|------|
| OpenClaw | Cron + workspace; full agent UX |
| ZeroClaw | Lean always-on scheduling + MCP |
| Hermes-style agents | Tool/MCP or shell integration |
Real-world savings
Typical imperative stack (Tweepy + LangChain/LangGraph + cron):
- LLM re-plans or re-explains work every poll → token burn
- Manual state, dedupe, retries, cursor rules → brittle code
- Fat Python process per run → memory and ops overhead
AINL + apollo-x-bot
- Compile-once, run-many — the model classifies and drafts; the graph runs the loop. The repo's Benchmarks page reports roughly 2–5× savings on recurring-token context and 3–5× vs LLM-generated Python/TS on representative lanes (see the methodology and caveats on that page).
- Incremental search + deferred cursor commit + dedupe → fewer X reads and fewer wasted classifications.
- Heuristic paths → zero LLM cost when you want them.
- ZeroClaw deployments → much smaller idle RAM/CPU than a full Node + Python agent stack for the host side.
- Maintenance — one graph + Mermaid vs pages of imperative control flow.
- Safety — cursor promotion after the tweet loop completes (see README).
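The deferred-cursor point above can be sketched in a few lines: process the whole batch first, then promote since_id, so a crash mid-batch only replays tweets that dedupe will absorb. The SQLite schema here is invented, not apollo-x-bot's actual one:

```python
# Sketch of the crash-safe since_id pattern: handle every tweet in the
# batch, then commit the cursor once, after the loop completes.
import sqlite3

def poll_once(conn, fetch, handle):
    """fetch(since_id) -> tweets, newest first; handle(tweet) -> None."""
    row = conn.execute("SELECT v FROM cursor WHERE k='since_id'").fetchone()
    since_id = row[0] if row else None
    tweets = fetch(since_id)
    for tweet in tweets:
        handle(tweet)                  # reply, record_decision, dedupe...
    if tweets:                         # promote cursor only after the loop
        conn.execute("INSERT OR REPLACE INTO cursor VALUES ('since_id', ?)",
                     (tweets[0]["id"],))
        conn.commit()
```

If the process dies inside the loop, the old since_id survives and the next poll re-fetches the same window; the dedupe gate makes the replay harmless.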
Official numbers and honesty: always read Benchmarks and BENCHMARK.md before quoting multipliers in production decks.
Conclusion and next steps
- Clone ainativelang.
- Create an X app and copy Bearer + OAuth 1.0a keys into .env (see X developer app and API keys).
- Run PROMOTER_DRY_RUN=1 bash apollo-x-bot/run-with-gateway.sh until the graph behaves.
- Supervise the gateway, then wire OpenClaw or ZeroClaw cron with openclaw-poll.sh.
- Star the repo if this saved you a weekend of LangChain duct tape.
Result: a production social-media bot that costs pennies per day instead of dollars, runs reliably on a Raspberry Pi, and is auditable by non-coders.
Related on this site: Benchmarks · Install chooser · Runtime · Product
