One rule:

`@` = AI executes. No `@` = your code runs.

```
result = @ask("summarize: {text}")            # AI executes
words = len(text.split())                     # deterministic Python
flag = @judge("is this complete: {result}")   # AI returns bool
```
When you want an AI to execute a product's logic, you have two bad options:
- Natural language — flexible, but takes paragraphs to describe, and AI fills gaps however it wants
- Pseudocode — tighter, but can't express the full execution logic: no retries, no branching on AI judgment, no parallel calls, no fallbacks
The result: you spend more time writing prompts than building, and the AI still misses the intent.
AIL is a new language designed for this exact problem. One rule, one symbol:
- `@` → AI executes (semantic, non-deterministic)
- no `@` → code runs (deterministic, Python-compatible)
Write your logic the way you think it. @ask, @judge, @validate — the full execution intent in a fraction of the words. AIL comes with a guide written for AI, so you can paste it into any model and it immediately understands how to read and generate AIL.
kzl is the Python framework that runs AIL. Plug in your own agent, decorate your tools, and your agent executes the workflow. Pure Python, no new concepts — if you know Python, you're ready in minutes.
Everything else is standard Python syntax — loops, functions, types, error handling. No new paradigm to learn.
Retry until quality passes:

```
retry max=3:
    report = @ask(generate_report)
    @validate(report, "must include conclusion, data, and sources")
```
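In plain Python, the retry block above behaves roughly like the loop below. This is an illustrative sketch of the semantics, not kzl's implementation; `body`, `ValidationError`, and the counters are hypothetical stand-ins for the AI calls.

```python
# Sketch of `retry max=3` semantics: re-run the body until validation
# passes or the attempt budget is exhausted, then re-raise the last error.

class ValidationError(Exception):
    pass

def retry(body, max_attempts=3):
    last_error = None
    for _ in range(max_attempts):
        try:
            return body()
        except ValidationError as e:
            last_error = e  # validation failed; try again
    raise last_error

# Demo: a stub that "fails" validation twice, then succeeds.
attempts = []

def body():
    attempts.append(1)
    report = f"draft {len(attempts)}"   # stands in for @ask(generate_report)
    if len(attempts) < 3:               # stands in for @validate(...)
        raise ValidationError("missing conclusion")
    return report

result = retry(body, max_attempts=3)    # → "draft 3" on the third attempt
```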
Loop until the AI says done:

```
loop max=10 until @judge("no unresolved issues in: {output}"):
    output = @ask(solve_issues)
```
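The `loop ... until` construct corresponds to a bounded loop gated by a boolean judgment. A plain-Python sketch, where `solve` and `no_unresolved` are hypothetical stand-ins for `@ask` and `@judge`:

```python
# Sketch of `loop max=10 until @judge(...)`: repeat the body until the
# judgment returns True or the iteration cap is reached.

def loop_until(body, done, max_iters=10):
    output = None
    for _ in range(max_iters):
        output = body(output)
        if done(output):        # @judge returns a bool
            break
    return output

issues = 3

def solve(prev):
    global issues
    issues -= 1                 # each pass resolves one issue
    return f"{issues} issues left"

def no_unresolved(output):
    return issues == 0

final = loop_until(solve, no_unresolved, max_iters=10)  # → "0 issues left"
```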
Parallel AI calls:

```
tech, biz, ux = parallel:
    tech = @ask("analyze technically: {content}")
    biz = @ask("analyze commercially: {content}")
    ux = @ask("analyze from user perspective: {content}")

summary = @ask("synthesize three perspectives: {tech} {biz} {ux}")
```
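The `parallel:` block is fan-out/join: each `@ask` runs independently and the block waits for all three results. A sketch using a standard thread pool, with `analyze_*` as hypothetical stand-ins for the AI calls:

```python
# Sketch of `parallel:` semantics: submit independent calls, join results.
from concurrent.futures import ThreadPoolExecutor

content = "quarterly report"

def analyze_tech(c): return f"tech view of {c}"
def analyze_biz(c):  return f"biz view of {c}"
def analyze_ux(c):   return f"ux view of {c}"

with ThreadPoolExecutor() as pool:
    f_tech = pool.submit(analyze_tech, content)
    f_biz = pool.submit(analyze_biz, content)
    f_ux = pool.submit(analyze_ux, content)
    tech, biz, ux = f_tech.result(), f_biz.result(), f_ux.result()

# Stands in for the synthesizing @ask that sees all three results:
summary = f"{tech} | {biz} | {ux}"
```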
Structured extraction:

```
type QueryInfo:
    intent: str
    keywords: list[str]
    multi_step: bool

info = @extract(analysis, type=QueryInfo)
```
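The `type` block maps naturally onto a Python dataclass, and `@extract` returns a value of that shape. The field values below are hypothetical stand-ins for what the AI would pull out of `analysis`:

```python
# Sketch of the value @extract(analysis, type=QueryInfo) produces:
# a typed object matching the declared AIL `type` block.
from dataclasses import dataclass

@dataclass
class QueryInfo:
    intent: str
    keywords: list[str]
    multi_step: bool

# Stand-in for AI extraction output:
info = QueryInfo(intent="compare", keywords=["RAG", "fine-tuning"], multi_step=True)
```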
Multi-turn conversation:

```
with context(system="you are a planning expert") as ctx:
    issues = @ask(analyze)   # AI sees prior turns
    output = @ask(solve)     # AI sees issues
    ctx.reset()              # clear history
```
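The context block can be modeled as a conversation object that accumulates turns, so each later call sees the earlier ones. An illustrative plain-Python model (not kzl's implementation; the `Context` class and canned replies are hypothetical):

```python
# Sketch of `with context(...)`: history accumulates across asks,
# and reset() clears everything except the system message.
from contextlib import contextmanager

class Context:
    def __init__(self, system):
        self.history = [("system", system)]

    def ask(self, prompt):
        self.history.append(("user", prompt))
        reply = f"reply to: {prompt}"   # stand-in for the AI response
        self.history.append(("assistant", reply))
        return reply

    def reset(self):
        del self.history[1:]            # keep only the system message

@contextmanager
def context(system):
    yield Context(system)

with context(system="you are a planning expert") as ctx:
    issues = ctx.ask("analyze")     # sees the system message
    output = ctx.ask("solve")       # also sees the two "analyze" turns
    turns = len(ctx.history)        # 1 system + 2 user + 2 assistant = 5
    ctx.reset()
    remaining = len(ctx.history)    # back to just the system message
```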
| Operation | Returns | What it does |
|---|---|---|
| `@ask(prompt)` | `str` | execute a task |
| `@judge("condition")` | `bool` | yes/no judgment |
| `@pick("instruction", options=[...])` | option type | select from options |
| `@plan("goal")` | `list[str]` | decompose goal into steps |
| `@extract(text, type=T)` | `T` | extract structured data |
| `@eval(content, "criterion")` | `float` | score 0–1 |
| `@validate(content, "condition")` | — / raises | assert or retry |
| `@act("instruction")` | `str` | AI autonomously picks and calls a tool |
| `@ask_user("prompt")` | `str` | ask the human, block for input |
| `@confirm("description")` | — / raises | request human confirmation |
| `@show("message")` | — | display to human, non-blocking |
```
try:
    retry max=3:
        timeout 20s:
            result = @ask(generate)
            @validate(result, "must be complete")
fallback:
    result = @ask(basic_fallback)
```

`retry`, `timeout`, and `try`/`fallback` compose freely.
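The composition above can be sketched in plain Python as nested wrappers: a deadline-checked call, retried up to a budget, with a fallback when every attempt fails. This is a simplified model (the timeout is a post-hoc deadline check, not a real interrupt), and `generate`/`basic_fallback` are hypothetical stand-ins:

```python
# Sketch of retry + timeout + try/fallback composing as nested wrappers.
import time

class Timeout(Exception):
    pass

def with_timeout(fn, seconds):
    start = time.monotonic()
    result = fn()
    if time.monotonic() - start > seconds:   # simplified: check after the fact
        raise Timeout()
    return result

def try_fallback(primary, fallback, retries=3):
    for _ in range(retries):
        try:
            return primary()
        except Exception:
            continue                         # retry the primary path
    return fallback()                        # all retries failed

calls = []

def generate():
    calls.append("primary")
    raise Timeout("too slow")                # primary path always fails here

def basic_fallback():
    return "fallback answer"

result = try_fallback(lambda: with_timeout(generate, 20), basic_fallback)
```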
```
use tool vector_search(query: str, top_k: int) -> list[Document]   # deterministic function
use skill rag_agent(query: str) -> (str, list[str])                # sub-agent with AI ops
use plugin database as db                                          # stateful external service

docs = vector_search("deep learning", top_k=10)   # called like a function
answer, citations = rag_agent(query=user_input)
users = db.query("SELECT * FROM users")
```

```
memory.save(result, key="last_answer", tags=["history"])
pref = memory.get("user_preference")
related = memory.search("RAG discussion", top_k=3)
```
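The memory calls above can be modeled as a small key-value store with naive search. This sketch is purely illustrative of the call shapes shown in the example, not kzl's actual memory backend; the substring "relevance" heuristic is an assumption:

```python
# Illustrative model of the memory API: dict-backed store, naive search.

class Memory:
    def __init__(self):
        self._items = {}    # key -> (value, tags)

    def save(self, value, key, tags=()):
        self._items[key] = (value, list(tags))

    def get(self, key):
        value, _tags = self._items[key]
        return value

    def search(self, query, top_k=3):
        # naive relevance: keep items whose value mentions any query word
        words = query.lower().split()
        hits = [v for v, _ in self._items.values()
                if any(w in str(v).lower() for w in words)]
        return hits[:top_k]

memory = Memory()
memory.save("use hybrid retrieval", key="last_answer", tags=["history"])
memory.save("dark mode", key="user_preference")

pref = memory.get("user_preference")                  # → "dark mode"
related = memory.search("hybrid RAG discussion", top_k=3)
```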
A complete RAG agent lives in `examples/ail/rag_agent.ail`; more examples are in `examples/`.
| Document | Path |
|---|---|
| Language spec (for AI) | `docs/ail/for-AI-v1.0.md` |
| User guide (English) | `docs/ail/for-humans-en-v1.0.md` |
| User guide (Chinese) | `docs/ail/for-humans-v1.0.md` |
| Python SDK spec | `docs/kzl/product-spec-v1.0.md` |
| Python SDK | `kzl/` |
| AIL examples | `examples/ail/` |
| kzl examples | `examples/kzl/` |
MIT