Zero-dependency programming

Replace imports with specifications.

A local LLM generates self-contained Python from YAML specs. Verified against examples. Cached forever. Zero imports.

$ pip install conjure-llm

How it works
Spec in, verified code out
Write a YAML spec with a function signature and examples. Conjure generates, verifies, and caches the implementation.
levenshtein.yaml spec
name: levenshtein
version: "1.0"
function_name: levenshtein
params:
  s1: str
  s2: str
return_type: int
examples:
  - input: {s1: "kitten", s2: "sitting"}
    output: 3
app.py usage
import conjure

# First call: generate, verify, cache
result = conjure.invoke(
    "levenshtein",
    s1="kitten",
    s2="sitting"
)
# Returns: 3

# Next call: 0.3ms cache hit
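For a sense of what a verified implementation might look like, here is a builtins-only Levenshtein distance that satisfies the spec's example. This is an illustrative sketch, not the actual output Conjure produces; any generated code would vary by run.

```python
def levenshtein(s1: str, s2: str) -> int:
    # Classic dynamic-programming edit distance, one row at a time.
    # Uses only builtins: no imports, no eval, no exec.
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        curr = [i]
        for j, c2 in enumerate(s2, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (c1 != c2),  # substitution
            ))
        prev = curr
    return prev[-1]

# Matches the spec's example:
# levenshtein(s1="kitten", s2="sitting") -> 3
```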

Why
No dependencies, no trust chain

Zero imports

Generated code uses only Python builtins. No import statements, no eval, no exec. Enforced by AST analysis before caching.
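A minimal sketch of this kind of AST gate, using the standard-library `ast` module. The function name and the exact set of banned calls are assumptions, not Conjure's internals.

```python
import ast

# Dynamic-execution builtins the doc says are disallowed in generated code.
BANNED_CALLS = {"eval", "exec", "__import__"}

def is_self_contained(source: str) -> bool:
    """Return True only if source has no imports and no dynamic execution."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        # Reject both `import x` and `from x import y`.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        # Reject direct calls to eval/exec/__import__.
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name) and func.id in BANNED_CALLS:
                return False
    return True
```

Only code that passes a check like this would be eligible for the cache.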

Verified

Every example in the spec must pass in a sandbox. If one fails, the error feedback goes back to the model; after three failed attempts the implementation is rejected.
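The verify-and-retry loop could be sketched like this. Here `generate` stands in for the model call and the function names are assumptions; `exec` below stands in for the sandboxed runner, while the *generated* code itself contains no exec.

```python
def verify(source: str, fn_name: str, examples: list) -> "str | None":
    """Run every spec example; return None on success, else error feedback."""
    ns = {"__builtins__": {}}  # bare namespace stands in for the sandbox
    exec(source, ns)
    fn = ns[fn_name]
    for ex in examples:
        got = fn(**ex["input"])
        if got != ex["output"]:
            return f"{ex['input']}: expected {ex['output']}, got {got}"
    return None

def generate_verified(generate, fn_name, examples, max_attempts=3):
    """Feed verification errors back to the model, up to three attempts."""
    feedback = None
    for _ in range(max_attempts):
        source = generate(feedback)
        feedback = verify(source, fn_name, examples)
        if feedback is None:
            return source
    raise RuntimeError(f"rejected after {max_attempts} failed attempts")
```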

Cached

Content-addressed via SHA-256. Same spec, same result. Sub-millisecond after the first call. 170,000x faster than generation.
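A content-addressed key can be sketched with the standard library. Canonicalizing the spec as sorted-key JSON before hashing is an assumption about how Conjure normalizes specs; the guarantee shown is the one the doc states: same spec, same key.

```python
import hashlib
import json

def cache_key(spec: dict) -> str:
    """SHA-256 over a canonical serialization, so key order never matters."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

A cache hit is then a single dictionary or file lookup by this 64-character hex key, which is why it costs fractions of a millisecond.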


ConjureEval-100
Benchmarked across 20 categories

pass@1:     70%
pass@3:     88%
model size: 5 GB
cache hit:  0.3 ms
Application         Dependencies    With Conjure    LOC reduction
Flask blog          13 transitive   0               15x
FastAPI service     15 transitive   0               17x
CLI data tool        5 transitive   0                6x
Web scraper         17 transitive   0               20x
File sync utility    8 transitive   0                9x

Average: 13x LOC reduction

Runtime
Runs entirely on your machine
No API keys. No cloud. No network requests. The model runs on-device via MLX on Apple Silicon.
Model:       Qwen3.5-9B (OptiQ 4-bit quantization)
Memory:      5 GB (mixed-precision weights)
Cache speed: 0.3 ms (170,000x vs generation)