How It Works
A .md file goes in. Working automation comes out. Here's what happens in between.
Anatomy of a .md file
There are three parts. Only the shebang and the prompt are required.
#!/usr/bin/env think ← shebang
---
model: claude-haiku-4-5-20251001 ← frontmatter (optional)
max_tokens: 4096
---
Your natural language prompt ← prompt
goes here. As long or short
as you need.
Shebang: #!/usr/bin/env think tells your shell to run the file through the think interpreter.
Frontmatter: optional YAML between --- fences. Set the model, max tokens, and other config here.
Prompt: everything after the frontmatter. Plain English instructions for what you want done.
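Putting it together, a complete script can be this small. The file name and prompt are illustrative (call it weather.md, the same name used in later examples); the frontmatter values are the ones from the diagram above.
$ cat weather.md
#!/usr/bin/env think
---
model: claude-haiku-4-5-20251001
max_tokens: 4096
---
Look up the current weather for the requested city and
print a one-line summary to stdout.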
The agent loop
When you run a .md file, the interpreter kicks off a loop. The LLM reads your prompt, decides which tools to call, and keeps going until the job is done.
1. The script is split into frontmatter + prompt.
2. Config is resolved: env vars → frontmatter → config file → defaults.
3. The prompt is sent to the LLM along with the tool descriptions.
4. The LLM calls tools, you approve, and the results loop back.
5. When the LLM stops calling tools, the script exits.
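To kick that loop off, make the script executable and run it directly, or hand it to the interpreter yourself. weather.md here is the illustrative script from above.
$ chmod +x weather.md
$ ./weather.md        ← the shebang hands the file to think
$ think weather.md    ← same thing, spelled out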
The tools
The LLM has five tools. That's it. Small, focused, composable.
write_stdout: Write text to stdout. The only way to produce output. No approval.
run_command: Execute a shell command. Requires approval.
read_stdin: Read piped input data. No approval.
read_env: Read an environment variable. Requires approval.
set_argument: Register a named value for pattern matching. Requires approval.
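A script whose work only needs read_stdin and write_stdout can typically run without a single approval prompt. For example, this hypothetical count.md (the prompt text is purely illustrative):
#!/usr/bin/env think
Read the CSV on stdin, count the rows per category,
and write the counts to stdout as JSON.

$ cat data.csv | think count.md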
The approval system
Every sensitive action shows you exactly what's happening and asks before proceeding.
● run_command(weather.md)
$ curl https://api.weather.gov/points/37.7749,-122.4194
❯ Yes
Always
No
Pattern-based approval
This is where it gets interesting. When a command contains a named argument, you get a fourth option.
● run_command(weather.md)
$ curl https://api.weather.gov/points/37.7749,-122.4194
❯ Yes
Always
Always similar (curl {Location}) ← 🤯
No
Approve the pattern once, and future runs with different locations auto-match. Pretty cool, right?
stdout is sacred
This is the key design principle. Only write_stdout writes to stdout. Everything else (debug output, approval prompts, status) goes to stderr. This makes scripts composable.
$ cat data.csv | think process.md | think format.md > output.json
This means you can compose scripts like any other Unix tool.
$ cat raw-data.csv \
| think clean.md \
| think analyze.md \
| think format-report.md > report.md
Install them all and it gets even cleaner.
$ cat raw-data.csv | clean | analyze | format-report > report.md
Pipes, redirects, shebangs — it all works. .md files are first-class citizens in your pipeline.
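And because approval prompts and status go to stderr, redirecting stdout doesn't swallow them. In a run like this (file names are hypothetical), only the script's actual output lands in summary.md; any approvals still show up on your terminal.
$ think summarize.md < notes.txt > summary.md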
Configuration
Config is resolved in order of precedence; higher wins.
1. Environment variables: THINKINGSCRIPT__MODEL, THINKINGSCRIPT__MAX_TOKENS, etc.
2. Script frontmatter
3. ~/.thinkingscript/config.yaml
4. Built-in defaults
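An environment variable is handy for a one-off override; the model name here is the one from the earlier example, and the config sketch below assumes the file uses the same keys as the frontmatter.
$ THINKINGSCRIPT__MODEL=claude-haiku-4-5-20251001 think weather.md

# ~/.thinkingscript/config.yaml (keys assumed to mirror the frontmatter)
model: claude-haiku-4-5-20251001
max_tokens: 4096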
You'll need an Anthropic API key set as ANTHROPIC_API_KEY in your environment.
Two binaries
think: The interpreter. Runs .md scripts.
thought: The management tool. Build, install, cache, and inspect scripts.
Install scripts to your PATH
Use thought install to copy a built script to ~/.thinkingscript/thoughts/. Add that directory to your PATH and you can run any installed script by name.
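With bash or zsh, that usually means adding a line like this to your shell profile:
export PATH="$HOME/.thinkingscript/thoughts:$PATH"
After that, installing and running a script looks like this: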
$ thought install weather.md
$ weather "San Francisco"
San Francisco, CA
62°F, partly cloudy. Wind: 12 mph W. Humidity: 68%.