Local, agentic Q&A for your computer. Ask questions about your files, tools, configs, and projects. Runs on Ollama -- no cloud, no API keys, no token costs.
Cloud LLM APIs charge per token. Asking "where are my configs?" shouldn't cost money. If you have a Mac with Apple Silicon (or a GPU), you already have the compute. ref uses it.
git clone https://github.com/jaimejim/ref.git
cd ref && chmod +x ref
ln -sf "$(pwd)/ref" /usr/local/bin/ref

Requires: Python 3.12+, uv, Ollama, ripgrep.
Pull a model, then run ref init:
| Model | Size | Quality | Notes |
|---|---|---|---|
| qwen3.5:4b | 2.7GB | Very good | Best small model, fast, detailed answers |
| qwen3:4b | 2.5GB | Good | Lightweight, concise |
| granite4:3b | 2.1GB | Decent | Smallest and fastest, default choice |
| glm-4.7-flash-64k | 19GB | Excellent | 64k context, most reliable |
| gpt-oss | 13GB | Good | Solid reasoning |
Models without tool-calling support (won't work): gemma3, phi4.
ollama pull granite4:3b
ref init                                         # first-run setup (re-run to scan for new paths)
ref "where are my shell configs?" # ask anything
ref -f ~/code/myproject/ "what does this do?" # target a path
ref -v "what shell plugins do I use?" # verbose: show tool calls
ref add ~/Documents/notes # add a source
ref remove ~/Documents/notes # remove a source
ref config                                       # show setup

cat ~/.zshrc | ref "what aliases do I have?"
pbpaste | ref "explain this code"
git diff | ref "summarize these changes"
curl -s https://example.com/api | ref "what endpoints are there?"

-v shows what the model is doing -- which tools it calls and what it finds:
$ ref -v "where is my git config?"
→ _tool_search(query='git config')
← Found files: ... (3 lines)
→ _tool_read(path='/Users/you/.gitconfig')
← === .gitconfig (245 chars) === ... (12 lines)
Your git config is at ~/.gitconfig. It sets user.name to...
(2.1s)
Add to your .zshrc for quick access:
alias wref='ref -f .'   # ask about the current directory

Then from any project folder:
cd ~/code/recetario
wref "what does this project do?"
wref "how do I run the tests?"
wref "what dependencies does it use?"
cd ~/code/ivd
wref "list all CLI commands"
wref "where is the scanner logic?"

- ref init scans your machine for common locations (shell configs, code dirs, notes)
- You ask a question; the model gets your configured paths
- It uses tools to search, read, and list files -- deciding what to look at
- It loops (up to 8 steps) until it has enough info, then answers
No embeddings, no vector DB, no indexing. The model explores your filesystem with tools.
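The loop above can be sketched in a few lines. This is a toy illustration, not ref's actual code: the model function, tool names, and transcript shape are all made up here (ref's real loop talks to Ollama and uses its own tool implementations), but the control flow -- call the model, run the tool it picks, feed the result back, stop at 8 steps or when it answers -- is the pattern described.

```python
# Illustrative sketch of an agentic tool loop capped at 8 steps.
# fake_model and TOOLS are stand-ins, not ref's internals.

MAX_STEPS = 8  # ref stops after 8 tool calls

TOOLS = {
    # Toy stand-ins for ref's search/read tools.
    "search": lambda query: f"Found files matching {query!r}: ~/.gitconfig",
    "read":   lambda path: f"=== {path} ===\n[user] name = you",
}

def fake_model(question, transcript):
    """Stub model: searches, reads, then answers."""
    if not transcript:
        return {"tool": "search", "args": {"query": question}}
    if len(transcript) == 1:
        return {"tool": "read", "args": {"path": "~/.gitconfig"}}
    return {"answer": "Your git config is at ~/.gitconfig."}

def run_agent(question, model=fake_model):
    transcript = []
    for _ in range(MAX_STEPS):
        step = model(question, transcript)
        if "answer" in step:                       # enough info: stop and answer
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])
        transcript.append((step["tool"], result))  # model sees the tool output
    return "Ran out of steps without an answer."

print(run_agent("git config"))
```

Because the model decides each step from what the previous tools returned, there is nothing to pre-index: no embeddings, no vector DB.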
Config lives at ~/.ref/config.json:
{
"model": "qwen3.5:4b",
"paths": [
{"path": "/home/you/.zshrc", "label": "shell config"},
{"path": "/home/you/code/", "label": "code projects"}
]
}

- Read-only: never writes, modifies, or deletes files
- Sandboxed: tools only access paths in your config
- -f bypasses sandboxing for the path you explicitly provide
- No network access beyond localhost Ollama
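A sandbox check like this is typically just a prefix test on resolved paths. The sketch below is hypothetical -- function and variable names are illustrative, not ref's code -- and assumes a path is allowed only if it falls under a configured root (or under the root passed via -f):

```python
# Hypothetical sandbox check: allow a path only under a configured root.
from pathlib import Path

ALLOWED = [Path("/home/you/.zshrc"), Path("/home/you/code")]

def is_allowed(candidate, extra_root=None):
    """True if candidate falls under a configured path (or the -f root)."""
    roots = ALLOWED + ([Path(extra_root)] if extra_root else [])
    p = Path(candidate)  # real code would also resolve() to defeat symlink/.. tricks
    return any(p == root or p.is_relative_to(root) for root in roots)

print(is_allowed("/home/you/code/ref/ref.py"))            # under a configured root
print(is_allowed("/etc/passwd"))                          # outside the sandbox
print(is_allowed("/tmp/proj/x", extra_root="/tmp/proj"))  # -f override
```

Path.is_relative_to (Python 3.9+) does the containment test; resolving symlinks first matters in real code so a link inside an allowed directory can't point outside it.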
MIT