Your finances, all in one place — with an AI that handles the heavy lifting so you don't have to.
Bill Helper is a personal finance ledger with a built-in AI assistant. Track entries, categorize spending, reconcile accounts, and ask your agent anything — from a receipt scan to a month-end summary. Everything lives together, everything talks to everything else, and you stay in control the whole time.
Chat with the agent like a colleague. Drop in a receipt photo, a PDF bank statement, or just describe what happened — the agent reads your ledger, thinks it through, and proposes changes. You see every diff. You approve or reject. Nothing ever lands without your say-so. No babysitting your data, no manual grunt work.
Entries, accounts, entities, tags, groups, spending analytics, reconciliation — it's all connected. The agent has the full picture and can act across any of it in a single conversation.
The dashboard breaks your money into filter groups — day-to-day, one-time, fixed, transfers, income — and renders timelines, category breakdowns, and trends. Actually understand where your money goes, month by month.
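The five filter groups lend themselves to simple per-month aggregation. A minimal sketch of that idea in Python (the `Entry` shape and `monthly_totals` helper are hypothetical illustrations, not the real data model):

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

# Hypothetical entry shape; the real ledger model is richer.
@dataclass
class Entry:
    amount: float
    group: str   # "day-to-day", "one-time", "fixed", "transfers", "income"
    on: date

def monthly_totals(entries: list[Entry]) -> dict[tuple[str, str], float]:
    """Sum entries per (YYYY-MM, filter group) for timeline rendering."""
    totals: dict[tuple[str, str], float] = defaultdict(float)
    for e in entries:
        totals[(e.on.strftime("%Y-%m"), e.group)] += e.amount
    return dict(totals)

entries = [
    Entry(42.50, "day-to-day", date(2025, 1, 5)),
    Entry(12.00, "day-to-day", date(2025, 1, 9)),
    Entry(1500.00, "fixed", date(2025, 1, 1)),
]
print(monthly_totals(entries)[("2025-01", "day-to-day")])  # 54.5
```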
Upload a bank PDF or a receipt image. The agent parses it with Docling OCR, reasons about the contents, and returns structured entries ready for your review. Categorizing a month of transactions takes minutes, not hours.
Attach balance snapshots to any account. Get an interval-by-interval view of what the bank says changed vs. what you tracked — with a clear delta you can act on.
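The delta logic can be sketched like so (a hypothetical `Snapshot` shape and field names, chosen for illustration; the real schema is richer):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical snapshot shape for illustration only.
@dataclass
class Snapshot:
    on: date
    balance: float  # what the bank says

def interval_delta(prev: Snapshot, curr: Snapshot, tracked_change: float) -> float:
    """Delta between what the bank says changed and what the ledger tracked.

    Zero means the interval reconciles; nonzero is the amount the ledger
    is missing (or double-counting) for that interval.
    """
    bank_change = curr.balance - prev.balance
    return bank_change - tracked_change

prev = Snapshot(date(2025, 1, 1), 1000.00)
curr = Snapshot(date(2025, 2, 1), 850.00)
# The ledger only tracked -120.00 of spending in January:
print(interval_delta(prev, curr, -120.00))  # -30.0
```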
- Web app — full-featured React interface
- Telegram — quick entry capture and queries from your phone
- iOS — SwiftUI app for on-the-go access (partial, actively expanding)
Plug in any LiteLLM-compatible provider: Anthropic, OpenAI, OpenRouter, AWS Bedrock, or anything behind a compatible API. You're not locked into one model or one vendor — but yes, the agent calls an external API. Fully local model support is on the roadmap.
- Python 3.13+
- Node.js 18+
- uv
- Docker (for the AI workspace — skip with `BILL_HELPER_AGENT_WORKSPACE_ENABLED=0`)
```bash
git clone https://github.com/ScottCTD/bill_helper.git
cd bill_helper
uv sync
cd frontend && npm install && cd ..
./scripts/setup_shared_env.sh --clean
```

Open the generated `.env` and add your LLM provider key:
```bash
AWS_BEARER_TOKEN_BEDROCK=...      # Bedrock
# or OPENROUTER_API_KEY=...       # OpenRouter
# or OPENAI_API_KEY=...           # OpenAI
# or ANTHROPIC_API_KEY=...        # Anthropic
```

Full reference: docs/development.md · .env.example
```bash
uv run alembic upgrade head
```

```bash
docker build -t bill-helper-agent-workspace:latest -f docker/agent-workspace.dockerfile .
```

This packages the `bh` CLI and a browser IDE into an isolated Docker container where the agent runs. Rebuild it whenever you change backend or `bh` CLI code.
```bash
uv run python scripts/bootstrap_admin.py --name admin --password admin
```

```bash
./scripts/dev_up.sh
```

Telegram polling is skipped by default. To opt in:
```bash
./scripts/dev_up.sh --with-telegram
```

| Surface | URL |
|---|---|
| 🌐 Web app | http://localhost:5173 |
| ⚡ API | http://localhost:8000/api/v1 |
| 📖 API docs | http://localhost:8000/docs |
Sign in at /login and you're live.
All settings use the `BILL_HELPER_` prefix and can be set via `.env` or the in-app Settings page.
| Variable | Default | Description |
|---|---|---|
| `BILL_HELPER_AGENT_MODEL` | `bedrock/us.anthropic.claude-haiku-4-5-20251001-v1:0` | LiteLLM model string |
| `BILL_HELPER_AGENT_MAX_STEPS` | `100` | Max tool-call steps per run |
| `BILL_HELPER_AGENT_WORKSPACE_ENABLED` | `true` | Enable per-user Docker workspace |
| `BILL_HELPER_AGENT_WORKSPACE_IMAGE` | `bill-helper-agent-workspace:latest` | Workspace Docker image tag |
| `BILL_HELPER_AGENT_WORKSPACE_DOCKER_BINARY` | `docker` | Docker CLI binary path |
| `BILL_HELPER_WORKSPACE_BACKEND_BASE_URL` | `http://host.docker.internal:8000/api/v1` | API URL reachable from inside the workspace |
| `BILL_HELPER_DEFAULT_CURRENCY_CODE` | `CAD` | Default currency for new entries |
| `BILL_HELPER_DASHBOARD_CURRENCY_CODE` | `CAD` | Currency shown in the dashboard |
| `CURRENT_USER_TIMEZONE` | `America/Toronto` | Timezone for agent date context |
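For example, a self-hosted instance might override a few of these in `.env` (illustrative values only; any LiteLLM-compatible model string works):

```bash
# .env overrides (illustrative, not required defaults)
BILL_HELPER_AGENT_MODEL=openrouter/anthropic/claude-sonnet-4.5
BILL_HELPER_DEFAULT_CURRENCY_CODE=USD
BILL_HELPER_DASHBOARD_CURRENCY_CODE=USD
CURRENT_USER_TIMEZONE=America/New_York
```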
Bill Helper is a prototype with a clear vision. Here's what's actively being planned or thought about:
The agent needs to be tested against a diverse set of real-world scenarios — complex receipts, multi-currency statements, ambiguous descriptions, bulk imports, edge-case categorization. The goal is a reproducible benchmark suite that measures proposal quality, step count, and accuracy across the full feature surface. This is a high-priority next step before expanding the model catalog.
Connect Gmail and Outlook mailboxes to automatically surface transaction-related emails (bank alerts, receipts, invoices) as import candidates. The agent would parse each email, propose entries, and route them through the standard review workflow — no automated writes, same approval model as today.
A single docker compose setup that bundles the backend, pre-built frontend static files, optional Telegram bot, and the agent workspace image. Goal: one command to run a fully production-ready self-hosted instance on any machine with Docker.
The iOS app currently covers roughly 15 of ~60 API endpoints — read-only views, basic navigation, no real auth flow. The plan is to close that gap: entry creation, full agent interaction, account management, and proper session handling.
LiteLLM handles most of the model abstraction today, but the OpenAI Responses API (vs. the Completions API) unlocks streaming improvements and new capabilities. Adding first-class support is on the list.
An optional lightweight SQLite inside the per-user sandbox — not a replica of the authoritative ledger, but a scratchpad the agent can use to cache context, run exploratory queries, and reason across multi-step tasks without hammering the API.
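A rough sketch of the scratchpad idea (hypothetical table and queries, not the real implementation):

```python
import sqlite3

# An in-sandbox scratchpad: cache fetched ledger rows locally so a
# multi-step task can re-query them without repeated API calls.
con = sqlite3.connect(":memory:")  # a per-task scratch file in practice
con.execute("CREATE TABLE scratch_entries (id INTEGER, amount REAL, category TEXT)")
con.executemany(
    "INSERT INTO scratch_entries VALUES (?, ?, ?)",
    [(1, 9.99, "coffee"), (2, 42.00, "groceries"), (3, 5.25, "coffee")],
)

# An exploratory query the agent might run while reasoning:
total = con.execute(
    "SELECT SUM(amount) FROM scratch_entries WHERE category = ?", ("coffee",)
).fetchone()[0]
print(round(total, 2))  # 15.24
```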
Automated ingestion from bank exports and CSV files — every imported transaction still goes through the review pipeline before landing in the ledger.
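The shape of such an import could look like this (the column names and `ProposedEntry` type are assumptions for illustration; real bank exports vary and need mapping):

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class ProposedEntry:
    """An import candidate awaiting user approval, never written directly."""
    date: str
    description: str
    amount: float
    status: str = "pending_review"

def parse_bank_csv(text: str) -> list[ProposedEntry]:
    # Assumed column headers; a real importer maps per-bank formats.
    reader = csv.DictReader(io.StringIO(text))
    return [
        ProposedEntry(row["Date"], row["Description"], float(row["Amount"]))
        for row in reader
    ]

sample = "Date,Description,Amount\n2025-03-01,GROCER MART,-58.20\n2025-03-02,PAYROLL,2400.00\n"
proposals = parse_bank_csv(sample)
print(len(proposals), proposals[0].status)  # 2 pending_review
```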
Multi-currency support with live or cached exchange rates so the dashboard can present a unified view across currencies.
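A minimal sketch of the cached-rate conversion (illustrative rates and names, not the real implementation):

```python
# Cached exchange rates into the dashboard currency (CAD here).
# Values are illustrative; a real implementation would fetch and cache them.
RATES_TO_CAD = {"CAD": 1.0, "USD": 1.35, "EUR": 1.47}

def to_dashboard_currency(amount: float, currency: str) -> float:
    """Convert an entry amount into the dashboard currency."""
    return amount * RATES_TO_CAD[currency]

entries = [(100.00, "USD"), (50.00, "EUR"), (20.00, "CAD")]
total_cad = sum(to_dashboard_currency(a, c) for a, c in entries)
print(round(total_cad, 2))  # 228.5
```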
Right now the agent depends on an external LLM API. The goal is to support locally-hosted models (Ollama and similar) so the entire stack — app, agent, and model — can run completely offline on your own hardware.
```bash
# Backend only
uv run bill-helper-api

# Frontend only
cd frontend && npm run dev

# Backend tests (fast)
OPENROUTER_API_KEY=test uv run pytest backend/tests -q -m "not workspace_docker"

# Backend workspace tests (requires Docker)
OPENROUTER_API_KEY=test uv run pytest backend/tests/test_agent_workspace.py -q -m workspace_docker

# Frontend tests
cd frontend && npm run test

# Frontend e2e (Playwright)
cd frontend && npm run test:e2e

# Design and docs consistency checks
uv run python scripts/check_llm_design.py
uv run python scripts/check_docs_sync.py

# Rebuild workspace image after backend / bh changes
docker build -t bill-helper-agent-workspace:latest -f docker/agent-workspace.dockerfile .
```

```
backend/          FastAPI application — routers, services, models, agent runtime
frontend/         React + Vite web app
  src/features/   Feature modules (agent, dashboard, entries, accounts, …)
ios/              SwiftUI iOS app (partial coverage)
telegram/         Telegram bot transport
alembic/          Database migrations
docker/           Dockerfiles, including the agent workspace image
scripts/          Dev, seed, and maintenance scripts
docs/             Extended documentation
```
- Docs index
- Architecture
- Backend · Frontend
- API reference
- Features
- Data model
- Development guide
- Completed tasks archive
- All API routes are Bearer-token protected.
- Admins can manage users, sessions, and run impersonation from `/admin`.
- Agent uploads are stored per-user under `{data_dir}/user_files/{user_id}/uploads`.
- Playwright e2e tests spin up the backend against a disposable copy of the data dir — your primary database is never touched.
- Telegram supports both bearer token (`TELEGRAM_BACKEND_AUTH_TOKEN`) and custom proxy headers (`TELEGRAM_BACKEND_AUTH_HEADERS`).
MIT