- If you're running on a Mac and are looking for a (likely incomplete) set of system software prerequisites, here is how I set up my macOS dev env.
- Make sure both git and pnpm are up to date, then install dependencies:

  ```sh
  pnpm install
  git lfs pull
  ```

- Pull environment variables from Vercel:

  ```sh
  pnpm i -g vercel
  vercel login
  vercel link --project kilocode-app
  vercel env pull
  ```

- Run the database:

  ```sh
  cd dev
  docker compose up -d
  ```

- Set up the database:

  ```sh
  pnpm drizzle migrate
  ```

  You will need to rerun this every time you pull new migrations. If you wish to make a new migration, make changes to the schema and run:

  ```sh
  pnpm drizzle generate
  ```

- Run the development server:

  ```sh
  pnpm dev
  ```

- When testing Stripe, start Stripe forwarding (installation via brew, as described in macOS dev env):

  ```sh
  pnpm stripe
  ```

  IMPORTANT: copy the webhook signing secret to `.env.development.local` and assign it to `STRIPE_WEBHOOK_SECRET`.
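  For example, the entry might look like this (the value shown is a placeholder; use the secret printed by the Stripe CLI):

  ```sh
  # .env.development.local
  STRIPE_WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxxxxx
  ```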
Environment variables are managed through Vercel. For local development, pull the latest ones using:

```sh
vercel env pull
```

To add or update a variable:

```sh
vercel env add FOO
vercel env update FOO
```

In TypeScript, you can use the `getEnvVariable` helper for consistent access:

```ts
import { getEnvVariable } from '@/lib/dotenvx';

console.info(getEnvVariable('FOO')); // bar
```

This application uses JWT (JSON Web Tokens) for API authentication. When a user generates a token through the `/api/token` endpoint, they receive a signed JWT that includes:
- Subject (user ID)
- Issuance timestamp
- Expiration date (30 days by default)
- kiloUserId (user identifier)
- version (token version, currently 3)
The tokens are signed using the NEXTAUTH_SECRET environment variable, which should be securely set in your deployment environment.
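For illustration, here is a minimal sketch of issuing such a token with the jose library; the actual implementation behind `/api/token` may use a different library or claim layout:

```ts
import { SignJWT } from 'jose';

const secret = new TextEncoder().encode(process.env.NEXTAUTH_SECRET);

// Claims mirror the list above; the user ID shown is a placeholder.
const token = await new SignJWT({ kiloUserId: 'oauth/google:12345', version: 3 })
  .setProtectedHeader({ alg: 'HS256' })
  .setSubject('oauth/google:12345') // subject (user ID)
  .setIssuedAt()                    // issuance timestamp
  .setExpirationTime('30d')         // 30-day expiry by default
  .sign(secret);
```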
To use a token with the API:
- Obtain a token through the `/api/token` endpoint (requires user authentication)
- Include the token in your API requests using the `Authorization` header:

  ```
  Authorization: Bearer your-jwt-token
  ```
Each token is validated cryptographically using the secret key to ensure it hasn't been tampered with.
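A rough sketch of both sides of this flow, assuming a hypothetical `/api/profile` endpoint and the jose library (the real routes and verification code may differ):

```ts
import { jwtVerify } from 'jose';

// Client side: send the token in the Authorization header.
async function callApi(token: string) {
  return fetch('/api/profile', {
    headers: { Authorization: `Bearer ${token}` },
  });
}

// Server side: verify the signature and reject tampered or expired tokens.
async function verifyRequest(req: Request) {
  const secret = new TextEncoder().encode(process.env.NEXTAUTH_SECRET);
  const bearer = req.headers.get('Authorization')?.replace(/^Bearer /, '') ?? '';
  const { payload } = await jwtVerify(bearer, secret); // throws if invalid or expired
  return payload; // contains sub, iat, exp, kiloUserId, version
}
```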
On 25 March 2025, the token format was updated to version 3 to force all users to log in again and establish a consistent state across Postgres/Orb/Stripe. As part of this change, the kiloUserId prefix was changed from `google:` to `oauth/google:` so that all associations in Orb are fresh.
In March 2025, the token format was updated to version 2, which includes the following changes:
- JWT token field `kiloId` renamed to `kiloUserId`
- JWT version bumped to 2
- All version 1 tokens are invalidated and users will need to re-authenticate
This change standardizes the naming convention across the application and improves clarity.
The ModelSelector component provides a comprehensive interface for selecting AI models and providers. It works by fetching model and provider data from the database through the OpenRouter integration.
- Data Loading: The component uses the `useModelSelector` hook, which internally calls `useOpenRouterModelsAndProviders`
- API Endpoint: The hook fetches data from the `/api/openrouter/models-by-provider` endpoint, which queries the `models_by_provider` database table
- Data Structure: The API returns a normalized structure with providers that include their models directly, along with comprehensive metadata like data policies, pricing, and capabilities
- Filtering & Selection: The component provides extensive filtering options (by provider location, data policy, pricing, context length, etc.) and allows users to select specific models or entire providers
- Fallback Mechanism: If the API request fails or returns invalid data, the hook falls back to a static backup JSON file (`openrouter-models-by-provider-backup.json`) to ensure the application remains functional (see the sketch after this list)
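As a rough illustration, a client-side hook with this fetch-then-fallback behavior might look like the following; the endpoint and backup file names come from the list above, while the types and state handling are assumptions:

```tsx
// Rough sketch only: real types, caching, and error handling will differ.
import { useEffect, useState } from 'react';
// Static backup bundled with the app (file name from the description above).
import backup from './openrouter-models-by-provider-backup.json';

// Assumed payload shape; the real hook has proper provider/model types.
type ModelsByProvider = Record<string, unknown>;

export function useOpenRouterModelsAndProviders(): ModelsByProvider {
  const [data, setData] = useState<ModelsByProvider>(backup as ModelsByProvider);

  useEffect(() => {
    let cancelled = false;
    fetch('/api/openrouter/models-by-provider')
      .then((res) => {
        if (!res.ok) throw new Error(`HTTP ${res.status}`);
        return res.json();
      })
      .then((json) => {
        // Only replace the backup with a well-formed payload.
        if (!cancelled && json && typeof json === 'object') setData(json);
      })
      .catch(() => {
        // Network or parse error: keep serving the static backup.
      });
    return () => {
      cancelled = true;
    };
  }, []);

  return data;
}
```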
The model and provider data is stored in the models_by_provider database table and needs to be periodically synchronized with OpenRouter's API to ensure we have the latest information. The synchronization process populates the database table, which is then served through the API endpoint with edge caching (60 seconds) for optimal performance.
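To illustrate the 60-second edge caching, a hypothetical Next.js route handler could look like this; the database helper is a stand-in for the real query against `models_by_provider`:

```ts
import { NextResponse } from 'next/server';

// Stand-in for the real Drizzle query against the models_by_provider table.
async function loadModelsByProvider(): Promise<Record<string, unknown>> {
  return {}; // placeholder
}

export async function GET() {
  const providers = await loadModelsByProvider();
  return NextResponse.json(providers, {
    // Cache at the edge for 60 seconds; serve stale content while revalidating.
    headers: { 'Cache-Control': 'public, s-maxage=60, stale-while-revalidate=30' },
  });
}
```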
A backup JSON file is maintained for fallback purposes and can be updated using:

```sh
pnpm script:run openrouter sync-providers-backup
```

This script (`sync-providers-backup.ts`) fetches the latest provider information and models from OpenRouter's API and generates the backup JSON file.
You can generate dependency graphs for any source file using dependency-cruiser. This requires graphviz to be installed (brew install graphviz on macOS).
To generate a dependency graph for a specific file:
```sh
npx depcruise src/path/to/file.tsx --include-only "^src" --output-type dot | dot -T svg > /tmp/dependency-graph.svg && open /tmp/dependency-graph.svg
```

For example, to visualize dependencies for the sign-in page:

```sh
npx depcruise src/app/users/sign_in/page.tsx --include-only "^src" --output-type dot | dot -T svg > /tmp/dependency-graph.svg && open /tmp/dependency-graph.svg
```

The `--include-only "^src"` flag limits the graph to files within the `src` directory, excluding external dependencies.
To learn more about Next.js, take a look at the following resources:
- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.
The application supports read replicas for multi-region deployment. To test this locally:
- Start both database containers:

  ```sh
  cd dev
  docker compose up -d
  ```

  This starts:

  - Primary database on port `5432`
  - Replica database on port `5433`

- Run migrations on both databases:

  ```sh
  pnpm drizzle migrate
  POSTGRES_URL=postgresql://postgres:postgres@localhost:5433/postgres pnpm drizzle migrate
  ```

- Add to your `.env.development.local`:

  ```sh
  POSTGRES_REPLICA_US_URL=postgresql://postgres:postgres@localhost:5433/postgres
  VERCEL_REGION=sfo1
  ```

Setting `VERCEL_REGION` to a US region (`sfo1`, `iad1`, `pdx1`, or `cle1`) will make the app use the replica for read operations via `readDb`.
Note: This is a simplified setup where both databases are independent (not actually replicating). This allows testing the code paths and connection logic without setting up true PostgreSQL streaming replication. For production, Supabase handles the actual replication.
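A minimal sketch of how region-based read routing could work, assuming the postgres-js driver with Drizzle; the real `readDb` helper may be structured differently:

```ts
import postgres from 'postgres';
import { drizzle } from 'drizzle-orm/postgres-js';

const US_REGIONS = new Set(['sfo1', 'iad1', 'pdx1', 'cle1']);

const primary = drizzle(postgres(process.env.POSTGRES_URL!));
const usReplica = process.env.POSTGRES_REPLICA_US_URL
  ? drizzle(postgres(process.env.POSTGRES_REPLICA_US_URL))
  : null;

// Writes always go to the primary; reads use the US replica when the code
// runs in a US Vercel region and a replica URL is configured.
export const db = primary;
export const readDb =
  usReplica && US_REGIONS.has(process.env.VERCEL_REGION ?? '') ? usReplica : primary;
```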
‼️ Use this with caution!
You can spin up a dev server to hit our production database. Just run:

```sh
pnpm dev:prod-db
```

Log in with the fake-login provider with an email like:

```
my-fullname@admin.example.com
```
To test the app behind an HTTPS tunnel, you can use ngrok. First, install it:
```sh
brew install ngrok
```

Then, start the dev server and expose it:

```sh
ngrok http 3000
```

This will print the URL being used; copy it and write it to your `.env.development.local` file, like this:

```sh
APP_URL_OVERRIDE=https://lucile-unparenthesized-subaurally.ngrok-free.dev/
NEXTAUTH_URL=https://lucile-unparenthesized-subaurally.ngrok-free.dev/
```

Then restart the dev server, and you should be able to access the app behind the tunnel.
Some OAuth providers restrict redirect_uri to be HTTPS, and others explicitly block localhost for security reasons.