Deploying UsefulKey as a service
Run a central key service with Docker, configure adapters, and operate it in production.
When to run as a service
- Use a central service when multiple apps must share the same keys/policies.
- Otherwise, embedding UsefulKey in-process is simplest (see the sketch below).
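For the embedded case, one instance per process is enough. A minimal sketch, assuming in-memory default adapters and a `verifyKey` method; the empty config and the result shape are assumptions, so check the UsefulKey API reference for the exact names:

```ts
import { usefulkey } from 'usefulkey';

// One in-process instance; options omitted here, assuming in-memory defaults.
const uk = usefulkey({});

// Hypothetical guard for an incoming request.
export async function isAuthorized(apiKey: string): Promise<boolean> {
  // `verifyKey` and its result shape are assumptions; adapt to your UsefulKey version.
  const res = await uk.verifyKey({ key: apiKey });
  return res?.result?.valid === true;
}
```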
Docker Compose (Postgres + Redis optional)
```yaml
version: '3.9'
services:
  usefulkey:
    image: node:20-alpine
    working_dir: /srv/app
    environment:
      DATABASE_URL: ${DATABASE_URL}
      REDIS_URL: ${REDIS_URL}
      NODE_ENV: production
    volumes:
      - ./:/srv/app
    command: node dist/server.js
    ports:
      - "3000:3000"
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    ports: ["5432:5432"]
  redis:
    image: redis:7
    ports: ["6379:6379"]
```
Your `server.js` should export HTTP endpoints backed by a `usefulkey` instance configured with production adapters:
```ts
import { usefulkey, PostgresKeyStore, RedisRateLimitStore, ConsoleAnalytics } from 'usefulkey';

export const uk = usefulkey({
  adapters: {
    keyStore: new PostgresKeyStore({ connectionString: process.env.DATABASE_URL! }),
    rateLimitStore: new RedisRateLimitStore({ url: process.env.REDIS_URL! }),
    analytics: new ConsoleAnalytics(),
  },
});
```
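Continuing in the same `server.js`, a minimal HTTP wrapper might look like the sketch below. The route path, the `verifyKey` call, and its result handling are assumptions; adapt them to the endpoints you actually expose.

```ts
import { createServer } from 'node:http';

export const server = createServer(async (req, res) => {
  // Single illustrative endpoint: POST /v1/keys.verify with a JSON body { "key": "..." }.
  if (req.method === 'POST' && req.url === '/v1/keys.verify') {
    let body = '';
    for await (const chunk of req) body += chunk;
    const { key } = JSON.parse(body || '{}');

    // `verifyKey` is an assumption; substitute the UsefulKey call your endpoints use.
    const verification = await uk.verifyKey({ key });

    res.writeHead(200, { 'content-type': 'application/json' });
    res.end(JSON.stringify(verification));
    return;
  }

  res.writeHead(404).end();
});

server.listen(3000);
```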
Health and readiness
- Expose a `/healthz` endpoint that verifies DB connectivity or checks `adapter.ready` when present (see the sketch below).
- Await `uk.ready` during boot if your plugins perform async setup.
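Continuing the same file, a sketch of both points; the `adapters.keyStore` property path is an assumption:

```ts
// During boot, before listen(): wait for plugins that perform async setup.
await uk.ready;

// Readiness handler; wire it into GET /healthz on the server above.
export async function healthz(): Promise<{ status: number; body: string }> {
  try {
    // If the key store adapter exposes a readiness hook, check it as well.
    const keyStore = (uk as any).adapters?.keyStore; // property path is an assumption
    if (keyStore?.ready) await keyStore.ready;
    return { status: 200, body: 'ok' };
  } catch {
    return { status: 503, body: 'not ready' };
  }
}
```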
Graceful shutdown
- Drain incoming requests, then close DB/Redis connections.
- If using analytics with batching (e.g., ClickHouse), flush pending events on shutdown (see the sketch below).
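Draining and flushing together, continuing the `server.js` sketch; the `flush` and `close` hooks on the adapters are assumptions, so guard for their presence:

```ts
async function shutdown() {
  // 1. Stop accepting new connections; in-flight requests are allowed to finish.
  await new Promise<void>((resolve) => server.close(() => resolve()));

  // 2. Flush batched analytics events if the adapter supports it (assumed method name).
  const analytics = (uk as any).adapters?.analytics;
  if (typeof analytics?.flush === 'function') await analytics.flush();

  // 3. Close DB/Redis connections if the adapters expose a close hook (also assumed).
  for (const adapter of [(uk as any).adapters?.keyStore, (uk as any).adapters?.rateLimitStore]) {
    if (typeof adapter?.close === 'function') await adapter.close();
  }

  process.exit(0);
}

process.on('SIGTERM', () => void shutdown());
process.on('SIGINT', () => void shutdown());
```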
Scaling notes
- Stateless processes can scale horizontally; use distributed adapters (Redis/Postgres) for the rate limit store and key store.
- Prefer idempotent handlers; retries are common in platform orchestrators.
Security
- Terminate TLS at the proxy or run the service behind an API gateway.
- Avoid logging plaintext keys; log only the stable `id` and key prefix (see the sketch below).
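A small helper keeps plaintext keys out of log lines; the prefix length and log fields below are illustrative:

```ts
// Reduce a plaintext key to a short, non-reversible prefix for log lines.
function keyPrefix(plaintextKey: string, visible = 8): string {
  return `${plaintextKey.slice(0, visible)}...`;
}

// Log the stable key id plus a prefix, never the full key.
export function logVerification(keyId: string, plaintextKey: string, ok: boolean): void {
  console.info(
    JSON.stringify({ event: 'key.verify', keyId, keyPrefix: keyPrefix(plaintextKey), ok })
  );
}
```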