/docs
Run the same daemon I run. Local-first. Pre-alpha. Tools for multi-project founders.
Architecture
One long-running Python daemon. Seven surfaces.
Operator Core is a single process that boots seven threads. Each is optional — drop any of them with a CLI flag — and failure in one does not take down the rest.
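The fault isolation described above can be sketched with a small supervisor wrapper. This is a minimal illustration of the pattern, not the actual implementation; the surface names are made up:

```python
import threading
import traceback

def supervised(name, target):
    """Start `target` on its own thread; a crash is logged, not fatal."""
    def runner():
        try:
            target()
        except Exception:
            # One surface failing must not take down the rest.
            print(f"[{name}] surface crashed:\n{traceback.format_exc()}")
    thread = threading.Thread(target=runner, name=name, daemon=True)
    thread.start()
    return thread

# Boot two illustrative surfaces; the second crashes immediately,
# but the process (and the first surface) carries on.
surfaces = [
    supervised("scheduler", lambda: None),
    supervised("discord", lambda: 1 / 0),
]
for t in surfaces:
    t.join()
```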
```
operator run
     |
     +----------------+----------------+----------------------+
     |                |                |                      |
 HTTP hooks       Scheduler     Snapshot publisher       Discord bot
  (:8765)          (cron)      (30-min -> Supabase)      (optional)
     |                |                |                      |
     +-------+--------+----------------+-----------+----------+
             |                                     |
     JobStore (sqlite)                  operator-site /kruz
     ~/.operator/data/                  (public broadcast)
```
```
hooks   -> /api/hooks/*   (Claude Code hook endpoints)
status  -> /api/status    (local read-only status)
metrics -> /api/metrics   (Prometheus-flavored)
remote  -> /api/remote    (remote trigger bridge)
ops UI  -> /ops           (local dashboard)
```
Data at rest: ~/.operator/config.toml is the only knob the daemon reads; ~/.operator/data/ holds the sqlite ledger, status.json, scheduler state, and rotating logs. Nothing else on your box is touched.
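As a sketch, the on-disk layout implied above; file names other than config.toml, status.json, and schedule.json are assumptions:

```
~/.operator/
├── config.toml        # the one config file the daemon reads
└── data/
    ├── operator.db    # sqlite job ledger (filename is an assumption)
    ├── status.json    # latest status
    ├── schedule.json  # scheduler state
    └── logs/          # rotating logs (layout is an assumption)
```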
Install
Python 3.11+. Works on macOS, Linux, Windows.
Install from source today (PyPI release coming). Clone the repo and use an editable install so updates are a `git pull` away.
```shell
git clone https://github.com/kjhholt-alt/operator-core.git
cd operator-core
pip install -e ".[discord,status]"  # optional extras; quoted so zsh doesn't glob the brackets
operator version                    # sanity check
```
Config
One TOML file. Versionable. No secrets inside.
```shell
operator init                    # writes ~/.operator/config.toml
$EDITOR ~/.operator/config.toml  # fill in projects_root + projects
operator doctor                  # validate env + connectivity
```
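A filled-in config might look like the sketch below. Only `projects_root` and per-project entries are implied by the steps above; the exact key names and table shape are assumptions:

```toml
# ~/.operator/config.toml (sketch; key names beyond projects_root are assumptions)
projects_root = "~/code"

[[projects]]
name = "operator-site"

[[projects]]
name = "operator-core"
```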
Secrets (Supabase keys, Discord webhooks, bot tokens) live in environment variables — the daemon reads .env from the working directory on startup.
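For example, a `.env` next to the daemon might hold something like the following; the variable names here are hypothetical, not the daemon's actual keys:

```shell
SUPABASE_URL=https://YOURPROJECT.supabase.co
SUPABASE_SERVICE_KEY=...
DISCORD_WEBHOOK_DEPLOYS=https://discord.com/api/webhooks/...
DISCORD_BOT_TOKEN=...
```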
Run the daemon
Foreground by default. Use your OS scheduler for background.
```shell
operator run                              # foreground, all surfaces on
operator run --once                       # boot + one snapshot + exit
operator run --no-discord --no-scheduler  # surfaces off for debugging
operator snapshot                         # publish one snapshot now
```
For always-on operation, wire it into the OS:
- Windows: `scripts/Register-Operator.ps1` registers a respawn-every-5-min Task Scheduler job with a PID guard (no elevation required).
- macOS: a launchd plist bundled with operator-core is coming; today, run under `brew services` or `tmux`.
- Linux: a systemd user unit is recommended; a template ships in the repo.
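On Linux, a minimal systemd user unit might look like this. It is a sketch only: the template in the repo is authoritative, and the ExecStart path is an assumption:

```ini
# ~/.config/systemd/user/operator.service
[Unit]
Description=Operator Core daemon

[Service]
ExecStart=%h/.local/bin/operator run
Restart=on-failure
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now operator.service`.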
Scheduled tasks
The cron inside the daemon. One command away from anywhere.
A task is a named action the scheduler fires on a cadence (daily, weekly, monthly). Built-in tasks cover the morning loop; add your own by editing ~/.operator/data/schedule.json or via the Discord bot. Toggle any task without editing source:
```shell
operator tasks list                     # table of every task + state
operator tasks run morning-briefing     # run it now, out-of-cadence
operator tasks disable marketing-pulse  # stop cadence, keep registered
operator tasks enable marketing-pulse   # resume
```
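The task registry on disk could be sketched as follows. Only the path `~/.operator/data/schedule.json` comes from the docs above; the JSON shape and field names are assumptions:

```json
{
  "morning-briefing": { "cadence": "daily",  "enabled": true  },
  "marketing-pulse":  { "cadence": "weekly", "enabled": false }
}
```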
Cookbook: common recipes expressed as tasks.
CLI reference
Every verb. Every flag.
```shell
operator init                      # bootstrap ~/.operator/config.toml
operator config path               # print config file path
operator config show               # print parsed config + env check
operator doctor                    # validate config + env + connectivity

operator run [flags]               # start the daemon
  --host <ip> --port <n>
  --no-discord --no-scheduler --no-snapshot
  --once
  --snapshot-interval <s>
  --log-level <debug|info|warn|error>
  --log-file <path>

operator snapshot                  # publish one snapshot to Supabase
operator snapshot --dump           # print the JSON payload, don't send

operator tasks list [--json]
operator tasks run <key>
operator tasks enable <key>
operator tasks disable <key>

operator status [--once] [--json]  # terminal dashboard (Rich or ASCII)
operator version
```
Integrations
Operator talks out. Nobody calls in.
- Discord: per-channel webhook URLs in env; posts morning briefings, PR reviews, deploy alerts, and task results. Optional bot for slash commands.
- Supabase: snapshots are POSTed to a public-read table so the /kruz page can render them server-side.
- Vercel: deployment webhooks hit `/api/webhooks/vercel` and relay to your #deploys channel with HMAC verification.
- Claude Code: hook endpoints at `http://127.0.0.1:8765/api/hooks/*` receive SessionStart / PreToolUse / PostToolUse events for lifecycle observability.
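HMAC verification of an incoming deploy webhook can be sketched as below. The hex-digest encoding and the idea of a shared secret in env are assumptions about the scheme, not the daemon's actual code:

```python
import hashlib
import hmac

def verify_signature(body: bytes, signature: str, secret: str) -> bool:
    """Constant-time check of a hex HMAC-SHA256 over the raw request body."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking where the strings first differ
    return hmac.compare_digest(expected, signature)
```

A handler would compute this over the raw bytes before parsing the JSON, and reject the request when it returns False.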
Philosophy
Why this exists. What it will and will not be.
- Local-first. Your data stays on your machine. The /kruz broadcast is an opt-in, sanitized slice: never prompts, secrets, or PR URLs.
- No telemetry. No phone-home. No analytics SDKs. You publish when you choose to.
- Dogfood-first. The roadmap is what I need for my own portfolio. If it's useful to you, great. If it isn't, the source is yours to fork.
- No lock-in. Config is TOML. State is sqlite + JSON. Snapshots are Postgres rows you own. Nothing proprietary between the daemon and your tools.