| Requirement | Version | Purpose |
|---|---|---|
| Node.js | v20+ | runs the backend and CLI |
| git | any | the installer uses it to clone and update |
| npm | v8+ | installs workspace deps |
| Python | 3.10+ | only for llama-cpp-python local AI |
| llama-server | any | running local GGUF models |

👤 For humans
One-click install that creates an app shortcut (Launchpad on Mac, Start Menu on Windows, app launcher on Linux). No terminal required after install. Just click the Asyncat icon like any normal app.
Linux / macOS
Windows (PowerShell)
⌨️ For terminal gremlins
Install asyncat as a global CLI command. You'll run `asyncat start` from your terminal each time you want to launch it.
npm (global)
Clone manually
The database (`data/asyncat.db`) is created automatically on first boot. No setup needed.
Streaming chat that reads and writes across your workspace. Build mode scaffolds entire projects.
Drag-and-drop boards with dependencies, time tracking, and multiple views (list, gallery, Gantt, network).
Block-based rich editor with 20+ block types, real-time cursors, version history, and chart blocks.
Events, invites, color codes, and project linking.
XP gamification, streaks, and team leaderboards.
Spaced-repetition flashcards (SM-2), active recall quizzes, and mind maps.
Run `asyncat` with no args to open the interactive REPL. Tab completion and command history work out of the box. Type `/` or `help` to browse all commands interactively.
```
asyncat start
asyncat start --backend-only
asyncat start --frontend-only
asyncat stop
asyncat restart
asyncat status
asyncat doctor
asyncat logs [backend|frontend|all]
asyncat chat [--web] [--think]
asyncat run [model]
asyncat provider list
asyncat provider set local <file.gguf>
asyncat provider set cloud <key> [model]
asyncat provider set custom <url> <key>
asyncat models list
asyncat models pull <url>
asyncat models serve <file.gguf>
asyncat models stop
asyncat models rm <file.gguf>
asyncat sessions [n]
asyncat sessions rm <id>
asyncat sessions stats
asyncat stash [text]
asyncat stash rm <id>
asyncat watch <interval> <cmd>
asyncat bench [count] <cmd>
asyncat history [query]
asyncat alias [add|list|rm]
asyncat snippets [add|show|rm]
asyncat macros [record|play|list|rm]
asyncat recent [n]
asyncat context
asyncat db backup
asyncat db reset
asyncat config show
asyncat config get <KEY>
asyncat config set KEY=VALUE
asyncat theme [dark|hacker|ocean|minimal]
asyncat live-logs [on|off|toggle|status]
asyncat install
asyncat update
asyncat uninstall
asyncat open
asyncat version
asyncat help
asyncat exit
```

Edit `den/.env` to configure the backend. A default file is created by `asyncat install`.
| Variable | Default | Description |
|---|---|---|
| `JWT_SECRET` | — | REQUIRED — change before deploying |
| `AI_BASE_URL` | — | your AI provider URL (OpenAI, Anthropic, Ollama, etc.) |
| `AI_API_KEY` | — | API key for your AI provider |
| `AI_MODEL` | `gpt-4o` | model name (`gpt-4o`, `claude-sonnet-4-5`, `llama3.1`, etc.) |
| `PORT` | `8716` | backend HTTP port |
| `NODE_ENV` | `development` | set to `production` when deploying |
| `SOLO_MODE` | `true` | single user (`true`) or team server mode (`false`) |
| `LOCAL_MODEL_PATH` | — | path to a local GGUF model file |
| `LLAMA_SERVER_PORT` | `8765` | port for the local AI server |

Local AI is optional. Cloud providers (OpenAI, Anthropic, Azure) work without it. To run models locally, you need llama-server.
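Putting the variables above together, a minimal `den/.env` for a solo deployment might look like the sketch below. All values are placeholders, not real credentials, and the commented-out lines show the optional local-AI settings:

```shell
# den/.env — example only; replace every placeholder value with your own
JWT_SECRET=change-me-to-a-long-random-string   # REQUIRED before deploying
AI_BASE_URL=https://api.openai.com/v1          # or your Anthropic/Ollama endpoint
AI_API_KEY=sk-your-key-here
AI_MODEL=gpt-4o
PORT=8716
NODE_ENV=development
SOLO_MODE=true
# Optional: local AI via llama-server
# LOCAL_MODEL_PATH=/path/to/model.gguf
# LLAMA_SERVER_PORT=8765
```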
Install options
How it works
asyncat does not auto-start llama.cpp. Load a local model via Settings → AI → Local model in the UI. The backend spawns `llama-server` on demand at port `:8765` using an OpenAI-compatible API.
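Because the endpoint is OpenAI-compatible, a plain `curl` can exercise it once a model is loaded. This is an illustrative sketch only: the `"local"` model name is a placeholder assumption (not necessarily what asyncat registers), and the server must already be running for the request to succeed.

```shell
# Hypothetical chat request to the on-demand llama-server.
# Assumes a model has been loaded via the UI; "local" is a placeholder name.
payload='{"model":"local","messages":[{"role":"user","content":"Hello"}]}'
curl -s "http://localhost:${LLAMA_SERVER_PORT:-8765}/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d "$payload"
```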
| Port | Service | Env var |
|---|---|---|
| `:8716` | Backend (den) | `PORT` |
| `:8717` | Frontend (neko) | — |
| `:8765` | llama-server | `LLAMA_SERVER_PORT` |
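As a quick sanity check of the layout above, a small loop can probe each default port. This is only a sketch (it assumes `curl` is installed and the services answer plain HTTP on `/`); `asyncat status` and `asyncat doctor` are the supported ways to check health:

```shell
# Probe the three default service ports and print up/down for each.
for port in 8716 8717 8765; do
  if curl -sf -o /dev/null "http://localhost:${port}/"; then
    echo "port ${port}: up"
  else
    echo "port ${port}: down"
  fi
done
```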
Or manually: `cd ~/.asyncat && git pull && npm install`