asyncat docs

Get asyncat running in about two minutes.

Requirements

Node.js        v20+     runs the backend and CLI
git            any      installers use it to clone and update
npm            v8+      installs workspace deps
Python         3.10+    only for llama-cpp-python local AI
llama-server   any      for running local GGUF models

Install

👤 For humans

One-click install that creates an app shortcut (Launchpad on Mac, Start Menu on Windows, app launcher on Linux). No terminal required after install. Just click the Asyncat icon like any normal app.

Linux / macOS

curl -fsSL https://asyncat.com/install.sh | sh

Windows (PowerShell)

irm https://asyncat.com/install.ps1 | iex

⌨️ For terminal gremlins

Install asyncat as a global CLI command. You'll type asyncat start in your terminal every time.

npm (global)

npm install -g @asyncat/asyncat

Clone manually

git clone https://github.com/asyncat-oss/asyncat-oss
cd asyncat-oss && npm install

First run

$ asyncat install checks deps, creates .env, prompts for llama.cpp
$ asyncat start starts backend (:8716) and frontend (:8717)
http://localhost:8717 open in browser

The database (data/asyncat.db) is created automatically on first boot. No setup needed.
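To confirm the first boot worked, a couple of quick shell checks can help (assuming curl is installed; the port and database path are the ones above):

```shell
# Sanity checks after "asyncat start":
# frontend should answer on :8717, and the SQLite file should exist
curl -sf -o /dev/null http://localhost:8717 && echo "frontend: up" || echo "frontend: not responding"
test -f data/asyncat.db && echo "database: created" || echo "database: missing"
```

If either check fails, asyncat doctor (see the CLI reference below) is the next step.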

Features

AI Command Center

Streaming chat that reads and writes across your workspace. Build mode scaffolds entire projects.

Kanban

Drag-and-drop boards with dependencies, time tracking, and multiple views (list, gallery, Gantt, network).

Collaborative Notes

Block-based rich editor with 20+ block types, real-time cursors, version history, and chart blocks.

Calendar

Events, invites, color codes, and project linking.

Habit Tracker

XP gamification, streaks, and team leaderboards.

Study Lab

Spaced-repetition flashcards (SM-2), active recall quizzes, and mind maps.

CLI reference

Run asyncat with no args to open the interactive REPL. Tab-completion and command history work out of the box. Type / or help to browse all commands interactively.

asyncat start
asyncat start --backend-only
asyncat start --frontend-only
asyncat stop
asyncat restart
asyncat status
asyncat doctor
asyncat logs [backend|frontend|all]
asyncat chat [--web] [--think]
asyncat run [model]
asyncat provider list
asyncat provider set local <file.gguf>
asyncat provider set cloud <key> [model]
asyncat provider set custom <url> <key>
asyncat models list
asyncat models pull <url>
asyncat models serve <file.gguf>
asyncat models stop
asyncat models rm <file.gguf>
asyncat sessions [n]
asyncat sessions rm <id>
asyncat sessions stats
asyncat stash [text]
asyncat stash rm <id>
asyncat watch <interval> <cmd>
asyncat bench [count] <cmd>
asyncat history [query]
asyncat alias [add|list|rm]
asyncat snippets [add|show|rm]
asyncat macros [record|play|list|rm]
asyncat recent [n]
asyncat context
asyncat db backup
asyncat db reset
asyncat config show
asyncat config get <KEY>
asyncat config set KEY=VALUE
asyncat theme [dark|hacker|ocean|minimal]
asyncat live-logs [on|off|toggle|status]
asyncat install
asyncat update
asyncat uninstall
asyncat open
asyncat version
asyncat help
asyncat exit

Configuration

Edit den/.env to configure the backend. A default file is created by asyncat install.

JWT_SECRET          (none)        REQUIRED, change before deploying
AI_BASE_URL         (none)        your AI provider URL (OpenAI, Anthropic, Ollama, etc)
AI_API_KEY          (none)        API key for your AI provider
AI_MODEL            gpt-4o        model name (gpt-4o, claude-sonnet-4-5, llama3.1, etc)
PORT                8716          backend HTTP port
NODE_ENV            development   set to production when deploying
SOLO_MODE           true          single user (true) or team server mode (false)
LOCAL_MODEL_PATH    (none)        path to local GGUF model file
LLAMA_SERVER_PORT   8765          port for the local AI server
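As a sketch, a minimal den/.env for a cloud provider might look like this. The keys are the ones documented above; the values are illustrative placeholders, not shipped defaults:

```shell
# den/.env (example values only; replace before use)
JWT_SECRET=change-me-to-a-long-random-string
AI_BASE_URL=https://api.openai.com/v1
AI_API_KEY=sk-your-key-here
AI_MODEL=gpt-4o
PORT=8716
NODE_ENV=development
SOLO_MODE=true
```

One way to produce a strong JWT_SECRET is openssl rand -hex 32.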

Local AI (llama.cpp)

Local AI is optional. Cloud providers (OpenAI, Anthropic, Azure) work without it. To run models locally, you need llama-server.

Install options

# pip (easiest, requires Python 3.10+)
pip install "llama-cpp-python[server]"

# pre-built binary (no Python needed)
# download from github.com/ggml-org/llama.cpp/releases
# place at ~/.local/bin/llama-server and chmod +x

How it works

asyncat does not auto-start llama.cpp. Load a local model via Settings → AI → Local model in the UI. The backend spawns llama-server on demand at port :8765 using an OpenAI-compatible API.
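Once a model is loaded, you can probe the spawned server directly; /v1/models is a standard endpoint of the OpenAI-compatible API that llama-server exposes (this assumes curl is installed):

```shell
# List models served on the local AI port, or report that nothing is listening
curl -sf http://localhost:8765/v1/models || echo "no llama-server listening on :8765"
```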

Ports

:8716 Backend (den) PORT
:8717 Frontend (neko)
:8765 llama-server LLAMA_SERVER_PORT
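If a start fails with an address-in-use error, a quick check of these three ports can pinpoint the conflict (this sketch uses lsof where available; it is not part of asyncat itself):

```shell
# Report whether each default asyncat port is already taken
for port in 8716 8717 8765; do
  if command -v lsof >/dev/null 2>&1 && lsof -i ":$port" >/dev/null 2>&1; then
    echo "port $port: in use"
  else
    echo "port $port: looks free"
  fi
done
```

Alternative ports can be set via PORT and LLAMA_SERVER_PORT in den/.env.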

Updating

$ asyncat update git pull + npm install in all workspaces
$ asyncat doctor verify everything is healthy after update

Or manually: cd ~/.asyncat && git pull && npm install