The quickest path from zero to a running clawq instance takes about five minutes. This guide covers building from source, configuring a provider and channel, and starting the daemon.

Prerequisites

  • opam — the OCaml package manager
  • libsqlite3-dev (Debian/Ubuntu) or equivalent for your distribution
  • An LLM API key from OpenRouter, OpenAI, or any OpenAI-compatible provider
  • A channel token — Telegram bot token (from @BotFather), Discord bot token, or Slack app token

1. Bootstrap and Build

# Clone the repository
git clone <repo-url> clawq && cd clawq

# Create opam switch "clawq-5.1" and install all dependencies
make bootstrap

# Build the project
make build

# Run tests to verify the build
make test

2. Guided Configuration

The fastest way to configure clawq is with the onboarding wizard:

clawq onboard

The wizard walks through every section — provider, model, security, channels, gateway, and memory — and writes ~/.clawq/config.json at the end. You can re-run it any time.

Alternatively, use the config wizard directly:

clawq config wizard

3. Manual Configuration

Using config set commands

Set individual values by dot-path:

clawq config set providers.0.api_key "sk-or-v1-YOUR_KEY_HERE"
clawq config set providers.0.base_url "https://openrouter.ai/api/v1"
clawq config set providers.0.model "openai/gpt-4o"
clawq config set channels.telegram.bot_token "7123456789:AAF1k_YOUR_TOKEN_HERE"
clawq config set channels.telegram.allow_from '["*"]'

Review the result:

clawq config show           # full config, secrets redacted
clawq config show channels  # one section
clawq config get providers.0.model  # single value

Editing config.json directly

$EDITOR ~/.clawq/config.json

Minimal working configuration:

{
  "providers": [
    {
      "name": "openrouter",
      "api_key": "sk-or-v1-YOUR_KEY_HERE",
      "base_url": "https://openrouter.ai/api/v1",
      "model": "openai/gpt-4o"
    }
  ],
  "channels": {
    "telegram": {
      "enabled": true,
      "bot_token": "7123456789:AAF1k_YOUR_TOKEN_HERE",
      "allow_from": ["*"]
    }
  }
}

Configuration notes:

  • allow_from: ["*"] accepts messages from any user, which is fine for testing. Before exposing the bot more widely, restrict access by listing specific chat IDs: ["123456789", "987654321"].
  • base_url: Use https://api.openai.com/v1 for OpenAI directly, or any compatible endpoint.
  • model: Default model for this provider. Can be overridden per-request.

4. Validate

Check your configuration for common issues:

clawq doctor

You should see doctor: all checks passed. If there are warnings, fix the noted issues.

Initialize workspace prompt files:

clawq workspace init

Check the full runtime status:

clawq status

5. Start the Daemon

clawq agent

You should see output like:

clawq: [INFO] clawq daemon starting (pid=12345)
clawq: [INFO] Starting Telegram polling for account 'main'
clawq: [INFO] Daemon ready. Gateway on 127.0.0.1:13451

The daemon runs in the foreground. Press Ctrl+C to stop it. To keep it running after the terminal closes, detach it with nohup (or use a process supervisor such as systemd):

nohup clawq agent > clawq-agent.log 2>&1 &

6. Verify

Test the HTTP gateway:

curl http://127.0.0.1:13451/health
# {"status":"ok"}

If you configured Telegram, open Telegram and message your bot. Built-in commands:

  Command   Action
  /start    Welcome message
  /help     Show available commands
  /new      Reset conversation history
The bot maintains conversation history per chat, so follow-up questions work naturally.

Troubleshooting

Bot does not respond:

  • Check the daemon is running and showing Starting Telegram polling
  • Verify your bot token with clawq doctor
  • Check that allow_from includes your chat ID (or is ["*"])

“No providers configured” warning:

  • Ensure providers is set in ~/.clawq/config.json with a valid API key

LLM API errors:

  • Verify your API key is valid and has credits
  • Try a different model (e.g., openai/gpt-3.5-turbo for lower cost)
  • Check that base_url matches your provider

Permission denied / config not found:

  • Run clawq onboard to create the config directory
  • Check ~/.clawq/config.json exists and is readable