How to Install OpenClaw: A Complete Guide

Installing OpenClaw often fails because people overlook dependencies or underprovision memory. This guide explains how to install OpenClaw while avoiding those common issues and gives a concise hardware and OS checklist, exact commands to validate your environment, and clear steps for the one-line installer, Docker Compose deployment, or a manual pnpm-based flow. Follow the quick checks first to reduce surprises and speed up setup.

Quick summary

  • System checks first: Validate CPU and RAM (2-4GB recommended; 16GB+ for local LLMs), run basic environment checks, and open the gateway port in your firewall before installing.
  • Use the official installer: The one-liner is the fastest supported path; it sets up Node 22+ and sensible defaults, but review the installer script in your browser before piping it to a shell.
  • Manual install option: Choose pnpm or npm when you need custom Node flags or to inspect the repository, then clone, install dependencies, and set up a systemd or launchd unit for an always-on service.
  • Container deployments: Docker Compose offers isolation and predictable environments; use the repo’s compose files or convenience scripts for containerized installs.
  • Verify and remediate: Confirm installation, check logs or containers, and follow the troubleshooting checklist or request Secure Installation support for production hardening.

System requirements and quick prep

Start by checking hardware, OS, and network settings to avoid common failures caused by missing dependencies or low memory. The checklist below prepares you for the one-liner installer or the Docker path and shows the commands to confirm your environment before you begin. Running these checks up front cuts the time you spend troubleshooting mid-install.

Minimum requirements are modest but matter for stability: 1 vCPU, 1GB RAM, and roughly 500MB storage for testing only. For reliable use, aim for 2 vCPU, 2-4GB RAM, and 2-10GB storage; running local LLMs requires 16GB+ RAM and a CUDA-capable GPU. Supported platforms include Ubuntu 22.04+ (24.04 recommended), macOS on Apple Silicon, and Windows via WSL2 for a native-like Linux experience.

Run these commands to verify the basics: node -v, free -h, and df -h. Confirm dependencies with docker --version, docker compose version, pnpm -v, and git --version. Check firewall status with ufw status or firewall-cmd --state and open the gateway port if you plan remote access (default UI is at http://127.0.0.1:18789/).
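The checks above can be wrapped in a small preflight script. This is a minimal sketch, not an official tool: the `need` and `ram_tier` helpers are illustrative names, and the RAM thresholds mirror the tiers described in this guide.

```shell
#!/usr/bin/env bash
# Preflight sketch: confirm required tools exist and classify total RAM
# against the tiers above (1GB testing, 2-4GB recommended, 16GB+ local LLMs).

need() { command -v "$1" >/dev/null 2>&1 && echo "ok: $1" || echo "MISSING: $1"; }

# Map total RAM in MB to a tier label.
ram_tier() {
  local mb=$1
  if   [ "$mb" -ge 16384 ]; then echo "local-llm capable"
  elif [ "$mb" -ge 2048 ];  then echo "recommended"
  elif [ "$mb" -ge 1024 ];  then echo "testing only"
  else                           echo "below minimum"
  fi
}

for tool in git node pnpm docker; do need "$tool"; done

# free(1) is Linux-only; skip the RAM check where it is unavailable.
if command -v free >/dev/null 2>&1; then
  echo "RAM: $(ram_tier "$(free -m | awk '/^Mem:/{print $2}')")"
fi
```

Run it before picking an install path; a `MISSING` line tells you which dependency to install first.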

With those checks done, choose an installation path: the one-liner installer, Docker Compose, or a manual pnpm-based setup. The sections below walk through each option and explain the flags and outcomes so you know what changes on your host.

Official installer (fast path)

The official installer is the fastest supported way to get OpenClaw running with sensible defaults and an onboarding flow. Review the installer script in your browser before piping it to a shell, and avoid running scripts as root unless you have inspected them and trust the source. For most users the one-line script reduces setup steps and config errors. See the installer documentation for the latest flags and behavior.

Run the installer on Linux or macOS with: curl -fsSL https://openclaw.ai/install.sh | bash. For CI or non-interactive installs download and run with flags: curl -fsSL https://openclaw.ai/install.sh -o install.sh && bash install.sh --no-onboard --install-daemon.

Use --no-onboard to skip interactive prompts and configure via environment variables or later manual setup. Use --install-daemon to register a system service (systemd) so OpenClaw runs on boot; without it the installer leaves a user-mode binary you manage with your own process manager.

Windows users can run the PowerShell installer: iwr -useb https://openclaw.ai/install.ps1 | iex, and run Set-ExecutionPolicy RemoteSigned first if PowerShell blocks script execution. For a Linux-like environment on Windows, use WSL2 to avoid common path and permission differences. If the native path proves difficult, the Docker image is a straightforward fallback.

Manual install and always-on service

Choose a manual install when you need custom Node flags, a different install prefix, or want to inspect the repository before running code. The steps below show how to clone the source, install dependencies, and run the gateway locally for debugging and development. Manual installs give you direct control over runtime options and upgrades.

Start by cloning the source: git clone https://github.com/openclaw/openclaw.git, change into the directory, and switch to the branch you want. Install dependencies with pnpm install or npm install. The CLI keeps runtime files under your home directory by default (for example ~/.openclaw), and you can run the onboarding wizard manually with openclaw onboard.

If you plan to run local LLMs alongside a manual install, remember the hardware note above: 16GB+ RAM and a CUDA-capable GPU.

To run OpenClaw continuously on Linux, create a systemd unit that runs under your user and points at the install directory. Set User=yourusername, WorkingDirectory=/home/yourusername/.openclaw, and an ExecStart that launches your start script or the CLI. After adding the unit file run sudo systemctl daemon-reload and sudo systemctl enable --now openclaw, then inspect logs with journalctl -u openclaw -f. If you want an example systemd layout to model, several community guides show recommended fields and restart settings.
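As a concrete model, here is a minimal unit sketch. The user name, directories, and the ExecStart command are placeholders to replace with your actual install paths and start script:

```ini
# /etc/systemd/system/openclaw.service (sketch; adjust paths, user, and command)
[Unit]
Description=OpenClaw gateway
After=network-online.target
Wants=network-online.target

[Service]
User=yourusername
WorkingDirectory=/home/yourusername/.openclaw
# Placeholder: point this at your start script or the installed CLI binary.
ExecStart=/home/yourusername/.local/bin/openclaw
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

After saving the file, run sudo systemctl daemon-reload and sudo systemctl enable --now openclaw as described above.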

On macOS use a launchd plist or brew services to achieve always-on behavior, placing environment keys in the plist or a wrapper script that exports them. For step-by-step macOS-specific notes see the set up and run OpenClaw on Mac guide. Tail logs with the Console app or system log commands to verify startup events. Next, apply production hardening and TLS options described later so your gateway is persistent and secure.
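A launchd agent sketch for comparison; the label, binary path, and the OPENCLAW_HOME variable name are illustrative placeholders, not values from the project:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- ~/Library/LaunchAgents/ai.openclaw.gateway.plist (sketch) -->
  <key>Label</key><string>ai.openclaw.gateway</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/openclaw</string>
  </array>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><true/>
  <key>EnvironmentVariables</key>
  <dict>
    <key>OPENCLAW_HOME</key><string>/Users/yourusername/.openclaw</string>
  </dict>
  <key>StandardOutPath</key><string>/tmp/openclaw.log</string>
  <key>StandardErrorPath</key><string>/tmp/openclaw.err</string>
</dict>
</plist>
```

Load it with launchctl load ~/Library/LaunchAgents/ai.openclaw.gateway.plist, then confirm startup in the Console app.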

Docker and containerized deployments

Containers simplify deployment by isolating the gateway from host differences and making rollbacks predictable. The repository includes a convenience script and a Docker Compose layout that gets you running in minutes across hosts.

Minimal flow: clone the repo and run the setup script, or build and run with Docker directly. Example: git clone https://github.com/openclaw/openclaw.git && cd openclaw && ./docker-setup.sh, or from the repository root, docker build -t openclaw:local -f Dockerfile . && docker compose up -d openclaw-gateway. For a hosted walkthrough and alternate hosting suggestions see Hostinger’s setup guide.

The setup script writes the gateway token into your local environment so the gateway can bootstrap; by default the token lands in the project .env or in ~/.openclaw/workspace when you opt for a user workspace. Check runtime status with docker compose logs -f openclaw-gateway and docker compose ps. For safety, store API keys and secrets in an external env file or a secrets manager and never commit them to the repo.
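One way to keep secrets outside the repo is an external env file referenced from a Compose override. This is a sketch: the file path and the loopback port binding are suggestions, and the service name matches the one used in the commands above.

```yaml
# docker-compose.override.yml (sketch)
services:
  openclaw-gateway:
    env_file:
      - /etc/openclaw/secrets.env   # kept outside the repo, mode 0600
    ports:
      - "127.0.0.1:18789:18789"     # bind to loopback unless a reverse proxy fronts it
```

With this layout, rotating a key means editing one file outside version control and restarting the container.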

Some host platforms impose networking or sandbox limits, so confirm your provider supports the required port mappings and volume mounts. Local LLM providers such as Ollama can be used as custom endpoints by pointing OpenClaw to them during onboarding. If you need a tailored compose file or platform-specific guidance before production, request targeted assistance through the OpenClaw Install (VPS or local computer) service.

Verify, troubleshoot and next steps

After installation run a short verification sweep before declaring the gateway healthy. Check runtime and onboarding status, then examine logs: for container deployments check docker compose ps and follow live logs with docker compose logs -f openclaw-gateway; for systemd installs tail journalctl -u openclaw -f to watch startup events.

Open the gateway at http://127.0.0.1:18789/ and copy the gateway token from your workspace file into the UI to finish registration. Verify file permissions so the service user can read the workspace and token, and confirm no other process is using port 18789 or 80/443. Test any API keys (OpenAI or Anthropic) with a minimal API call to confirm format and quota. If you are integrating with Google Cloud or similar services, review a focused walkthrough such as the DigitalOcean tutorial on connecting Google to OpenClaw for best practices.

Quick checks catch most problems and speed up diagnosis. Use this checklist to triage fast:

  • Permissions: service user can read the workspace and token files
  • Ports: 18789 and any proxy ports are free and bound to the gateway
  • API keys: test with a minimal API call to confirm quota and format
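The first two checklist items can be scripted. A minimal sketch, assuming the default port; the token path is a placeholder to substitute with your workspace's actual token file:

```shell
#!/usr/bin/env bash
# Triage helpers (sketch): token readability and gateway port state.

# A usable token file exists, is readable, and is non-empty.
token_ok() {
  [ -r "$1" ] && [ -s "$1" ]
}

# Is something listening on the given port? (Linux; requires ss.)
port_bound() {
  ss -ltn 2>/dev/null | grep -q ":$1 "
}

# Substitute the real token path from your workspace here.
token_ok ~/.openclaw/workspace/token \
  && echo "token: readable" || echo "token: missing or unreadable"
port_bound 18789 && echo "port 18789: bound" || echo "port 18789: free"
```

A "free" result is expected before the gateway starts and a problem afterward; "bound" before you start it means another process holds the port.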

Port conflicts require freeing or remapping the port used by the gateway. If onboarding fails, rerun openclaw onboard; if a token is broken, rotate it before collecting diagnostics.

When a problem persists, collect diagnostic data before escalation: openclaw status, the last 200 lines from journalctl, or the relevant docker logs, plus the exact commands you ran. Save logs and command history so specialist troubleshooting is faster. Attach these files when you contact support to reduce back-and-forth.

For production hardening enable TLS via a reverse proxy, restrict gateway access behind a firewall, enforce RBAC or SSO where available, and configure automated backups and log rotation. If you prefer a hands-off rollout, the About OpenClaw911 page describes services that include installs, firewall tuning, and system hardening under an SLA, with expert-level support and a 24-48 hour turnaround. Collect the verification output described above before requesting help to speed the engagement.
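For the TLS step, a reverse-proxy sketch in nginx; the domain and certificate paths are placeholders (the cert paths assume a Let's Encrypt layout), and only the upstream port comes from this guide:

```nginx
# /etc/nginx/sites-available/openclaw (sketch; substitute your domain and cert paths)
server {
    listen 443 ssl;
    server_name openclaw.example.com;

    ssl_certificate     /etc/letsencrypt/live/openclaw.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/openclaw.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:18789;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Pair this with a firewall rule that blocks direct access to 18789 so all traffic passes through the proxy.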

How to install OpenClaw: next steps

This guide covers how to install OpenClaw using the official installer, Docker Compose, or a manual pnpm setup. Start by confirming your environment meets the memory and dependency checklist, then pick the installer path that matches your needs. For quick access to company resources and updates, visit the OpenClaw911.com home page.
