Installation and Model Selection: Set Up OpenClaw the Right Way
Where you install OpenClaw determines how much it can do. The model you pick determines how often it does it right. Get both wrong and you are not running an agent — you are running an expensive autocomplete.
Two decisions determine 80 percent of your OpenClaw outcomes before you write a single prompt.
Where you install it. And which model you give it.
Get these right and everything else is tunable. Get these wrong and you are fighting the architecture from day one.
Decision One: Local Machine, Not a VPS
The most common mistake I see in OpenClaw setups is installing it on a VPS — a remote cloud server from Hostinger, DigitalOcean, Vultr, or similar.
I understand the appeal. It feels more "production." A dedicated server. Always on. Separate from your main machine.
It is the wrong call.
Here is the concrete comparison:
| | Local Machine | VPS |
|---|---|---|
| OS access | Full | Restricted |
| File access | Complete | Only what you mount |
| Local tools (git, CLI, scripts) | Native | Requires installation + config |
| Security model | Your perimeter | Shared infrastructure |
| Latency to your data | Zero | Network round-trip |
| Cost | Free | $5–20/mo |
The VPS use case only makes sense if you need the agent running while your local machine is off — and there are better solutions for that (dedicate a cheap always-on host like a Mac Mini or NUC).
My setup: a single dedicated host running 24/7. Always on. Full OS access. Everything works.
Installation: Three Minutes, One Command
Go to openclaw.ai. Copy the install command. Paste it in your terminal. Hit enter.
That is it. OpenClaw installs, configures its directory structure, and drops you into a first-run setup flow. You will be asked for your model API key and your messaging channel token.
The install creates a directory — usually ~/.openclaw/ — containing:
- `workspace/` — agent memory files (USER.md, MEMORY.md, daily logs)
- `skills/` — your Skills library
- `config/` — model config, channel tokens, environment variables
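If you ever need to verify a fresh install or rebuild the workspace after a wipe, the layout described above can be recreated by hand. This is a minimal sketch based on the directory structure listed here; the exact paths and an `OPENCLAW_HOME` override are assumptions, so check what the installer actually created on your machine.

```shell
# Recreate the skeleton OpenClaw is described as creating on first run.
# OPENCLAW_HOME is a hypothetical override; the default matches the text above.
BASE="${OPENCLAW_HOME:-$HOME/.openclaw}"
mkdir -p "$BASE/workspace" "$BASE/skills" "$BASE/config"
touch "$BASE/workspace/USER.md" "$BASE/workspace/MEMORY.md"
ls "$BASE"
```

Running it against an existing install is harmless: `mkdir -p` and `touch` leave existing directories and files untouched.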
Decision Two: Model Selection
Model selection is not a one-size-fits-all decision. It is a cost-reliability-risk tradeoff.
Here is the full spectrum:
Tier 1: Claude Opus 4.6 (Recommended)
Claude Opus 4.6 is the best model available for agent work. The task completion rate on complex, multi-step operations is near 100 percent — compared to roughly 20 percent for the nearest OpenAI equivalent. It has personality, which matters for consistent tone in outputs. It handles long context, tool use, and ambiguous instructions better than anything else.
There are two ways to use it:
OAuth (browser login): Connect your Anthropic account. Fast to set up, no cost beyond your subscription. Risk: Anthropic has terms against programmatic automation via OAuth. This may work for months without issue — or you may find your account flagged. This is a real risk. Know it.
API key: Direct API access. Billed per token. Safer long-term. At current Opus 4.6 pricing, a well-configured OpenClaw running normal workloads typically incurs a modest monthly cost that scales with throughput — verify current rates at anthropic.com/pricing. You control the ceiling.
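For the API-key route, configuration usually comes down to one environment variable. `ANTHROPIC_API_KEY` is Anthropic's standard variable name; whether OpenClaw reads it directly from the environment or from a file under `config/` is an assumption here, so confirm against its config docs.

```shell
# Supply the key via the environment rather than hardcoding it in config files.
# "sk-ant-xxxx" is a placeholder, not a real key.
export ANTHROPIC_API_KEY="sk-ant-xxxx"
echo "Key configured: ${ANTHROPIC_API_KEY:+yes}"
```

Put the `export` in your shell profile (or the agent's service environment) so it survives restarts, and keep the key out of version control.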
Tier 2: OpenAI ChatGPT (Middle Option)
OpenAI actively encourages OAuth use and will not penalize you for it. $20/month via ChatGPT Plus.
The tradeoff: task completion on complex one-shot goals drops to roughly 20 percent with GPT-4o compared to Claude. For simple, well-defined automations, it is adequate. For anything requiring multi-step reasoning or ambiguous judgment, you will spend time debugging failures.
Use this if cost is the primary constraint and your automations are simple.
Tier 3: Chinese Frontier Models (Budget)
Models like Kimi K2.5 and MiniMax are available via their respective providers at roughly $10/month. They are genuinely capable — stronger than GPT-3.5 era models — and viable for structured, well-defined tasks.
Tradeoffs: data residency (your prompts route through Chinese infrastructure), less consistent on complex reasoning, smaller context windows on some providers.
Free Tier: OpenRouter Free Models
Available but not recommended for production. Performance is unreliable, latency is high, and the task completion rate on autonomous agent work is poor. Use for experimentation only.
My Current Stack
I route by complexity. Simple classification tasks use the cheapest model. Production automations use Opus. This is the cost discipline lesson applied to model selection — you do not need the most expensive model for every token.
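Routing by complexity can be as simple as a small shell function that maps a task label to a model tier. The model IDs below and the idea of passing them to the agent per task are assumptions for illustration, not OpenClaw's actual interface.

```shell
# Map a declared task complexity to a model tier.
# Model identifiers are hypothetical examples, not verified product IDs.
route_model() {
  case "$1" in
    classify|simple) echo "cheap-tier-model" ;;   # low-stakes, high-volume work
    *)               echo "claude-opus-4.6" ;;    # default to the reliable tier
  esac
}

route_model simple
route_model production-automation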
Lesson 131 Drill
Before the next lesson:
- Install OpenClaw via the one-line command at openclaw.ai
- Decide your model stack: API key or OAuth? Which tier?
- Open `~/.openclaw/workspace/USER.md` and add a paragraph about who you are, your goals, and your top three workflow pain points
Step 3 is not optional. This file is how OpenClaw maintains context across sessions. An empty USER.md means a perpetually blank slate agent. Fill it now.
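If a blank file is what's stopping you, seed it with a starter structure. The path follows the layout described earlier; the headings are suggestions, not a required format.

```shell
# Seed USER.md with a skeleton, but only if it is still empty --
# never overwrite context the agent already has.
USER_MD="${OPENCLAW_HOME:-$HOME/.openclaw}/workspace/USER.md"
mkdir -p "$(dirname "$USER_MD")"
if [ ! -s "$USER_MD" ]; then
cat > "$USER_MD" <<'EOF'
# Who I am
(one paragraph: role, context, how you work)

# Goals
- ...

# Top three workflow pain points
1. ...
2. ...
3. ...
EOF
fi
```

The `! -s` guard makes the script safe to re-run: it writes only when the file is missing or empty.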