Setting Up OpenClaw on a Homelab: What I Expected vs. Reality
I’ve been running a homelab for years. Proxmox, Portainer, a Debian CT or two, SWAG as a reverse proxy — the whole stack. By now, adding a new self-hosted service is pretty routine: pull a Docker image, set a few environment variables, add a local DNS entry, wire up nginx, done. Half an hour at most.
So when I decided to try OpenClaw — a self-hosted AI assistant gateway that connects to your messaging apps — I figured it’d be a weekend afternoon job. Reader, it was not.
What OpenClaw Actually Is
OpenClaw is a self-hosted gateway that connects AI models (Claude, GPT, etc.) to messaging platforms like Telegram and Signal. You run it on your own infrastructure, configure it with API keys and provider settings, and suddenly you have an AI assistant that lives in your messaging app, with access to your files, calendar, home automation, and whatever else you wire up.
The pitch is compelling. The setup is… a journey.
Step 1: There’s No Pre-Built Docker Image
This is the first thing that’ll catch you off guard if you’re used to pulling from Docker Hub. OpenClaw doesn’t publish a ready-made image — you have to build it yourself from the source repo.
So the flow is:
- Clone the repo
- Build the Docker image locally
- Push it to your own Docker Hub account (or private registry)
- Pull it from Portainer like a normal image
Not a dealbreaker, but it adds friction compared to the typical homelab workflow. And if you want to stay up to date with new releases, you’ll need to repeat the build-and-push cycle.
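The build-and-push cycle is easy to script so updates become a one-liner. A minimal sketch; the repo URL, image name, and registry account are placeholders you'd swap for your own:

```shell
#!/usr/bin/env sh
# Rebuild and push the OpenClaw image.
# The repo URL and "youruser" registry account are placeholders.
set -eu

REPO_DIR="$HOME/src/openclaw"
IMAGE="youruser/openclaw:latest"

# Clone on first run, fast-forward pull on subsequent runs
if [ -d "$REPO_DIR" ]; then
  git -C "$REPO_DIR" pull --ff-only
else
  git clone https://example.com/openclaw/openclaw.git "$REPO_DIR"
fi

docker build -t "$IMAGE" "$REPO_DIR"
docker push "$IMAGE"
```

Run it whenever a new release lands, then re-pull the image from Portainer as usual.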
Step 2: The Config Chicken-and-Egg Problem
Here’s the fun part: OpenClaw needs to be configured before it’s really usable, but the onboarding flow requires you to exec into a running container — and it won’t start cleanly without some config.
The solution: add --allow-unconfigured to the command in your Portainer stack definition. This lets the container start up without a complete configuration so you can connect to it and run the onboarding wizard.
Here’s the relevant snippet from my Portainer stack:
command:
  - "node"
  - "dist/index.js"
  - "gateway"
  - "--bind"
  - "${OPENCLAW_GATEWAY_BIND}"
  - "--port"
  - "18789"
  - "--allow-unconfigured"
  # Swap in for config repair:
  # - "openclaw"
  # - "doctor"
  # - "--fix"
Once the container is running, connect to it via the Portainer UI: click the console icon next to the container — no command needed, it drops you right in. Run the onboarding from there. After you’re configured, remove --allow-unconfigured and redeploy the stack.
This flag isn’t front-and-center in the docs, so if you’re stuck staring at a container that won’t start cleanly, that’s your escape hatch.
One more trick worth knowing: if your config gets into a broken state, OpenClaw has a built-in openclaw doctor --fix command that can repair it. Those commented-out lines in the stack above are a reminder — swap them in if things go sideways.
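If you do need the repair tool, the swap in the stack definition would look something like this (matching the commented reminder above), followed by a redeploy:

```yaml
command:
  - "openclaw"
  - "doctor"
  - "--fix"
```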
Step 3: The Browser Node
OpenClaw supports “nodes” — additional agents that extend its capabilities. One of them is a browser node that gives the AI assistant access to a real browser for web automation.
Setting this up on my laptop was its own mini-odyssey, and the culprit turned out to be networking rather than config.
The issue: my gateway runs on a different machine (in Proxmox), and the browser node on my laptop needs to talk back to it. OpenClaw supports two options here:
- Plain HTTP — works, but obviously not great for security
- SSH tunnel — the right approach, but requires explicit setup
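For the SSH tunnel route, a local port forward from the laptop to the gateway machine is the usual shape. A sketch, where user@gateway-host is a placeholder for your Proxmox guest and 18789 is the gateway port from the stack snippet earlier:

```shell
# Forward local port 18789 to the gateway machine, so the browser
# node can talk to "localhost:18789" over an encrypted tunnel.
# user@gateway-host is a placeholder; -N means "run no remote command".
ssh -N -L 18789:localhost:18789 user@gateway-host
```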
I had all the environment variables set correctly, but the connection kept failing. The missing piece was the environment variable OPENCLAW_ALLOW_INSECURE_PRIVATE_WS. Once that was in place, everything clicked. It's only a stopgap until I have an SSH tunnel set up, though.
If you’re running the gateway on a separate machine from your node, don’t skip the networking step — it’s easy to assume “same LAN = it just works” and lose an hour troubleshooting the wrong thing.
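In a Portainer/Compose stack, that flag goes in the service's environment section. A sketch; I'm assuming a truthy value of 1 here, so check the docs for the exact format:

```yaml
environment:
  # Allow plain-WebSocket connections from private addresses.
  # Temporary measure until the SSH tunnel is in place.
  - OPENCLAW_ALLOW_INSECURE_PRIVATE_WS=1
```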
Step 4: The Cost Reality
OpenClaw connects to AI providers via their APIs. I started with Anthropic (Claude) on a monthly plan. I hit the limit faster than expected.
Topping up with API credits is where it gets eye-opening: you can easily spend €100/day if you’re not careful with agentic loops and frequent polling. For comparison, a heartbeat-style setup that checks in periodically and handles your messages is much cheaper than one that’s constantly spinning — but it still adds up.
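To get a feel for why polling frequency dominates the bill, here's a back-of-envelope estimator. The per-token prices are illustrative assumptions, not current rates; check your provider's pricing page:

```python
# Rough daily API cost estimator. Prices are made up for illustration
# (USD per million tokens); real rates change, so don't trust these.
PRICE_PER_MTOK = {
    "haiku": (0.80, 4.00),   # (input, output)
    "sonnet": (3.00, 15.00),
}

def daily_cost(model: str, calls_per_day: int,
               in_tokens: int, out_tokens: int) -> float:
    """Cost in USD for one day of calls at the assumed prices."""
    p_in, p_out = PRICE_PER_MTOK[model]
    return calls_per_day * (in_tokens * p_in + out_tokens * p_out) / 1e6

# A heartbeat every 30 minutes vs. an agent polling every minute:
heartbeat = daily_cost("haiku", 48, 2_000, 500)
polling = daily_cost("sonnet", 1_440, 2_000, 500)
print(f"heartbeat: ${heartbeat:.2f}/day, polling: ${polling:.2f}/day")
```

With these made-up numbers, the constant poller on a bigger model costs over 100× the heartbeat setup, which matches my experience of burning through credits faster than expected.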
My current approach:
- Haiku (Claude’s cheapest model) as the default for routine tasks
- Sonnet only when I need more reasoning power
- Switching to the OpenAI API to explore whether their subscription model offers better value
The model routing in OpenClaw is genuinely useful here — you can configure different models for different tasks and keep costs in check.
What I’d Tell Past Me
- Expect to build the image yourself. Set up a simple CI step or script to make rebuilds easy.
- Use --allow-unconfigured to bootstrap. Don’t bang your head against a non-starting container.
- Set a spend cap in your AI provider’s console immediately. Seriously. Do it before you do anything else.
- The browser node is worth the setup pain if you want the full experience.
- It’s not plug-and-play, but it’s not black magic either. If you’ve set up Vaultwarden, Paperless, or Home Assistant from scratch, you have the skills for this.
Is It Worth It?
Yeah, I think so — but with calibrated expectations. This isn’t your typical docker pull project. It’s closer to setting up something like n8n or Home Assistant for the first time: there’s real power here, but you have to invest some time to unlock it.
The promise of a personal AI assistant that lives in Telegram, has access to my homelab, and can actually do things (not just answer questions) is worth the setup friction. I’m still in the middle of configuring it, but the foundation is solid.
More posts to come as I keep building this out.
Running OpenClaw on: Proxmox → Debian CT → Portainer → Docker. Gateway + Telegram integration live. Browser node on laptop. Model: Claude Haiku (default) / Sonnet (on demand).