I Spent the Weekend Wiring Up a Multi-Agent Setup. Here’s What Actually Happened.

I now run multiple AI agents. Not because I planned to. Because one couldn’t do everything I needed.

This weekend I got two agents authenticated with different providers — one on OpenAI OAuth, one on Claude OAuth — both running through OpenClaw. A third is on the drawing board but I’m holding off until the use case is clear.

Why multiple agents?

Different jobs need different brains. Some tasks need fast, cheap responses. Others need deep reasoning. Running everything through one model is like hiring one person to do accounting, copywriting, and IT support.

OpenClaw lets you spin up separate agents, each with their own model, credentials, and messaging channels. One talks to me on Telegram. The other handles different workflows. They don’t share context or interfere with each other.

What is OpenClaw, quickly

Open-source framework that turns AI models into personal assistants you talk to through WhatsApp, Telegram, Slack, or a web interface. Runs on your own hardware. You pick the model, the chat app, and the tools. Like building your own ChatGPT, except you own the whole thing.

Agent #1: OpenAI OAuth on a VPS

This agent runs on a Hostinger VPS (KVM 2 — basic but enough). Previously it was on Gemini 2.0 Flash, which worked but the API costs were adding up. Switching to OpenAI OAuth means using my existing subscription instead of pay-as-you-go. Real savings when you’re running agents around the clock.

The setup was not straightforward. Hostinger’s one-click OpenClaw deployment injects API keys through Docker environment variables. That’s actually a good security pattern — keys stay at the Docker layer, never inside OpenClaw’s config files. But it created a problem I didn’t expect.
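The pattern looks roughly like this. This is a sketch, not Hostinger's actual template: the variable names, image name, and model string are my illustrative guesses.

```shell
# Hypothetical sketch of Docker-layer key injection. Names are
# illustrative; Hostinger's real one-click template will differ.
docker run -d \
  --name openclaw \
  -e OPENAI_API_KEY="sk-..." \
  -e OPENCLAW_MODEL="gemini-2.0-flash" \
  openclaw/openclaw:latest
```

The key exists only in the container's environment, so nothing sensitive sits in a config file on disk. Good pattern, awkward interaction with OAuth, as it turned out.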

I ran the OpenAI OAuth login inside the container using openclaw config. The OAuth flow itself worked fine — it’s PKCE-based and headless-friendly (you paste the redirect URL manually since there’s no browser on a VPS). Credentials landed. But the agent kept using the old model from the Docker env vars. It was ignoring my new OAuth tokens entirely.
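The headless flow, from memory (exact subcommand names and prompts may differ by OpenClaw version; treat this as a sketch):

```shell
# Run the interactive config inside the container.
docker exec -it openclaw openclaw config

# 1. Select the OpenAI OAuth provider; it prints an authorization URL.
# 2. Open that URL in a browser on your laptop and sign in.
# 3. The redirect page fails to load (nothing listens on the VPS) --
#    copy the full redirect URL from the browser's address bar.
# 4. Paste it back into the prompt. PKCE verification completes and
#    the tokens are written inside the container.
```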

Turns out OpenClaw has a credential priority order. Environment variables from Docker beat the config file credentials. The agent was finding the API key in the Docker env and never looking further. I had to dig into the container’s config, understand how auth-profiles.json works, and figure out how to get OpenClaw to prefer the OAuth profile over the injected env vars.
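A toy illustration of the precedence problem. I'm assuming OpenClaw's lookup order here (env first, config file second) based on the behavior I saw, and the variable names are made up; but the mechanism is just shell-style defaulting:

```shell
# Toy demo of env-beats-config credential resolution. Names are
# hypothetical; only the precedence behavior is the point.
CONFIG_FILE_KEY="oauth-profile-token"   # what I wanted it to use
OPENAI_API_KEY="sk-from-docker-env"     # what Docker injected

# Assumed lookup order: take the env var if set, else the config value.
resolved="${OPENAI_API_KEY:-$CONFIG_FILE_KEY}"
echo "$resolved"   # prints sk-from-docker-env, not the OAuth token
```

As long as the Docker env var is set, the OAuth profile never gets a look-in. The fix is to stop injecting the key (or tell OpenClaw to prefer the OAuth profile), not to fight the precedence.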

Not hard once you know what’s happening. But “not hard once you know” is the story of every infra problem ever.

Agent #2: Claude OAuth on a Mac Mini

Different machine, different provider. The Claude auth flow is simpler — generate a token with claude setup-token, paste it into OpenClaw, done.
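The whole flow, sketched (setup-token is Claude Code's command; how you paste the result into OpenClaw depends on your version):

```shell
# Generate a long-lived token via Claude Code's built-in helper.
claude setup-token

# Copy the printed token into OpenClaw's auth config for this agent.
# No redirect URLs, no PKCE dance -- one command and a paste.
```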

The catch: Anthropic updated their terms this year. Using subscription credentials outside Claude Code is a grey area. They’ve said they won’t cancel accounts for personal use, but commercial use should go through API keys. I’m comfortable with personal automation. Read the terms yourself and decide.

Other gotcha: if Claude Code runs on the same machine, one can get logged out when the other refreshes its token. Safest setup is separate agents on separate machines — which is what I ended up with anyway.

The agent I didn’t build

I was considering a third agent using Claude Code channels — one that can edit repos, run scripts, manage files. But I don’t have a clear use case yet. Building infrastructure for a problem you don’t have is how side projects die. When the need shows up, I’ll build it.

What I’d tell someone starting from zero

One agent, one provider, one channel. Live with it for a week. You’ll discover what’s missing fast.

A cheap VPS works. Most compute happens on the provider’s side. Your server is just the relay.

OAuth beats API keys for personal use. Subscription rates instead of per-token billing. But read the terms — providers are still sorting out their policies.

Keep agents separate. Different machines or at least different profiles. Credential conflicts are real and annoying.

Credit where it’s due

A lot of my thinking here comes from Matthew Berman’s video on OpenClaw best practices — 14 use cases covering model routing, threaded chats, cron jobs, and logging. If you’re running OpenClaw or thinking about it, it’s the best single resource I’ve found.

His model routing approach — different models for different task types instead of pointing everything at one expensive frontier model — is what pushed me to split agents across providers. Worth a watch.

Writing about AI without sounding like AI

There’s a Wikipedia page called “Signs of AI writing” — a field guide Wikipedia editors built to spot AI-generated content. Overuse of em dashes, the “not only X, but Y” construction, hollow superlatives, excessive bolding. I’ve been using it as a self-editing checklist.

I use AI to help draft these posts. But I rewrite hard and try to make sure it sounds like me, not a language model performing enthusiasm. If you write with AI tools, bookmark that page.
