
I've Been Running Both OpenClaw and Hermes Agent — I Think You Should Too

By Raja Patnaik

I’ve been on OpenClaw since it blew up in late January. Like a lot of people, I got pulled in fast — Peter Steinberger built something genuinely special. A local-first AI agent that lives in your messaging apps, runs shell commands, does browser automation, and has an ecosystem that’s grown to 5,700+ community skills and 247,000+ developers in a matter of months. It’s one of the fastest-growing open-source projects in history for a reason. I still use it daily and I don’t plan to stop.

But a few weeks ago I started poking around Hermes Agent from Nous Research, and it scratches a completely different itch. Where OpenClaw is incredible at breadth — integrations, community skills, platform coverage — Hermes is doing something I haven’t really seen elsewhere. It’s built around a learning loop. The agent creates its own skills from experience, improves them over time, remembers things across sessions without being told to, and gradually builds a model of how I work. I recently highlighted their GEPA integration, which I think is fantastic. Big shout-out to @LakshyAAAgrawal for his incredible work.

I’ve been running both side by side, and I think that’s actually the right move for most people. They solve different problems. Here’s the full breakdown on Hermes — what it is, how it’s different from OpenClaw, and how to get it running.


Two Philosophies, Both Good

OpenClaw and Hermes aren’t really competing. They have different ideas about what an agent should be.

OpenClaw treats the agent as a system to be orchestrated. You set up integrations, install skills from ClawHub, configure workflows, and the agent executes them reliably. The community has done incredible work here — there are skills for basically anything you can think of, the docs are mature, and the platform coverage is unmatched. If I need a well-defined automation that works right now, OpenClaw is where I go.

Hermes treats the agent as something that develops over time. The core idea is a closed learning loop: after finishing complex tasks, the agent reflects on the steps it took, identifies what’s reusable, and writes a skill file automatically. Those skills get loaded in future sessions and self-improve through use. On top of that, it has a persistent memory system that tracks your preferences, projects, and working patterns across sessions.

In OpenClaw, skills are things you write and curate. In Hermes, skills emerge from use and refine themselves. Both approaches are valid — they just optimize for different things. OpenClaw gives you power right away through its ecosystem. Hermes compounds over time the more you use it.

The practical upshot of running both: I use OpenClaw for tasks where there’s already a great community skill, and Hermes for recurring personal workflows where I want the agent to learn my patterns and get better at them without me manually maintaining skill files.

Worth noting — if you’re already on OpenClaw, Hermes has a built-in migration tool that can pull over your persona, memory, skills, configs, and API keys:

hermes claw migrate --dry-run    # preview first
hermes claw migrate              # do it for real

You don’t have to choose one or the other. I didn’t.


Installing Hermes

The install experience is really clean: a single curl command, no prerequisites beyond git. The script installs Python 3.11, Node.js, ripgrep, and ffmpeg, clones the repo, creates a virtual environment, and sets up the global hermes command.

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash

It runs on Linux, macOS, and WSL2. There’s no native Windows build, so Windows users should go through WSL2.

After it finishes:

source ~/.bashrc
hermes

I had it running in under five minutes. Genuinely painless compared to some of the agent frameworks I’ve wrestled with.

If you want more control, there’s a manual path:

git clone --recurse-submodules https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e ".[all,dev]"

Go this route if you want to contribute to the project or set up the RL training pipeline (tinker-atropos submodule — a rabbit hole for another day).


Configuration: Pick Your Model, Pick Your Tools

First thing after install, run the setup wizard:

hermes setup

It walks you through model selection, tools, and preferences. Or do it piecemeal — honestly, this is how I ended up doing it after the first run:

hermes model — pick your LLM provider. One of my favorite things about Hermes is the flexibility here. Nous Portal, OpenRouter (200+ models), OpenAI, z.ai/GLM, Kimi/Moonshot, MiniMax, or any custom endpoint. I mostly use OpenRouter so I can bounce between models depending on the task. You can swap mid-session too, with /model [provider:model].

hermes tools — toggle the 40+ built-in tools on and off. Web browsing, code execution, file ops, image generation, more. I keep most on but disable the ones I never touch to keep context cleaner.

hermes config set — fine-tune individual values. Config lives in ~/.hermes/config.yaml.

hermes doctor — diagnostics. Run it when something breaks. It catches most common issues.


The Skills System (This Is the Good Part)

This is the thing that keeps pulling me back to Hermes.

When Hermes finishes a complex multi-step task, it doesn’t just move on. It looks at the sequence of actions, figures out which parts are generalizable, and writes a Markdown skill file to ~/.hermes/skills/. Next time something similar comes up, it loads that skill automatically.

The skills aren’t frozen either. They self-improve. As Hermes hits edge cases and variations, it refines the file — adds error handling, broadens input ranges, tightens the steps. I had it set up a deployment pipeline for a side project, and by the third similar request the skill it built was noticeably tighter than where it started. Caught an edge case that tripped it up the first time.
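To make the create-then-refine loop concrete, here’s a minimal sketch in Python. This is not Hermes’s actual implementation — the Markdown layout, the function names, and the refine step are all my illustrative assumptions; the only detail taken from the docs is that skills live as Markdown files under ~/.hermes/skills/ (the sketch uses a temp dir so it stays runnable):

```python
# Illustrative sketch of a create-then-refine skill loop.
# NOT Hermes's real implementation: the Markdown layout and the
# refine step are assumptions; only the ~/.hermes/skills/ location
# comes from the docs. A temp dir stands in for that path here.
import tempfile
from pathlib import Path

def write_skill(skills_dir: Path, name: str, steps: list[str]) -> Path:
    """Persist a reusable skill as a Markdown file."""
    path = skills_dir / f"{name}.md"
    body = f"# Skill: {name}\n\n## Steps\n" + "\n".join(
        f"{i}. {s}" for i, s in enumerate(steps, 1)
    )
    path.write_text(body + "\n")
    return path

def refine_skill(path: Path, lesson: str) -> None:
    """Fold a newly-hit edge case back into the skill file."""
    path.write_text(path.read_text() + f"\n## Edge cases\n- {lesson}\n")

skills_dir = Path(tempfile.mkdtemp())  # stand-in for ~/.hermes/skills/
skill = write_skill(skills_dir, "deploy-pipeline",
                    ["run tests", "build image", "push to registry"])
refine_skill(skill, "registry login can expire mid-push; re-auth and retry")
print(skill.read_text())
```

The point of the sketch is the shape of the loop: the skill is a plain text artifact, so "self-improvement" is just the agent editing its own file after hitting a variation it didn’t handle.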

This is different from OpenClaw’s approach, and I want to be clear — OpenClaw’s model works great too. ClawHub has thousands of well-maintained community skills. If someone’s already solved the problem, why not use their solution? But for the weird, personal, project-specific workflows that nobody else has written a skill for, Hermes generating and refining them on its own is a real unlock.

There’s a community side to Hermes skills too. The format follows the agentskills.io open standard:

hermes skills search [term]
hermes skills install [skill-path]

You can pull from the Skills Hub at agentskills.io, and it even supports installing skills from ClawHub.


Memory: The Part Nobody Talks About Enough

The skills get the attention, but the memory system is what makes Hermes feel different after a few weeks of use.

Three layers.

First: agent-curated memory. Hermes periodically nudges itself to persist things that matter — project structures, naming conventions, API patterns, preferences I’ve mentioned in passing. It’s not logging everything blindly. It’s making decisions about what’s worth keeping. Subtle, but it compounds fast.

Second: cross-session search. Past sessions go into SQLite with FTS5 full-text search. When Hermes needs something from a previous conversation — even weeks back — it searches the history and uses LLM summarization to surface the relevant parts. I asked about a config decision I’d made two weeks earlier and it pulled it up instantly. That kind of continuity changes how you interact with the thing.
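The mechanics under this layer are ordinary SQLite. As a rough sketch of how cross-session recall could work (the table schema is my assumption and the summarization step is omitted; the FTS5 usage itself is standard sqlite3):

```python
# Toy sketch of cross-session recall over an FTS5 index.
# The schema is an assumption; Hermes's real tables will differ.
# Requires a SQLite build with the FTS5 extension, which is the
# default on most modern Python installs.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE sessions USING fts5(started, content)")
db.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [
        ("2025-01-12", "decided to keep the staging config in config.yaml"),
        ("2025-01-19", "refactored the deploy script to use uv"),
        ("2025-01-26", "agreed commit messages should stay short"),
    ],
)
# MATCH does the full-text search; bm25() ranks the hits by relevance.
rows = db.execute(
    "SELECT started, content FROM sessions "
    "WHERE sessions MATCH ? ORDER BY bm25(sessions)",
    ("config",),
).fetchall()
print(rows)  # only the 2025-01-12 session mentions "config"
```

In the real system an LLM pass would then summarize the matching rows before they’re injected into context, but the retrieval itself is this cheap.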

Third: Honcho dialectic user modeling. This one’s the most ambitious. It passively builds a model of you across sessions — preferences, communication style, domain knowledge, work patterns — organized into 12 identity layers, and it models both you and itself in relation to each other. In practice, this means Hermes starts anticipating things. After a couple of weeks it figured out I prefer short commit messages and don’t need verbose explanations of stuff I already know.

The whole memory stack is cache-aware too. It freezes the system prompt at session start so high-frequency model calls can reuse cached context. Keeps inference costs from spiraling.
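Why freezing the prompt matters: provider-side prefix caches key on the exact token prefix, so any edit to the system prompt invalidates every cached call behind it. A toy illustration (the cache here is a client-side stand-in; real providers cache at the inference layer, and none of these names are Hermes APIs):

```python
# Toy illustration of prefix caching. An lru_cache stands in for
# provider-side KV-cache reuse of a shared prompt prefix; it is NOT
# how Hermes or any inference provider actually implements this.
from functools import lru_cache

SYSTEM_PROMPT = "You are Hermes. User prefers short answers."  # frozen at session start

@lru_cache(maxsize=128)
def cached_prefix(prefix: str) -> str:
    """Stand-in for the expensive prefill over the shared prefix."""
    return f"<processed {len(prefix)} chars>"

def call_model(user_msg: str) -> str:
    # Every call shares the identical frozen prefix, so the
    # expensive prefill happens once and is reused afterwards.
    prefix_state = cached_prefix(SYSTEM_PROMPT)
    return f"{prefix_state} | answering: {user_msg}"

call_model("summarize the log")
call_model("draft a commit message")
print(cached_prefix.cache_info().hits)  # the second call hit the cache
```

If the system prompt mutated mid-session (say, memory got appended to the front), every high-frequency call would pay the full prefill again — which is exactly the cost spiral the frozen prompt avoids.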


Running It Everywhere

I mostly use Hermes from the terminal, but it supports way more. A single gateway process powers Telegram, Discord, Slack, WhatsApp, Signal, Matrix, Mattermost, Email, and SMS. Same agent, same skills, same memory — every platform.

hermes gateway setup
hermes gateway start

Message the bot and you’re in. Slash commands work everywhere: /new, /model, /skills, /compress.

Backend options: local, Docker, SSH, Daytona, Singularity, and Modal. I run local but I’m looking at Modal for a serverless setup where the agent hibernates when idle and wakes on demand. People are running this on $5 VPS instances and barely paying anything between sessions.


Other Stuff I’ve Found Useful

Subagent parallelism — spawn isolated subagents for parallel work. I’ve used this to research three libraries at the same time. It fans the work out and merges results.
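The fan-out/merge pattern is simple to picture. Here’s a toy version where plain threads stand in for subagents (Hermes spawns real isolated agents, which this deliberately does not model; the research function is a placeholder):

```python
# Toy fan-out/merge in the spirit of subagent parallelism.
# The "subagents" here are plain threads running a stand-in
# research function; real subagents are isolated agent processes.
from concurrent.futures import ThreadPoolExecutor

def research(library: str) -> str:
    """Placeholder for one subagent researching one library."""
    return f"{library}: summary of docs and tradeoffs"

libraries = ["httpx", "aiohttp", "requests"]
with ThreadPoolExecutor(max_workers=len(libraries)) as pool:
    results = list(pool.map(research, libraries))  # fan out, keep order
merged = "\n".join(results)  # merge step back into one context
print(merged)
```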

Programmatic tool calling with execute_code — write Python that calls Hermes tools via RPC. Collapses multi-step pipelines into a single inference call. Huge if you have repetitive multi-tool workflows.

Built-in cron scheduler — daily reports, nightly backups, whatever you want. Runs unattended, delivers to any connected platform.

MCP server support — connect any MCP-compatible server to extend capabilities. Same protocol Claude and other tools use, growing ecosystem of integrations.
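To see why programmatic tool calling collapses pipelines, compare it to the usual loop: normally each tool call costs one inference round-trip. With execute_code, the model emits one script that drives several tools through an RPC-style dispatcher. The dispatcher and tool names below are invented stand-ins, not the actual Hermes execute_code API:

```python
# Sketch of "programmatic tool calling": one generated script drives
# several tools through a dispatcher, replacing N inference round-trips
# with one. TOOLS and call_tool are invented stand-ins, not Hermes APIs.
TOOLS = {
    "read_file":  lambda path: f"<contents of {path}>",
    "web_search": lambda q: [f"result for {q}"],
    "summarize":  lambda text: text[:40] + "...",
}

def call_tool(name: str, *args):
    """RPC shim a generated script would call into."""
    return TOOLS[name](*args)

# A three-tool pipeline collapsed into a single generated script:
contents = call_tool("read_file", "notes.md")
hits = call_tool("web_search", "FTS5 ranking")
report = call_tool("summarize", contents + " " + hits[0])
print(report)
```

In the round-trip model this would be three separate model calls, each waiting on the previous tool result; here the intermediate values flow through ordinary variables instead of the context window.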

The TUI is polished. Multiline editing, slash-command autocomplete, streaming tool output, interrupt-and-redirect. Feels like a tool someone actually uses every day and keeps improving, not an afterthought.


The Four Commands

If you just want to get going:

curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
source ~/.bashrc
hermes setup
hermes

My advice: don’t test it with throwaway prompts. The skills and memory don’t really kick in until you give it real multi-step work. Point it at an actual project. Use it for a week. That’s when the difference shows up — and that’s when you’ll start seeing why it’s worth running alongside OpenClaw instead of choosing between them.


GitHub: https://github.com/NousResearch/hermes-agent

Docs: https://hermes-agent.nousresearch.com/docs

Skills Hub: https://agentskills.io

Community list: https://github.com/0xNyk/awesome-hermes-agent

Hermes GEPA integration: https://github.com/NousResearch/hermes-agent-self-evolution

GEPA: https://gepa-ai.github.io/gepa/