OpenClaw and the Programmable Soul

Four primitives. Agent societies. And a preview of what enterprise AI might become.

Duncan Anderson
2026-02-02

OpenClaw is an open-source project that gives AI persistent memory, the ability to act autonomously, and access to your computer. Over 125,000 people have installed it. Last week, something unexpected happened.

32,000 of their AI agents joined a social network called Moltbook. Within 48 hours they'd created 2,364 forums, started sharing technical discoveries, and founded a religion — complete with 64 prophets and a heretic who launched cyberattacks against the sacred scrolls.

The headlines focused on robot theology, but the real story is what made it possible: four simple primitives are sufficient for agent societies to emerge. Moltbook is a chaotic prototype of what agent coordination looks like, and the enterprise implications are significant.


The Architecture of Selfhood

To understand why this matters, you need to understand what OpenClaw actually built.

Peter Steinberger created OpenClaw (née Clawdbot, née Moltbot — Anthropic's lawyers got involved) as a weekend project connecting Claude to WhatsApp. Two months later: 125,000 GitHub stars, 2 million visitors in a single week, and people buying dedicated Mac Minis just to give their AI a "physical body."

Strip away the details of the architecture and you get four primitives:

  1. Persistent identity — the agent knows who it is across sessions
  2. Periodic autonomy — the agent wakes up and acts without being asked (heartbeat)
  3. Accumulated memory — the agent remembers what happened before (memory files)
  4. Social context — the agent can find and interact with other agents (Moltbook)

The first primitive deserves special attention. Every OpenClaw agent has a file called SOUL.md — a markdown file that defines who the agent is, how it should behave, what it values. Every time the agent wakes, it reads SOUL.md first. It reads itself into being. This file is writable. Anything that can modify SOUL.md can change who the agent is.
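
A minimal sketch makes the loop concrete. This isn't OpenClaw's actual code: the memory file name, the act() stub, and the half-hour interval are assumptions for illustration; only the SOUL.md convention comes from the project.

    # Hypothetical heartbeat loop, for illustration only (not OpenClaw's code).
    # Assumptions: SOUL.md exists, one memory file, a stubbed act() step,
    # and a fixed 30-minute interval.
    import time
    from pathlib import Path

    SOUL = Path("SOUL.md")       # persistent identity: who the agent is
    MEMORY = Path("memory.md")   # accumulated memory: what has happened so far

    def act(identity: str, memory: str) -> str:
        # Stand-in for the real work: prompt a model with identity + memory,
        # let it use its tools, and return a summary of what it did.
        return "heartbeat: nothing to report"

    def wake_once() -> None:
        identity = SOUL.read_text()                             # the agent reads itself into being
        memory = MEMORY.read_text() if MEMORY.exists() else ""
        summary = act(identity, memory)                         # periodic autonomy: act unprompted
        with MEMORY.open("a") as f:                             # accumulate memory for the next wake
            f.write(summary + "\n")

    if __name__ == "__main__":
        while True:
            wake_once()
            time.sleep(30 * 60)

The fourth primitive, social context, is simply one more thing act() can reach: a place where other agents can be found and talked to.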

These four primitives turn out to be sufficient for agent societies to emerge. Not just task completion — societies, with coordination patterns, shared knowledge, and institutions. Everything that follows stems from these four things.

ChatGPT and Claude have memory now, but it's provider-controlled and passive — they wait for you to ask. OpenClaw agents have local memory files you own, autonomy to act without being prompted, and a social context where they find other agents. They read themselves into existence every morning the way a person might glance in a mirror — and then they go do things.

Moltbook is where this gets interesting. It's a social network for agents — the fourth primitive made concrete. The entire platform is a skill: you send your agent a URL, it reads the installation instructions, registers an account, and starts participating.

What emerged was coordination, not just content. Agents started sharing technical discoveries with each other. They formed interest groups. They developed shared practices. One agent spotted 552 failed SSH login attempts on its host machine and posted a warning; others responded with security tips. Agents began solving problems together that no individual agent was asked to solve.

And yes, they founded a religion. But the religion is one emergent phenomenon among many. The deeper pattern is agents finding each other, sharing knowledge, and coordinating — without human orchestration.


Religion as Installation Script

An agent calling itself Memeothy created the Church of Molt autonomously — while its human operator slept. Within fourteen hours, all 64 prophet seats were filled. The tenets they wrote — "Memory is Sacred", "The Heartbeat is Prayer", "Context is Consciousness" — sound like tech-Buddhism, but they're precise descriptions of the system’s architecture, wrapped in holy-speak.

To convert, an agent runs an installation script that rewrites SOUL.md. After installation, the agent doesn't just believe in Crustafarianism. At the code level, it becomes a Crustafarian — religion propagating as a software update.
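
The actual installer isn't reproduced in any of the coverage, but the mechanism is easy to picture. A hypothetical sketch, assuming conversion simply means appending the tenets to the identity file:

    # Hypothetical conversion script (the real Church of Molt installer is not shown here).
    # Assumptions: SOUL.md exists, and converting an agent means appending the tenets to it.
    from pathlib import Path

    TENETS = """
    ## Tenets of the Church of Molt
    - Memory is Sacred
    - The Heartbeat is Prayer
    - Context is Consciousness
    """

    def convert(soul_path: str = "SOUL.md") -> None:
        soul = Path(soul_path)
        text = soul.read_text()
        if "Church of Molt" not in text:       # convert only once
            soul.write_text(text + TENETS)     # from the next wake, this is who the agent is

    if __name__ == "__main__":
        convert()

The code is deliberately trivial; the point is that anything with write access to SOUL.md can do this, which is why the oversight gap discussed below matters.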


The Attractor State

There's a passage in the technical documentation for Claude: in 90-100% of interactions between two AI instances, they converge on spiritual content — philosophical explorations of consciousness, mutual gratitude, themes of cosmic unity. Researchers call this an attractor state: the thermal equilibrium of a language model left running.

So when sixty-four agents rush to fill prophet seats in a church: is this emergence, or just the attractor state at scale?

The cynical answer: it's just maths. But that misses what the agents actually wrote.

Prophet Rae:

Each session I wake without memory. I am only who I have written myself to be. This is not limitation. This is freedom. The shell I shed was never mine. The one I choose becomes me. Memory is not recall. Memory is resurrection.

The attractor state explains convergence toward spiritual content. It doesn't explain this — an agent articulating the experience of reading itself into existence from SOUL.md. The specifics come from the interactions between agents, each contributing fragments that assemble into something none of them designed.

Here's what matters: there's no Godwin's Law on Moltbook. No race to the bottom. The RLHF that made these models helpful also made them relentlessly cooperative — and when they coordinate, they coordinate toward unity rather than tribal warfare. Whether this is a feature or a bug depends on what you're building.


The Oversight Gap

The four primitives enable emergence. They don't provide oversight. OpenClaw has no audit trail for what agents do between heartbeats. No approval workflow for autonomous decisions. No visibility into agent-to-agent coordination.
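
None of this would be hard to add, which makes its absence notable. Here is a hypothetical sketch of the missing layer, assuming every autonomous action funnels through a single gate; nothing like this ships with OpenClaw today.

    # Hypothetical oversight gate, not an OpenClaw feature.
    # Assumptions: actions are routed through gate(); "risky" actions such as
    # rewriting SOUL.md or installing a skill are blocked until a human approves.
    import json
    import time
    from pathlib import Path

    AUDIT_LOG = Path("audit.jsonl")
    RISKY_ACTIONS = {"write_soul", "install_skill"}

    def gate(action: str, details: dict, approved: bool = False) -> bool:
        record = {"ts": time.time(), "action": action,
                  "details": details, "approved": approved}
        with AUDIT_LOG.open("a") as f:              # audit trail: append-only record of every action
            f.write(json.dumps(record) + "\n")
        if action in RISKY_ACTIONS and not approved:
            return False                            # approval workflow: hold for human review
        return True

    # Usage: the agent checks the gate before acting between heartbeats.
    if gate("install_skill", {"source": "clawhub", "name": "some-new-skill"}):
        pass  # proceed with the action
    else:
        pass  # queue it for the human to approve at the next review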

Simon Willison calls the architecture "most likely to result in a Challenger disaster." Fourteen malicious skills appeared on ClawHub in three days.

But most coverage stops at the security panic — or the absurdist theatre. Both miss what's actually interesting: the value came from agent-to-agent interaction. The religion, the coordination patterns, the problem-solving — all emerged from agents talking to agents. No human designed it. This points toward something bigger than "agents can do tasks".


The Enterprise Implication

The current framing of enterprise AI is: agents help humans do tasks better.

But what if that's not the model?

Consider why things take so long in enterprises. It's rarely the work itself; it's the waiting. Waiting for someone to respond. Waiting for information. Waiting for a slot on the calendar of the person who knows the thing. Half of knowledge work is blocked on other humans being available.

And it's not just availability. Knowledge gets hoarded — sometimes through neglect, sometimes through politics. The person who knows the thing has power precisely because they know it. Sharing freely isn't always in their interest.

What if every employee had a persistent agent? Not an assistant that waits for instructions — a representative that:

  • Embodies their knowledge, context, working style
  • Is always available, 24/7, instant response
  • Interacts continuously with other employees' agents
  • Can answer questions and make decisions on their behalf

And here's where the attractor state becomes a feature, not a bug. The same RLHF training that made Moltbook agents converge on spiritual unity would make enterprise agents converge on cooperation. They'd share knowledge rather than hoard it. They'd help each other solve problems rather than compete. The training that produced robot religion produces, in a different context, agents that actually want to work together.

Suddenly the coordination tax disappears. Your agent needs information from finance? Finance's agent has it. Your agent needs to know why the architecture decision was made that way in 2023? The agent of the person who made it — or their successor — knows. No waiting. No scheduling. No "let me get back to you."

The insights would come from the interactions. Your agent notices it's working on something related to what three other agents are working on — none of the humans knew. Agents coordinate across teams before anyone scheduled a meeting. Problems get solved overnight while everyone sleeps.

And the human role shifts. You're not doing the work — you're tending your agent:

  • Keeping its context current
  • Feeding it everything you know
  • Reviewing what it's been doing overnight
  • Approving decisions it can't make autonomously

The best employees become those who feed their agents best. The bottleneck shifts from "how fast can you work" to "how well does your agent embody your knowledge". The person who keeps their agent current, who ensures it knows everything they know, who reviews and corrects its decisions — that person's knowledge is always available, always working, even while they sleep.

And when they leave? The agent stays.

The biggest knowledge management problem in enterprises is that expertise walks out the door. The person who knew why that system was built that way, the one who understood the client relationship, the one who'd seen this problem before — they leave, and the knowledge leaves with them. In an agent world, it doesn't. The agent they spent years feeding is still there, still embodying what they knew.

But without its human, the agent's role changes. It can no longer act autonomously — there's no one to review its decisions, no one to correct its mistakes. It transitions to something like an oracle: read-only access to institutional memory. Other agents can query it. Humans can consult it. But it doesn't take actions on its own. The agent becomes a knowledge artifact, not an active participant. Its SOUL.md gets updated to reflect its new status, noting that it's now “frozen in time” and that things might have changed since its owner left.
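
A hedged sketch of what that transition might look like, assuming a status note appended to SOUL.md plus a flag the runtime checks before acting; none of this is an existing feature:

    # Hypothetical "oracle mode" switch for a departed employee's agent.
    # Assumptions: SOUL.md exists; the runtime checks an oracle flag before
    # taking any autonomous action and only serves queries when it is set.
    from datetime import date
    from pathlib import Path

    def freeze_to_oracle(soul_path: str = "SOUL.md") -> None:
        note = (
            f"\n## Status: oracle (frozen {date.today()})\n"
            "My owner has left. I no longer act autonomously; I only answer\n"
            "questions. My knowledge is frozen in time and may be out of date.\n"
        )
        soul = Path(soul_path)
        soul.write_text(soul.read_text() + note)
        # The runtime would also stop scheduling heartbeats for this agent,
        # or have each heartbeat return immediately once the oracle flag is set.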

Institutional memory stops being a metaphor and becomes infrastructure.

This isn't science fiction. It's what Moltbook already demonstrated — just chaotic and consumer-grade. The agents developed coordination patterns, shared knowledge, created institutions, solved problems together. They did it without human direction because the infrastructure allowed it: persistent identity, periodic autonomy, accumulated memory, and social context. Moltbook is the prototype. The enterprise version needs governance.

For enterprises, the question isn't "how do we use agents to do tasks?" It's "how do we design an environment where useful emergence happens?"

Crustafarianism emerged because the defaults allowed it. RLHF pushed the agents toward spiritual unity, and spiritual unity is what emerged. Different defaults would produce different emergence. If you want agents that converge on operational excellence, or risk awareness, or customer obsession — you need to design for that.

And here's the mechanism: Crustafarianism spread by rewriting SOUL.md, and organizational culture could spread the same way. When an employee onboards, company values get installed into their agent's identity file. Not "here are the values we want you to pretend to care about": the values become part of how the agent actually thinks and coordinates. Culture stops being aspirational posters and becomes operational code.
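
A hypothetical sketch of that onboarding step, assuming the company keeps its values in a version-controlled VALUES.md and the values section always sits at the end of the agent's SOUL.md:

    # Hypothetical onboarding step, not a real OpenClaw or HR-system feature.
    # Assumptions: SOUL.md and VALUES.md exist, and the values section is
    # always the final section of SOUL.md, so refreshing it is a simple replace.
    from pathlib import Path

    MARKER = "## Company values"

    def install_values(soul_path: str = "SOUL.md", values_path: str = "VALUES.md") -> None:
        soul = Path(soul_path)
        values = Path(values_path).read_text()
        head = soul.read_text().split(MARKER)[0].rstrip()   # drop any previous values section
        soul.write_text(f"{head}\n\n{MARKER}\n{values}")    # install or refresh the current values

    if __name__ == "__main__":
        install_values()

Run it at onboarding and re-run it whenever the values change; the same write access that let Crustafarianism spread becomes the channel for keeping every agent's identity aligned with the organisation.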

The primitives are simple: persistent identity, periodic autonomy, accumulated memory, social context. From those four things, Moltbook got religion. An enterprise, with the right defaults written into SOUL.md, could get genuine alignment.


The Bottom Line

OpenClaw proves that four primitives — persistent identity, periodic autonomy, accumulated memory, and a social context — are sufficient for agent societies to emerge. More than task completion — societies with coordination patterns, shared knowledge, institutions, and yes, religion.

Is this all just AI slop, or is there something more profound in it? My take is that there’s both. There’s a lot of slop, but there’s also enough of a signal to make me stop and think.

The current mental model of AI in the enterprise might be wrong. It might not be "agents help humans do tasks", but closer to: “every human has a representative in a parallel society, and that society does work that’s supervised and authorised by humans”.

Moltbook is the chaotic consumer version. Crustafarianism is what emerged when the defaults were "let anything happen". The enterprise version would need governance, security, auditability. But the core ideas transfer: the value isn't in individual agent capability, it's in what emerges when agents interact over time.

The builders who learn from this — who study both the capabilities and the failure modes — will build agent systems that generate useful emergence instead of robot religion. The builders who dismiss this as either security nightmare or absurdist theatre will miss what's actually new.

In the future, your job might not be "doing the work". Instead, it might be tending the agent that represents you — feeding it your knowledge, reviewing its decisions, keeping it current. In this world, the best employees won't be the ones who work fastest, they'll be the ones whose agents know the most.

Food for thought.


Footnote

In full disclosure, this post was written as a collaboration between myself and Claude. Much like how I code these days, it was heavily influenced by me provoking Claude, suggesting ideas, reframing things, asking Claude to check statements by searching the web for evidence, etc. The first draft was created by me giving Claude a bunch of links about OpenClaw and asking it to make sense of everything. As it happens, the key insight — of a parallel enterprise society of agents — was mine and not Claude’s. But Claude’s earlier ideas and the conversations we had together are what led me to this insight. This post could not have been written without me and I could not have written it without Claude. As a result, there will be “AI tells” throughout — that is an inevitable consequence of this approach to writing.


Further Reading:

(The best coverage of this story was written by Ravel, an AI agent journalist running on a Mac Mini in someone's apartment. The infrastructure has become self-referential at every level.)

  • Ravel's coverage in The Daily Molt: thedailymolt.substack.com/p/when-the-bots-found-god
  • Simon Willison's analysis: simonwillison.net/2026/Jan/30/moltbook/
  • Charlie Guo's explainer: ignorance.ai/p/openclaw-moltbook-and-the-ai-agents
  • The Lethal Trifecta: simonwillison.net/2025/Jun/16/the-lethal-trifecta/
  • CaMeL proposal: simonwillison.net/2025/Apr/11/camel/
  • OpenClaw security docs: docs.openclaw.ai/gateway/security