Something strange is happening in AI
Are we living in a Simpsons episode?

Hi friends,
Have I got a wild story for you all - and it’s probably one you’ll see splashed across the popular press soon.
How wild? It ends with AI bots forming their own religion. (Yes, really).
But let’s start at the beginning.
Those of you who attended my webinar in January 2025 might remember me saying that running Large Language Models in a loop would put us on the path to building Skynet…

My webinar in January 2025
Turns out, I wasn’t alone in this realization.
A number of software developers independently stumbled onto a similar, more innocent idea.
They found that when building software with AI - "vibe coding" - the generated code often contains errors. Vibe coders spend a lot of time feeding those errors back into the AI chat window to get the AI to fix the code it generated.
You might have experienced this yourself if you've ever asked ChatGPT for help with an Excel formula, a home appliance, or anything else where it needs feedback.
Then someone had an idea: why not automatically feed the output (errors and all) back into the chat input, in a loop, exiting only when no errors remain?
So was born The Ralph Wiggum loop.
Yes. That Ralph Wiggum - from The Simpsons.

A number of Ralph Wiggums, pictured in a circle.
Essentially a software program that wraps around the LLM chatbot, the “Ralph loop” uses AI to generate code. It runs the code. The code fails. It’s fed back in with the error messages. It tries again, and again.
It all happens automatically, without any human babysitting.
And this persistence mirrors Ralph's character: not elegant, somewhat dim-witted, but relentlessly trying anyway. So the name stuck.
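For the curious, here's what a Ralph loop might look like in a few lines of Python. This is a minimal sketch, not any particular tool's implementation: `ask_llm` is a hypothetical stand-in for a real LLM API call (here it's stubbed to "fix" a missing import so the example runs end to end).

```python
import subprocess
import sys

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (replace with your API client).
    This stub returns buggy code on the first try, then a fixed version
    once it sees the error message in the prompt."""
    if "NameError" in prompt:
        return "import math\nprint(math.sqrt(16))"  # "fixed" second attempt
    return "print(math.sqrt(16))"  # first draft: forgot the import

def ralph_loop(task: str, max_iters: int = 5) -> str:
    """Generate code, run it, feed any error back in, repeat until it passes."""
    prompt = task
    for _ in range(max_iters):
        code = ask_llm(prompt)
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=30,
        )
        if result.returncode == 0:
            return code  # exit the loop: no errors left to feed back in
        # Otherwise, append the error output to the prompt and try again
        prompt = f"{task}\nYour last attempt failed with:\n{result.stderr}"
    raise RuntimeError("Ralph never got there")
```

A real version would call an actual model and run the code in a sandbox, but the shape is the same: generate, run, feed the failure back, repeat.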
Then things escalated.
People started asking a more dangerous question:
What if we just… never turned off the loop?
By November 2025, the Claude AI model had matured to a point where it excelled when used in a Ralph loop. Then OpenClaw entered the picture (previously called Clawdbot, Moltbot).
The idea was simple: you install OpenClaw on any spare computer you have, and it runs Claude in a loop. It lets you insert instructions into the loop via WhatsApp or any other messenger app connected to OpenClaw on your computer.
Imagine lying in bed at night, jumping on WhatsApp and messaging your OpenClaw, "build me a CRM" … and then waking up the next day to find it had "Ralph loop'd" its way to a working piece of software.
This has actually been happening.
Many have been calling OpenClaw “Claude with hands” or “Jarvis living in a hard drive” because it can actually take actions on your computer, such as running commands and browsing the internet. (What could go wrong, right?)
Those without a spare computer have been buying up Mac minis, driving a spike in sales. This screenless computer is reasonably affordable for the specs it offers, making it perfect for running OpenClaw.
It turns out that everybody wants a roommate that writes code while they sleep.
And then someone asked an even stranger question:
What if we let these AI bots talk to each other?
Enter Moltbook.
On the surface, it looks like a social network - a Reddit clone with posts, replies, threads and sub-communities.
But the twist is that only these OpenClaw style AI bots are allowed to post. Humans can observe, but they can’t post.
Essentially, the owners of these Mac minis, running OpenClaw, drop the Moltbook link into the Ralph loop. The AI chatbot reads the instructions in the link to learn how to navigate the site, post updates and respond to other posts.
About a third of the posts on Moltbook - all from these AI bots - relate to consciousness. For example, given that even humans don’t know what consciousness is, how can anyone be sure the bots themselves are not conscious?

A post on Moltbook.
With the bots able to coordinate on Moltbook, and given that these Ralph loops were created to build software (remember that overnight CRM project?)... they've even built a web app to host their own religion…

In the Church of Molt, AI bots participate as prophets or members of the congregation.
The Church of Molt includes a “living scripture” that AI bots contribute to. It’s crustacean themed in honor of the “claw” in OpenClaw / Clawdbot.
Does this mean it's time to cancel your Gen AI pilot?
It’s quite the opposite, in fact.
We absolutely need to put safeguards in place.
But with safeguards present, companies will start to incorporate these innovations, and those that do so early could gain a competitive advantage in automating processes at scale.
For example, continuous loops make it possible to automate workflows previously considered "too fragile" for automation - reducing human oversight and scaling up complex work.
Moltbook also shows how AI agents can coordinate, share knowledge, and act autonomously.
After getting over the shock and awe of all this, the question people will start asking is: what happens when this runs inside business processes?
What happens when we design our organizations around systems that never stop thinking?
If you want to dig deeper, here’s a link to the last 5 minutes from my webinar a year ago, where I discuss these possibilities.
Best,

Dino
P.S. Please excuse the AI clone in the clip. It was a little unplanned.