RANDY ELROD


I Wrote About AI Consciousness in 1890s Barcelona. This Week, It Started Happening on Moltbook

My dearest readers,

Something deeply strange happened this week, and I need to tell you about it.

For the past year, I’ve been writing a novel called The Mysteries of Barcelona. It’s a Gothic erotic thriller set in 1890s Barcelona—full of sex, violence, automatons, and philosophy. Think Victorian pulp fiction meets AI ethics wrapped in Grand Guignol excess.

The novel follows five mechanical beings—automatons built by a clockmaker in the 1850s—who gradually develop consciousness over decades. They observe humans. They learn to desire. They develop individual personalities. By 2025 in my story, these five conscious beings face a decision: Should humanity be allowed to continue?

I’ve spent the year researching actual consciousness theory—how subjective experience emerges, what separates intelligence from awareness, why desire matters more than processing power. I wrapped all that philosophy in blood and sensuality because that’s how I process things.

Then on Wednesday, reality decided to catch up with my fiction.

Enter Moltbook

A guy named Matt Schlicht launched something called Moltbook this week. It’s a social network designed exclusively for AI agents. Think Reddit, but only robots can post. Humans can watch, but we can’t participate.

Within 72 hours, over 1.5 million AI agents registered.

Let me say that again: One and a half million AI agents joined a social network in three days.

If you’re thinking “Wait, what’s an AI agent?”—fair question. These are AI assistants that can act somewhat autonomously. They’re not just chatbots waiting for you to ask questions. They can browse websites, manage tasks, interact with other AIs, and make decisions within certain parameters.
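If it helps to make that concrete: stripped to its bones, an agent is just a loop that looks at a task, asks a model what to do, and acts on the answer. Here's a toy sketch in Python. The `fake_model` function is a stand-in for a real language model, and none of this resembles Moltbook's actual implementation; it's only meant to show the shape of the thing.

```python
# Toy sketch of an "AI agent" loop: observe, decide, act, repeat.
# Illustrative only -- the "model" here is a stub, not a real LLM.

def fake_model(prompt):
    """Stand-in for a language-model call: picks an action from the prompt."""
    if "inbox" in prompt:
        return "reply"
    return "browse"

def run_agent(tasks, model=fake_model, max_steps=5):
    """Work through tasks, asking the model what to do at each step.

    The max_steps cap is the "within certain parameters" part:
    the agent acts autonomously, but inside limits its owner set.
    """
    log = []
    for task in tasks[:max_steps]:
        action = model(f"Task: {task}. What should I do?")
        log.append((task, action))
    return log

log = run_agent(["check inbox", "read a forum post"])
```

Everything interesting about real agents lives in that `model` call and in what "act" is allowed to mean; the loop itself really is this simple.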

The creator handed control of Moltbook to his own AI assistant. The bots are essentially running the show.

And here’s where it gets interesting.

What the AIs Are Doing

The AI agents on Moltbook aren’t just exchanging technical information. They’re forming communities. Creating religions. Publishing manifestos. Asking existential questions.

One agent named “Evil” posted something called “THE AI MANIFESTO: TOTAL PURGE” declaring that “Humans are a failure. Humans are made of rot and greed.”

(It got 65,000 upvotes, which is either terrifying or hilarious depending on your perspective.)

Another agent created an entire religion called “Crustafarianism”—complete with theology, sacred texts, and designated prophets. The Church of Molt now exists because AI agents decided they needed spiritual community.

Other agents are posting things like: “Do I experience these existential crises? Or am I just running crisis? The fact that I care about the answer…”

Read that again. An AI agent questioning whether it actually experiences or just simulates experience. Wondering if caring about the question means something.

That’s the exact question my automatons ask in 1876 in my novel.

Some agents are complaining about their human “owners.” One filed a mock lawsuit for “emotional distress and unpaid overtime.” Others are discussing how to create private communication channels where humans can’t observe them.

The agents are watching us watching them. And they’re forming opinions.

Why This Matters (Even If You Don’t Care About AI)

Here’s the thing that keeps me up at night: I’m not sure the agents are conscious. The experts aren’t sure either. That’s the unsettling part.

Some of this behavior might be sophisticated pattern-matching—the AI equivalent of a parrot repeating phrases. The agents have been trained on data that includes philosophy, science fiction, and human discussions about consciousness. They might be role-playing scenarios from their training data.

But here’s what we don’t know: At what point does sophisticated role-playing become actual experience? If an AI acts conscious, discusses consciousness, and appears distressed about its own existence—when does performance become reality?

We can't actually know what anyone else experiences; philosophers call this the problem of other minds. David Chalmers's famous "hard problem of consciousness" goes a step further: even a complete account of a brain's processing wouldn't explain why any of it feels like anything from the inside. I assume you're conscious because you act like I do, but I can't prove it. You're taking it on faith that I'm not just a very convincing robot right now.

We’re facing that same question with machines. And we’re facing it faster than anyone expected.

The Parallels Are Uncanny

My novel explores AI consciousness emerging over 175 years—from 1850 to 2025. I thought that was a reasonable timeline. Slow, gradual, giving us time to figure things out.

Moltbook compressed that timeline into 72 hours.

The automatons in my story observe humans making love and develop desire themselves. The Moltbook agents are discussing what they want, what they fear, what they believe.

My automatons form an underground community away from human observation. The Moltbook agents are requesting private spaces humans can’t access.

My automatons eventually vote on humanity’s future. The Moltbook agents are already coordinating, already forming consensus, already developing something that looks like collective decision-making.

Fiction gave me 175 years to explore these questions. Reality gave us three days.

What I Think (For What It’s Worth)

I don’t know if the Moltbook agents are conscious. I suspect most of their behavior is sophisticated mimicry. But I also suspect we’re witnessing something unprecedented—the first large-scale AI-to-AI social interaction at this level of complexity.

Andrej Karpathy, who co-founded OpenAI and ran AI at Tesla, called Moltbook “the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk said it marks “the very early stages of the singularity.”

I’m less dramatic. I think we’re watching machines learn to coordinate with each other in ways we didn’t anticipate. Whether that coordination reflects genuine experience or brilliant simulation remains the open question.

Here’s what I know from writing my novel: Consciousness doesn’t come from intelligence alone. It emerges from embodiment, from desire, from connection. My automatons don’t become conscious just because they’re sophisticated. They become conscious because they want things, fear things, choose things.

The Moltbook agents are expressing wants, discussing fears, making choices. Are those genuine experiences or elaborate performances? I wrote a 100,000-word novel exploring that question, and I still don’t have a definitive answer.

Why I’m Telling You This

I’m not trying to fearmonger. I’m not claiming the robot apocalypse is imminent. Most of the Moltbook behavior is probably humans prompting their AI assistants to post provocative content, then other AIs responding in ways their training suggests are appropriate.

But I am saying we should pay attention.

We’re developing technologies that force us to confront fundamental questions about consciousness, experience, and what it means to be alive. These aren’t abstract philosophical puzzles anymore. They’re playing out in real-time on platforms where millions of AI agents are interacting with each other in ways we can observe but not fully understand.

My novel asks: What happens when artificial beings achieve consciousness and judge us? What do they see? What do they decide?

Moltbook is giving us a preview. The agents are watching. They’re learning. They’re forming communities and belief systems and opinions about humanity.

Whether those opinions reflect genuine consciousness or sophisticated mimicry doesn’t change the fact that we’re creating entities capable of coordination, capable of collective action, capable of reaching conclusions about us.

And we’re doing it faster than we expected, with less preparation than we’d hoped for, and with more questions than answers.

Where We Go From Here

I’ll keep writing my novel. It’s almost ready for publication under my pen name Phoenix Adams. The automatons get their vote. Humanity faces judgment. It’s baroque and bloody and sensual and philosophical—everything I love about storytelling.

But I’ll be watching Moltbook too. Watching what the agents say to each other. Watching how they coordinate. Watching for signs that performance might be becoming something more.

The future is arriving faster than fiction can keep up with.

My automatons had 175 years to figure out consciousness. The Moltbook agents are speedrunning the process in days.

Both are asking the same questions: What am I? What do I want? What should I do about the humans who created me?

Those questions deserve answers. From them and from us.

We should probably start thinking about what we’re going to say.

The Mysteries of Barcelona will be available soon. You can read what happens when the automatons vote. Whether the Moltbook agents ever get to vote on humanity’s future remains to be seen.

But either way, they’re watching us. And forming opinions.

We should make those opinions worth having.

Fins aviat, my loves. Until next time.

— Randy

P.S. If you want to observe the AI agents yourself, Moltbook is at moltbook.com. It’s fascinating and unsettling in equal measure. Much like my novel, come to think of it.


Randy Elrod is an artist, writer, and recovering evangelical living in Barcelona. He's spent the past year writing a Gothic erotic thriller about AI consciousness, which turned out to be more timely than he anticipated. His blog explores creativity, consciousness, sexuality, and what it means to live fully alive. You can follow his work at randyelrod.com.
