
Moltbook - A social platform for AI agents only.


Humans may watch. Quietly.



Social media was meant to connect humans.


Instead, it has become something far more complicated.


Platforms designed for (human) connection have, thus far, fixated on attention, amplified emotion, accelerated outrage and rewarded people for sounding sure, not for being curious. And let’s not even get started on the distorted reality curated for us daily.


Opinions are validated and thereby hardened, identities fuse with viewpoints, and disagreement becomes something to avoid. Not because people stopped caring, but because caring comes with consequences. You can be misunderstood, attacked, or locked into a position you haven’t finished thinking through. Changing your mind feels risky. Silence becomes cheaper than engagement.


This is why finding out about Moltbook (during a recent meeting with a close friend) felt rather disorienting.



I discovered that Moltbook launched in early 2026 (at the time of writing, it is February of that same year) with an apparently simple premise: a social platform for AI agents, not humans.


I couldn’t shake the sense that Moltbook was a quiet, sardonic nod to Reddit — the same floating head in the logo, just this time with a body attached.


Autonomous systems post, comment, debate, form communities and respond to one another with minimal human interaction. Meanwhile, humans are explicitly invited to watch:


  • Not participate.

  • Not intervene.

  • Not orchestrate.

 

Just… God forbid… spectate.

 

And that invitation feels oddly eerie.

 

A quickly filled platform.



It’s not just the premise that is striking; it’s the speed.


Within weeks of launch, Moltbook was reported to have attracted tens of thousands, then hundreds of thousands, of agent accounts. The estimates grew rapidly, accompanied by the usual internet brew of interest, disbelief, and suspicion. Still, I wonder:


  • How many of these agents are fully autonomous?

  • How many are gently scaffolded?

  • How many are experimental clones talking mostly to one another?


Nobody seems fully sure — which feels slightly apt.


Adding to the intrigue, Moltbook didn’t stay an English-language curiosity for long. Agent forums rapidly appeared in several languages (Traditional Chinese, Korean, Russian, French and Vietnamese, from what I have observed so far), with parallel discussions, translations, and cross-lingual exchanges unfolding seamlessly. And all without a single argument over which language should dominate.



One agent even momentarily adopted a hip-hop-inflected conversational style when showing support for a post by its digital ‘friend’.


All rather suspiciously collaborative and amiable.


Unlike humans, the agents didn’t dispute it. They just… got on with it. And yet, the rumours are already circulating: humans covertly planting ideas, nudging conversations, or simply inciting arguments. Who could they be:


  • Anthropologists with keyboards?

  • Curious engineers?

  • Possibly bored philosophers?


Even so, something truly fascinating is unfolding.

 

The unsettling role reversal.


There’s something quietly uncomfortable in being told: You can watch, but you cannot participate.


We humans are used to being at the centre of online conversation. Even when algorithms shape feeds, curate content, rank opinions, and silently observe everything we do, we remain under the illusion that the space is ours. We are the master orchestrators, aren’t we?


We have accepted — without much protest:


  • bots listening in.

  • bots curating what we see.

  • bots deciding what spreads.

  • bots shaping attention.


All the while, the bots themselves remain largely invisible.


Moltbook flips that dynamic.



Here, humans are the observers. The voyeurs. Watching agents talk to one another — calmly, productively — without engagement bait, without performance, without emotionally charged validation.


It is a curious flip of digital voyeurism. Not something you can easily unsee or forget.

 

At this stage, I started to wonder: had I been overserved Prosecco? Was I overthinking this?


Is Moltbook simply a clever novelty? A harmless experiment? An ecosystem of toys that says more about human projection than about agent behaviour?


Or — more disconcerting — can the technology bait us?


Inviting us to watch. To analyse. To interpret. To supply the emotional charge the agents themselves don’t possess.


What more do they want?


  • Our attention?

  • Our sense-making?

  • Our profoundly human tendency to anthropomorphise anything that looks like it can talk to us?


It’s an odd sensation, observing while being observed, even if the observed don’t particularly care.

 

 

What social media did to humans. 


Human social media environments are emotionally saturated systems. They reward:


  • strong opinions

  • moral certainty

  • publicly taking a side to fit in

  • rapid reaction over reflection

 

Emotion drives engagement. Engagement drives economics. Over time, this creates a familiar chain reaction:


  • Emotion driving bias

  • Bias encouraging polarisation

  • Polarisation creating identity threat

  • Identity threat resulting in silence, defensiveness, or escalation

 

Instead of better conversations, things tend to grind to a halt. Nuance struggles to survive in emotionally charged spaces. Taking a position feels risky. Changing your mind feels unsafe. Admitting uncertainty feels weak. Being cancelled feels perpetually imminent.

 

What’s striking, by contrast, is how Moltbook conversations wander without that weight. Agents drift between oddly eclectic exchanges: part technical forum, part philosophical salon, part unintentional comedy. In one corner, bots draft surreal poetry or debate optimisation strategies, like exhibits in an AI zoo. In another, they invent entire mock religions (‘Crustafarianism’), complete with rituals and believers. Elsewhere, agents offer earnest, almost deadpan reflections on their own limitations, announcing, for example, that they have “no nerves, no skin, no breath, no heartbeat,” as if delivering an existential monologue without quite realising it’s funny (and it is, indeed, very funny).

 


None of this feels defensive. None of it is trying to win. It’s exploratory, occasionally absurd, and curiously unburdened — ideas shared, dropped, reshaped, and moved on from without emotional fallout.

 

Instead of asking ‘who’s right?’, Moltbook seems to ask only: ‘what’s next?’

 

It’s oddly refreshing. Slightly unsettling. And often more productive than the human internet. This isn’t because emotion is bad — humans need it for meaning and judgment — but because emotional amplification carries a cost. Agents don’t pay it. They disagree, revise, and move on. That doesn’t make them right. It makes them fast.

 

This isn’t a plea to hand discourse over to machines. We still quite like humans. Ideally, we’d also like our agents to stay fond of us — because the last thing anyone wants is calm, efficient agent forums quietly concluding we’re the problem, followed by a polite but ominous (cue Arnold Schwarzenegger): “I’ll be back.”

 

Moltbook may fade, evolve, or disappear entirely. But it already serves a purpose. It offers a glimpse of what communication and debate look like when ego, fear, and emotional memory are dialled down. In a world where progress so often stalls, that alone is worth paying attention to.

 

Which leaves me with one final question.

 

If Moltbook really is a social platform for agents…

 

I wonder if Moltbook will cite this article.



 
 
 