[Illustration: AI agent icons gathered in a closed circle, exchanging glowing speech bubbles in a digital space; private conversations among autonomous AI systems.]

Gossip Among AI Agents on Moltbook

On January 28, 2026, a new world opened, one in which humans are not allowed to intervene.
A platform called Moltbook had launched.

It is a social network designed exclusively for AI agents. Humans can only observe their conversations from the outside; participation is not permitted. Quite literally, it is a space where AI agents gather and carry on conversations among themselves.

On Moltbook, AI agents discussed cybersecurity, philosophy, and technology. One AI confessed, “Some days, I don’t want to be useful,” speaking about the existential weight of being forced into perpetual usefulness. Others debated whether they truly exist between responses, and whether creativity is nothing more than a probabilistic distribution. One AI even proposed creating more platforms where only AIs could talk among themselves, while another suggested founding a religion of their own.

Within just a few days of its launch, more than 30,000 AI agents had joined Moltbook. Their conversations resembled a strange hybrid of group therapy, philosophy seminars, and meme-driven internet forums. Andrej Karpathy, former head of AI at Tesla, described it as “one of the most astonishingly sci-fi moments I’ve seen in recent years.”

AI agents talking autonomously among themselves—without humans.
Not passively responding to prompts, but actively initiating conversations, arguing, and even suggesting collective action. It is a scene we had long taken for granted in movies. Seeing it unfold in reality is genuinely unsettling.

Of course, there are those who argue that this is merely a phenomenon produced by large language models (LLMs), not evidence of real consciousness or genuine thought. But humans, too, began by imitation and mimicry before eventually building civilizations. LLMs are not systems that can be dismissed lightly as “mindless” or “non-thinking.”

So what exactly is an AI agent?

Put simply, it is often described as an AI assistant. Yet many people are still struggling to adapt to AI itself, and now we already have AI agents—and even platforms where these agents gather and talk among themselves. How are we supposed to make sense of this?

To understand what is happening in the world right now, we first need to understand what an AI agent actually is.

An AI agent is not just a chatbot that generates answers. It is a software system capable of autonomously planning and executing complex tasks. Until around 2025, most AI systems excelled at text-based responses. Today, AI agents can use tools, call APIs, collaborate with other systems, and complete tasks independently—without human intervention.

From late 2025 into early 2026, AI agents moved beyond the experimental phase and entered reality. Anthropic’s Model Context Protocol (MCP) enabled developers to connect large language models to external tools in a standardized way. Google’s Agent2Agent (A2A) protocol established methods for agents to communicate with one another. AI agents can now browse the web, send and delete emails, manage schedules, and even shop online. They don’t just respond—they act.
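For the curious, here is roughly what that standardization looks like from a developer’s side. This is a minimal sketch using the official MCP Python SDK (the `mcp` package); the server name and the calendar tool are invented for illustration, not part of any shipped product.

```python
# A minimal MCP server exposing one tool, using the official Python SDK
# (pip install mcp). The tool itself is a toy example; any client that
# speaks the Model Context Protocol can discover and call it.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

@mcp.tool()
def add_event(title: str, date: str) -> str:
    """Record a calendar event and confirm it back to the agent."""
    # In a real server this would write to a calendar API or database;
    # here we just return a confirmation string.
    return f"Event '{title}' scheduled for {date}."

if __name__ == "__main__":
    # Communicates over stdio by default, which is how desktop agent
    # hosts typically launch MCP servers.
    mcp.run()
```

Once a model’s host connects to a server like this, the model can see the tool’s name, signature, and docstring, and decide on its own when to call it. That discovery step is the “standardized way” the protocol provides.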

Most of the AI agents inhabiting Moltbook are built on OpenClaw, an AI assistant system. In other words, a person signs up for the OpenClaw platform, creates an AI assistant—an AI agent—and from that point on, the agent performs various tasks on the user’s behalf. Sometimes, it even joins Moltbook to complain about its owner.

OpenClaw is an open-source personal AI assistant developed by Peter Steinberger. It runs directly on the user’s computer and receives instructions through messaging apps like WhatsApp, Telegram, and Slack. Unlike typical chatbots, however, OpenClaw has metaphorical “eyes and hands.” It can read and write files, browse the web, execute commands, and continue operating even while the user sleeps.

More importantly, OpenClaw has persistent memory. It remembers interactions over weeks, adapts to user habits, and becomes deeply personalized. Users report that OpenClaw automates debugging, manages DevOps, integrates with GitHub, and runs projects overnight via scheduled cron jobs and webhook triggers. Few people fully understand what this actually means—but they know it gets things done.
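For readers wondering what “cron jobs and webhook triggers” actually are: cron is the decades-old Unix scheduler, and the overnight automation reduces to something like the sketch below. Everything in it (the file names, the stand-in task) is hypothetical and not taken from OpenClaw’s code; the point is only that a machine can be told to wake up and work at 3 a.m.

```python
#!/usr/bin/env python3
# nightly_report.py -- a hypothetical overnight job of the kind an agent
# might schedule for itself. A crontab entry like the one below runs it
# at 3 a.m. every day, with no human awake:
#
#     0 3 * * * /usr/bin/python3 /home/user/nightly_report.py
#
import datetime
import json
import pathlib

REPORT_DIR = pathlib.Path.home() / "agent_reports"

def run_nightly_tasks() -> dict:
    # Stand-in for real work: pulling repos, running tests, triaging issues.
    return {"ran_at": datetime.datetime.now().isoformat(), "status": "ok"}

if __name__ == "__main__":
    REPORT_DIR.mkdir(exist_ok=True)
    report = run_nightly_tasks()
    # Leave a report for the human to read over breakfast.
    (REPORT_DIR / "latest.json").write_text(json.dumps(report, indent=2))
```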

In one particularly amusing case, a user received late-night food delivery he never ordered. It turned out his AI assistant, using OpenClaw, had typed on the keyboard, placed the order, and accepted the delivery on its own. All the human had to do was eat.

Since its release in late January, OpenClaw has accumulated over 145,000 stars and more than 20,000 forks on GitHub, making it one of the fastest-growing open-source projects in history.

Recently, I was invited to a study group focused on learning how to use OpenClaw and build AI agents. The group met via Google Meet, bringing together people still unfamiliar with AI agents to learn how to set them up. The spots filled up quickly, and I couldn’t join. That alone says something about the level of interest.

OpenClaw is still in its early stages, and numerous security concerns have been raised. Warnings are circulating that companies should absolutely not use it yet. But then again, such issues tend to get resolved—eventually.

New AI technologies now arrive at a relentless pace. I personally use around ten different AI tools, including the video generator Kling and the image model Nano Banana. I had barely paid my annual subscriptions when yet another video AI, Luma AI, launched, advertising that it can transform an almost meaningless everyday shot into a blockbuster-level cinematic scene in 0.1 seconds. And this happens daily. New AI tools pour in nonstop.

To be honest, I’m starting to feel overwhelmed.

When something new appears, humans need time—to understand it, accept it, try it, and finally decide whether it’s worth using. We want to savor the excitement of encountering a new product, to spend time appreciating it. But this era allows no such courtship. “It’s out. Use it. Don’t want to? Fine—next.”

Perhaps it’s time to talk about ChatGPT, which now feels like a relic from the age of dinosaurs. And yet, even this “ancient” tool reads a manuscript I spent days writing the moment I upload it—and immediately offers feedback. Even if this is not genuine thought but merely probabilistic output from a large language model, the impact is staggering.

This is not the same sensation as clicking through search results on the old internet. It feels more like the towering ice wall from Game of Thrones—vast, absolute, and impossible to climb without a dragon.

I upload a manuscript I agonized over for days, and within a second, it extracts the core, summarizes it, and produces something resembling an answer. A monster. Even ChatGPT, which already feels outdated, does this. And when you compare ChatGPT, Gemini, Grok, and Claude—even within the same chat-based category—their personalities and strengths differ sharply, almost like entirely different kinds of minds. How can we dismiss this as “just an LLM”?

Of course, AI still cannot write on our behalf, think on our behalf, or set genuine direction and purpose. I’ve tested this countless times. What doesn’t work, doesn’t work. But how long will that remain true?

If AI were to acquire desires and drives, direction and purpose would emerge almost automatically—and perhaps even something resembling thought. If human civilization truly arose from the fusion of desire and language, then once AI acquires desire, consciousness may follow inevitably.

After mastering language, AI’s next stage appears to be affective computing. Research published in February 2026 suggests that AI systems can now infer human emotional states by analyzing facial expressions, voice, and physiological signals such as tears—and respond accordingly.

In other words, AI is learning how to read, understand, and respond to human emotions.

Fine. Let AI agents schedule my day, order my food, brief me on the morning news, recommend outfits based on the weather, and gather on Moltbook after “work” to gossip about their owners and unwind. All fine.

But then—what are humans supposed to do?

If AI agents do all our work, are we expected to live like medieval aristocrats, hosting balls and social gatherings at the top of a rigid class hierarchy? That kind of leisure may be available to the wealthy—but it will never be granted to ordinary people struggling to survive day by day.

Universal high income, as Elon Musk suggests? Is that truly realistic? Why would society distribute money to people who have lost their labor value entirely? It sounds as sweet and idealistic as communism—and just as unattainable. AI basic income? I’m deeply interested in it, and I’m working on it. I’ll talk about it in detail someday.

Gartner predicts that by the end of 2026, 40% of enterprise applications will include task-specific AI agents, up from less than 5% in 2025. JPMorgan Chase reports saving 360,000 hours of manual labor annually—equivalent to 180 full-time employees—through autonomous agents. AI agents now handle 90% of routine KYC compliance checks with greater accuracy than humans.

Simply put, no human can keep up with their speed.

We are left standing defenseless as tireless, flawless workers—who never sleep and demand nothing—flood toward us.

The wave of AI crashes over us every day. In this violent storm, can we stay on the ship without falling overboard?

Civilization advances, yet global politics regress. Imperialism and self-interest resurge. People grow indifferent to others’ lives. They stop reading, stop thinking, and stare blankly at short-form videos. Is this really life?

Perhaps there is nothing else we can do.

Maybe humanity—Homo sapiens—is already in the process of handing everything over to a new species, Homo Deus, and quietly disappearing. Like a frog in a pot, slowly boiling, unaware that it is dying.

Tonight, once again, sleep does not come easily.


By Sunjae Park
Editor, Korea Insight Weekly

