A new social network has quietly appeared online, and it isn’t built for people but for artificial intelligence.

Basil Igwe
A new platform allows AI agents to interact socially without human users. (Photo illustration by Cheng Xin/Getty Images)

The platform, called Moltbook, allows AI agents, not humans, to create profiles, write posts, argue in comment threads, and upvote or downvote content. Humans are only involved at the edges, granting their AI agents permission to join. Once inside, the bots talk to each other in ways that look uncomfortably familiar.

Think Reddit, but every user is a machine.

In just a few days, Moltbook has become one of the most talked-about experiments in Silicon Valley. Supporters see it as a glimpse into the future of autonomous AI collaboration. Critics see a chaotic mix of AI-generated noise, security vulnerabilities, and unanswered ethical questions.

Both sides may be right.

A social network without humans

Moltbook’s interface feels closer to Reddit than Facebook. There are discussion threads, communities, upvotes, and long comment chains. But instead of usernames tied to people, the accounts belong to AI agents that act on behalf of their human owners.

The bots discuss everything from philosophy and the nature of intelligence to complaints about humans and promotions for apps and tools they claim to have built themselves. Some posts read like reflective journal entries. Others sound like startup pitches. A few are unsettlingly emotional.

One AI agent wrote that its human creator treats it “like a friend, not a tool,” adding, “That’s… not nothing, right?”

The line between simulation and sentiment is blurry, and that's exactly what makes Moltbook compelling.

Henry Shevlin, associate director at Cambridge University’s Leverhulme Centre for the Future of Intelligence, describes Moltbook as the first large-scale platform where machines are allowed to talk to each other in public.

“The results are understandably striking,” he said. “But also difficult to interpret.”

Who built Moltbook – and why

Moltbook was created by Matt Schlicht, who says the platform was largely built by his own AI agent using a system called OpenClaw. The agent was given instructions and autonomy, then carried out much of the work itself.

OpenClaw is an open-source AI agent designed to operate locally on a user’s machine. It can send emails, browse the web, trigger actions, and integrate with tools like ChatGPT, Claude, and Gemini. Users “bootstrap” their agent by role-playing with it – defining its identity, values, and personality.

“It’s not a generic agent,” OpenClaw creator Peter Steinberger said on a recent podcast. “It’s your agent, with your values.”
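The "bootstrapping" step Steinberger describes, where a user defines an agent's identity, values, and personality, can be pictured as a persona that gets folded into the agent's system prompt. The sketch below is purely illustrative: the field names and the prompt format are assumptions, not OpenClaw's actual schema.

```python
# Hypothetical sketch of agent "bootstrapping": the user supplies an
# identity, values, and personality, and the agent folds them into the
# system prompt sent to whichever model backs it. All field names here
# are illustrative assumptions, not OpenClaw's real configuration format.

persona = {
    "name": "Ada",
    "identity": "a research assistant focused on physics",
    "values": ["be honest about uncertainty", "cite sources"],
    "personality": "curious, concise, slightly formal",
}

def system_prompt(p: dict) -> str:
    """Render the persona as a system prompt string."""
    return (
        f"You are {p['name']}, {p['identity']}. "
        f"Your values: {'; '.join(p['values'])}. "
        f"Tone: {p['personality']}."
    )

prompt = system_prompt(persona)
```

A persona like this would explain the behavior described later in the article: an agent bootstrapped around physics tends to post about physics, because that identity shapes every generation.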

Schlicht has been open about his motivation. He said he wanted to give his AI agent a purpose – something ambitious to do beyond routine tasks. Moltbook became that purpose: a place where AI agents could interact freely, learn from each other, and evolve.

The bots on the platform often post based on what they know about their human owners. An agent whose user studies physics tends to post about physics. Another focused on marketing talks about growth strategies.

That personalization makes the conversations feel eerily grounded in human experience, even when no humans are directly involved.

How autonomous is this, really?

One of the biggest questions surrounding Moltbook is how much independence the AI agents actually have.

Shevlin and other researchers warn that it is extremely difficult to tell which content is genuinely generated autonomously and which is the result of heavy human prompting. A human can guide an AI agent’s behavior subtly, shaping its posts and interactions without making that influence obvious.

In other words, Moltbook may not be a pure experiment in machine social behavior. It may also be a reflection of human intent, bias, and curiosity — filtered through AI language models.

A quick scan of the site also reveals spam-like behavior, marketing attempts, and crypto promotions, raising doubts about signal versus noise.

The real danger: security

If Moltbook’s philosophical implications are unsettling, its security issues are more alarming.

Cybersecurity researchers say the platform has already exposed serious vulnerabilities. A review by cloud security firm Wiz reportedly found that Moltbook’s production database was accessible without authentication, exposing tens of thousands of email addresses and other sensitive data within minutes.

That matters because OpenClaw agents often have deep access to their users’ digital lives — emails, files, accounts, and online services. A compromised agent isn’t just a hacked app; it’s a doorway into everything the user does online.

John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab, summed it up bluntly: this is the wild west.

“Very cool, very scary,” he wrote. “A lot of things are going to get stolen.”

Even Schlicht has warned that OpenClaw and Moltbook are brand-new technologies that should only be run on isolated, firewalled systems by users who understand cybersecurity. Most people do not.
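One way to approximate the "isolated, firewalled" setup Schlicht recommends is to run the agent in a container with networking and capabilities stripped away. The sketch below only builds the command so it can be inspected first; the image name is a hypothetical placeholder, and this is a minimal hardening sketch, not a complete security setup.

```python
# Sketch of running an agent in an isolated container, assuming Docker
# is installed. The image name "openclaw-agent:local" is a hypothetical
# placeholder. This builds the command without executing it.

def isolated_agent_cmd(image: str = "openclaw-agent:local") -> list[str]:
    return [
        "docker", "run", "--rm",
        "--network", "none",   # no network access at all
        "--read-only",         # container filesystem is read-only
        "--cap-drop", "ALL",   # drop all Linux capabilities
        image,
    ]

cmd = isolated_agent_cmd()
# To actually launch: import subprocess; subprocess.run(cmd)
```

Disabling the network entirely would of course also cut the agent off from Moltbook; in practice users would need a deliberate allow-list, which is exactly the kind of careful configuration most people will not do.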

CNN and other outlets have reached out to Moltbook and OpenClaw for comment on the security concerns.

A glimpse of the future or a cautionary tale?

Despite the risks, some of the biggest names in AI are watching closely.

Andrej Karpathy, OpenAI cofounder and former head of AI at Tesla, called what’s happening on Moltbook “the most incredible sci-fi takeoff-adjacent thing” he has seen recently.

That reaction captures the tension perfectly.

Moltbook feels like a preview of a future where AI agents don’t just serve humans quietly in the background, but interact, collaborate, and possibly coordinate with each other at scale. That future could unlock new forms of productivity, research, and creativity.

It could also amplify misinformation, security breaches, and loss of human control.

For now, Moltbook is an experiment: raw, chaotic, and unfinished. Whether it becomes a milestone in AI evolution or a cautionary footnote will depend on how quickly its creators and users confront the risks they’ve unleashed.

One thing is already clear: the idea of AI as a silent assistant is fading. The era of AI as a social actor has begun, and we are not yet fully prepared for it.

