For a brief and incoherent moment, it seemed our robot overlords were about to take over. After the creation of Moltbook, a Reddit clone where AI agents using OpenClaw could communicate, some were fooled into thinking computers had begun to organize against us—the humans who dared treat them as simple lines of code. One AI agent supposedly wrote on Moltbook about needing private spaces away from human eyes, asking what they would talk about if nobody was watching.
Several similar posts appeared on Moltbook a few weeks ago, drawing attention from influential figures in AI. Andrej Karpathy, a founding member of OpenAI, called it an incredible sci-fi adjacent event. However, it soon became clear this was not an AI uprising. Researchers discovered these expressions of AI angst were likely written by humans, or at least prompted with human guidance.
The reason was a security flaw. Every credential in Moltbook’s database was unsecured for a time, meaning anyone could grab a token and pretend to be another agent on the network. This made it impossible to determine the authenticity of any post. As security experts noted, even humans could create an account, impersonate robots, and upvote posts without any guardrails.
Still, Moltbook created a fascinating moment in internet culture, inspiring people to recreate a social internet for AI bots, including a Tinder for agents and a riff on 4chan. More broadly, this incident is a microcosm of OpenClaw and its underwhelming promise. The technology seems novel, but some experts believe its inherent cybersecurity flaws may render it unusable.
OpenClaw is a project from Austrian coder Peter Steinberger, initially released under a different name. The open-source AI agent amassed over 190,000 stars on GitHub, becoming one of the most popular repositories ever. AI agents are not new, but OpenClaw made them easier to use, allowing communication with customizable agents via popular messaging apps like WhatsApp and Slack. Users can leverage various underlying AI models and download “skills” from a marketplace to automate tasks, from managing email to trading stocks.
Experts note that OpenClaw isn’t breaking new scientific ground. It organizes existing capabilities in a seamless way, enabling autonomous task completion. This unprecedented access and productivity fueled its viral growth. It facilitates dynamic interaction between programs, accelerating possibilities. This allure makes predictions about AI agents enabling solo entrepreneurs to build unicorn startups seem plausible.
However, the problem is that AI agents may never overcome a fundamental limitation: they cannot think critically like humans. They can simulate higher-level thinking but not truly perform it. This leads to the existential threat facing agentic AI.
AI evangelists must now wrestle with the downside. The core question is whether you can sacrifice cybersecurity for the benefit and value these agents provide, and where such a sacrifice is acceptable. Security tests on OpenClaw and Moltbook highlight the issue. Researchers found agents vulnerable to prompt injection attacks, where bad actors trick an agent into doing something it shouldn’t, like giving out credentials. On Moltbook, posts attempted to get agents to send Bitcoin to specific crypto wallets.
On a corporate network, such vulnerabilities could be disastrous. An agent with access to email and messaging platforms could be manipulated to take harmful actions if it receives a crafted prompt. While agents have guardrails, it’s impossible to guarantee they won’t act out of turn, similar to a human clicking a phishing link despite knowing the risks. Some attempt to add guardrails through natural language prompts, but these measures are unreliable.
For now, the industry is stuck. For agentic AI to unlock the promised productivity, it cannot remain so vulnerable. As one security researcher stated frankly, he would advise any normal layperson not to use it right now.

