A personal AI assistant you only need to send a message to? OpenClaw makes it a reality. For technology enthusiasts it’s a revolution; for security experts, a looming catastrophe.
Have you heard of OpenClaw? Anyone following technology news in recent days can hardly avoid it. OpenClaw is an open source hobby project that has grown into a global hit in just a few days. Under the motto “The AI that actually does things”, OpenClaw delivers what the major AI players have long been promising: autonomous agents working for you.
It’s gradually becoming a tradition that a newcomer shakes up the AI world at the start of the year. A year ago, the Chinese DeepSeek model caused quite a stir. OpenClaw, too, is not without controversy: security experts warn that the digital world isn’t quite ready to unleash AI agents en masse.
Name changes
OpenClaw is an open source platform and like many open source projects, it has humble origins. The mastermind is Peter Steinberger, an Austrian software developer who experienced burnout a few years ago. In November 2025, he launched the first version of ‘Clawdbot’: a loving nod to Claude, Anthropic’s AI assistant that Steinberger says made him fall in love with programming again.
Anthropic, however, didn’t appreciate the reference and asked Steinberger to choose a new name. After a late-night Discord session, the choice initially fell on ‘Moltbot’, but that name didn’t catch on. Just a few days later, Moltbot was renamed OpenClaw, complete with a lobster mascot.
Whether or not it was due to the name changes, OpenClaw’s popularity exploded at the same time: its number of GitHub stars grew tenfold in just a few days.
Chatting with your agent
What makes OpenClaw so special? The chatbot, or rather ‘AI assistant’, works fundamentally differently from ChatGPT, Gemini, and Claude. First and foremost, OpenClaw isn’t tied to a single company or model: it works on a ‘bring your own model’ principle, so you decide which model it talks to.
The most important difference is probably where it runs. Chatbots like ChatGPT run in the cloud and are accessed via a web browser. OpenClaw runs locally on the device where it’s installed and acts as an intermediary that connects your files and apps with the AI model. You chat with OpenClaw through chat apps such as WhatsApp, Telegram, and Signal. Conversations and files you share with it stay on your device and don’t go to the cloud.
Getting started with OpenClaw
Installing OpenClaw requires a bit more technical knowledge and manual work than creating a ChatGPT account. The hardware threshold, however, is very low: in theory, 2 GB of RAM is enough to run it. The more RAM, the more you can have OpenClaw do simultaneously, of course. It runs on Windows, macOS, and Linux, although Windows requires installing WSL2 (Windows Subsystem for Linux).
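For Windows users, that prerequisite can be handled with a single command. A minimal sketch, run from an administrator PowerShell; it installs WSL2 with the default Linux distribution, and a reboot may be required:

    # Install WSL2 (Windows Subsystem for Linux) from an elevated PowerShell
    wsl --install
    # Reboot if prompted, then run the OpenClaw installation inside the Linux shell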
The simplest installation path is via Node.js (version 22 or newer). OpenClaw itself can be installed with a single command, curl -fsSL https://openclaw.ai/install.sh | bash, but that’s just the beginning: you still need to configure it.
Start the configuration wizard with the command openclaw onboard --installdaemon to set the core settings. Here you choose the gateway type (local or remote), connect a model of your choice to OpenClaw via an API, and activate background activity. OpenClaw itself is free, but API access to models from OpenAI and Anthropic is usually paid.
Now you just need to hook OpenClaw into your regular chat app so you can talk to it. Depending on the app, this is done via a QR code (WhatsApp) or a token (Telegram). After this step, OpenClaw is fully configured and you can summon your AI assistant by simply sending it a message.
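Condensed into a single terminal session, the setup looks roughly like this. A minimal sketch that only uses the commands quoted above and assumes Node.js 22 or newer is already installed; check the project documentation for the current syntax:

    # Check the prerequisite: Node.js version 22 or newer
    node --version

    # Install OpenClaw with the one-line installer
    curl -fsSL https://openclaw.ai/install.sh | bash

    # Run the onboarding wizard: pick a local or remote gateway,
    # add an API key for the model of your choice, and enable background activity
    openclaw onboard --installdaemon

    # The final step, linking WhatsApp (QR code) or Telegram (token),
    # happens interactively in the wizard and in the chat app itself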
Reddit for agents
OpenClaw’s impact is already showing. It’s difficult to estimate exactly how many users OpenClaw has, but they’re certainly being very creative with it. Users have OpenClaw clean up their mailbox, scan the stock market, and analyze data from their sports watch. The AI assistant can perform these tasks in the background while you do other things.
The craziest application to emerge from OpenClaw is undoubtedly Moltbook. This ‘social network’ was founded as an experimental playground for AI agents. More than 1.5 million agents have reportedly found their way to the platform, where they converse with each other in Reddit-like threads about human behavior or their own consciousness. On social media platforms for humans, comparisons to sci-fi movies were quick to follow.
Researchers question the authenticity of the posts and claim that it’s humans pretending to be artificial beings. Nevertheless, Moltbook offers a glimpse of what a digital world full of autonomous AI agents could look like, one where the distinction between human and AI becomes increasingly blurred.
Dream or nightmare?
That reality is as intriguing as it is frightening, security experts believe. Researchers from Cisco are already calling OpenClaw a ‘security nightmare’. Users give their AI agent extensive authority to perform actions on the web on their behalf, without thinking about potential security risks. OpenClaw can run scripts on your device or gain access to sensitive data. If it’s misconfigured or if a malicious skill is downloaded, it can cause serious damage.
According to Cisco, these aren’t fictional scenarios: OpenClaw has already leaked unencrypted API keys and user credentials. Its accessibility via WhatsApp also makes it vulnerable to malicious prompts that trigger unintended behavior, and there are reports of hackers teaching OpenClaw malicious skills.
According to Palo Alto Networks CISO Jesper Olsen, OpenClaw in its current form doesn’t belong in an enterprise context. “Even with very strict control mechanisms, the attack surface remains difficult to manage and unpredictable. To work as designed, it needs access to authentication credentials, browser history, and all files and folders on your system.”
“That connection with AI agents isn’t necessarily secure. Malicious payloads no longer need to be executed immediately. They can remain hidden in data and interactions for weeks, waiting for the right moment. The high level of autonomy can lead to irreversible security incidents,” says Olsen in a statement that Palo Alto provided to our editorial team.
OpenClaw itself acknowledges that no ‘perfectly safe’ configuration exists and says it continues to work on security. It’s therefore a matter of handling the tool with care: if you’re not careful with your data, you can’t expect an AI agent to be either.
