OpenClaw - My personal AI Assistant
March 2026 — A real-world report after 7 weeks of daily use
What is OpenClaw?
OpenClaw is an open-source AI agent framework for autonomous task execution. Unlike chatbots that just answer questions, OpenClaw acts as a persistent personal assistant — it can manage to-do lists, write and debug code, browse the web, control smart home devices, run scheduled tasks, and handle complex multi-step workflows largely on its own.
It runs as a daemon on a local machine, maintains memory across sessions via text files, and connects to messaging platforms (Telegram, Discord, etc.) for real-time interaction.
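To make the memory idea concrete, here is a minimal sketch of a daily-log writer. The file layout and names are my assumptions for illustration, not OpenClaw's actual code:

```python
from datetime import date, datetime
from pathlib import Path

def append_daily_log(entry: str, memory_dir: Path) -> Path:
    """Append a timestamped bullet to today's Markdown log file."""
    memory_dir.mkdir(parents=True, exist_ok=True)
    log_file = memory_dir / f"{date.today().isoformat()}.md"
    stamp = datetime.now().strftime("%H:%M")
    with log_file.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp} {entry}\n")
    return log_file

# Usage (hypothetical directory):
# append_daily_log("Solar report generated", Path.home() / "memory")
```

Because everything is plain Markdown, the same files stay readable by both the agent and the user.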
Hardware Setup
- Server: Raspberry Pi 4 (ARM64, 4GB RAM) — runs 24/7, always on
- Optional Local AI: Apple Silicon Mac or Mini PC with NVIDIA GPU (GPU-accelerated inference via LM Studio)
- Storage: Network-attached storage (NAS) for backups
- Network: LAN with WireGuard VPN for remote access
- Reverse Proxy: nginx on the Raspberry Pi
Core Features (Out of the Box)
- Memory System: Long-term context in Markdown files — daily logs + curated long-term memory
- Skills: Modular instruction sets loaded on demand (Home Assistant, Weather, PDFs, etc.)
- Cron Jobs: Scheduled tasks, reminders, periodic health checks
- Subagents: Parallel child processes for complex or time-consuming work
- Browser Control: Headless Chrome automation
- File Operations: Full read/write/organize with workspace context
- Multi-Model: Works with local models (LM Studio, Ollama) and cloud providers (Anthropic, OpenAI, Mistral, etc.)
- Tool Plugins: Home Assistant, InfluxDB, Email, Telegram, and more
Security — What Comes Out of the Box
Security is worth thinking about before handing an AI agent access to your infrastructure. Here's the reality with this setup:
Access Model
- Full access on the Pi: OpenClaw runs as a user-level daemon on the Raspberry Pi. It has full read/write access to the user account — home directory, configs, code, projects. No separate sandbox by default.
- Read-only access to other machines: SSH keys for other servers in the network are configured with read-only restrictions. The agent can query data but can't modify files or run commands remotely by default.
- No sensitive credentials in plain text: Passwords and API keys live in a dedicated password manager. (work in progress)
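For context: OpenSSH has no single "read-only" key flag. One common way to approximate read-only access is a restricted key with a forced command in the remote `authorized_keys`. The entry below is illustrative (the path and key are made up), not my exact setup:

```
# ~/.ssh/authorized_keys on the remote server:
# "restrict" disables forwarding and PTY allocation; the forced command
# limits this key to a single read-only action.
restrict,command="cat /var/log/metrics/latest.json" ssh-ed25519 AAAA... openclaw-readonly
```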
The Story: What It Solved — and What It Didn't
Tasks That Worked Well
After 7 weeks of daily use, here's what actually worked:
- ☑️ Solar Tracking: Automated weekly/monthly PV yield reports. Queries InfluxDB time-series data, calculates totals, updates an ODS spreadsheet. Runs on a schedule without any prompting.
- ☑️ Stock Alerts: Price peak/drop notifications pushed to Telegram. Had an alert fire within minutes of a significant market move — useful.
- ☑️ Kanban: We're using a self-made Kanban board for task tracking. A cron job checks hourly for tasks to work on.
- ☑️ Minesweeper game: Built a simple, effective, and nice-looking Minesweeper game in one go!
- ☑️? Game in Godot: Built a dungeon crawler that was playable, but only after giving the agent access to a Godot compiler! A feedback loop, i.e. a way to verify the result (at least at compile level), is a MUST!
- ☑️ Code Contributions: Implemented a chat history feature (↑-arrow to recall last user message). The fix got committed and contributed back to the upstream project.
- ☑️ Markdown Editor: Python-based with live preview, deployed via nginx - useful to have a look into OpenClaw files as well.
- ☑️ Home Assistant Integration: Smart plug and light control via local API.
- ☑️ Home Assistant issue debugging: After updating an HA integration, I could no longer access the USB stick used for Homematic communication. OpenClaw flawlessly SSH'ed into the machine and grepped the relevant info from the Docker container to create a GitHub issue. I was impressed!
- ☑️ File Sync Server: FastAPI-based file sharing for Obsidian - work in progress.
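To make the solar-tracking item above concrete, here is a sketch of the aggregation step. The Flux query string and the bucket/measurement/field names are illustrative assumptions, not my actual configuration:

```python
# Flux query the agent might send to InfluxDB (names are assumptions):
WEEKLY_YIELD_QUERY = """
from(bucket: "solar")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "pv" and r._field == "yield_kwh")
  |> aggregateWindow(every: 1d, fn: sum)
"""

def summarize(daily_kwh: list[float]) -> dict[str, float]:
    """Aggregate one week of daily yields into the figures for the ODS sheet."""
    total = sum(daily_kwh)
    return {
        "total_kwh": round(total, 1),
        "avg_kwh_per_day": round(total / len(daily_kwh), 1),
        "best_day_kwh": max(daily_kwh),
    }

print(summarize([12.4, 9.8, 15.1, 3.2, 11.0, 14.7, 8.6]))
```

The point is less the math than the fact that this runs unattended on a schedule and writes its result into the spreadsheet.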
Tasks That Failed or Struggled
- Context Loss on Long Tasks: Very long-running tasks sometimes lose context or get confused about what was already done. Breaking them into smaller steps helps but isn't always obvious. This is frustrating! Recent tests with subagents reduce the issue noticeably, but don't really solve it.
- Geocaching Solver: Automated browser-based puzzle solving with Certitude verification. It was convinced it had found the solution - but it hadn't. Work in progress...
- Upstream Merge Conflicts: Running a dual-remote strategy (upstream + private fork) with a local production branch works, but merge workflow is complex. Documentation helps but it's not seamless.
- Web UI Customizations: Building on top of OpenClaw's web UI works but requires understanding the internal structure. Not always intuitive. Recent OpenClaw updates made most of this obsolete.
- Game Platform: Built a 6-game multiplayer platform (remotely, via Telegram only) from scratch. WebSockets, lobbies, share links - the whole thing. Deployed and... many bugs included. Most of the games were not playable without manual testing and fixing. Here, an automated feedback loop is really hard to implement.
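The feedback loop mentioned above (Godot compile checks, testing the game platform) can start very simply: run a build or check command and hand its output back to the agent. A minimal sketch; the file name is hypothetical:

```python
import subprocess
import sys

def run_with_feedback(cmd: list[str]) -> tuple[bool, str]:
    """Run a build/check command; return (ok, combined output) so that
    failures can be fed back into the agent's next iteration."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

# Example: syntax-check a file the agent just generated (hypothetical name)
ok, output = run_with_feedback(
    [sys.executable, "-m", "py_compile", "generated_game.py"])
if not ok:
    print("Feeding errors back to the model:", output)
```

For runtime bugs (broken lobbies, unplayable games) this is much harder, because "does it play well?" has no simple exit code.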
Local LLMs vs Claude vs Ollama — Honest Comparison
Here's the reality after testing multiple setups:
Cloud Models (Claude, Mistral, Kimi)
- Claude (Anthropic): Excellent reasoning, great for complex code and writing. Gets expensive at scale. Opus credits (Pro account) ran out twice — had to switch to fallbacks.
- Kimi K2.5-free: Good quality, free. Worked fine as a daily driver.
- Minimax m2.5 / m2.7: Good quality, cheap on Ollama cloud, and creative in game development! Has issues when writing code, but fixes them quickly when feedback is available. My current default.
Local Models (LM Studio / Ollama)
- Performance: Running locally means no API costs, no rate limits, full privacy. Tested with a Strix Halo mini PC and a MacBook Pro M4 Pro.
- Quality Trade-offs: Local models (Qwen 8B, Mistral Small, GLM) are noticeably weaker on complex reasoning tasks. They work for routine stuff but fall short on extended coding projects.
- Image Generation: TODO
Full comparison coming later!
Current Setup (March 2026)
- Default: Minimax M2.7 - fast, cheap, and quite good for routine tasks. Good for coding as well, when feedback is provided or built in :)
- Coding: Nothing tops Claude Opus. It's impressive what you can do with it. The quality is superb - the price is enormous!
- Best of both worlds: Claude Sonnet handles most coding tasks almost as well as Opus. I'll use it for complex tasks.
- Local Fallback: Mistral Small, GLM, Qwen3 for when cloud is too expensive
- Recommended: Sign up for a free Ollama account and use Minimax m2.5 - you can already do a lot with this model! Consider topping up to a Pro account if you code a lot.
Verdict
Cloud models are higher quality but cost money and create dependency. Local models are free and private but require capable hardware and still lag on complex reasoning. The hybrid approach - local for routine, cloud for hard problems - could work as well; the new M5-generation Macs look promising for this direction ;)
Use an Ollama account for your OpenClaw - the free tier already lets you experience OpenClaw every day :)
What Makes It Interesting
OpenClaw demonstrates that personal AI assistants can be more than Q&A machines. It wakes itself up, monitors systems, makes decisions, and acts - all with full transparency (it's just files and code). The open architecture means it's inspectable (e.g. by AI), extensible, and under the user's full control. No black box. Be cautious with Skills from the internet, though - consider letting OpenClaw write your own.
Summary
After 7 weeks: OpenClaw is a working, useful personal AI setup. It handles real tasks — solar reports, game platforms, automations, alerts. It has limits (RAM, local model quality, complex merges) but the core idea works.
Working with Minimax m2.7 on Ollama cloud (my current setup) works well. We have implemented a Kanban board that OpenClaw checks every hour to see if there's something to work on. We added that recently - this is where it starts to be fun!