
nanobot

An ultra-lightweight AI agent framework by HKUDS with ~4,000 lines of pure Python, supporting multiple chat platforms, LLM providers, and MCP—a perfect alternative to bloated OpenClaw.
22.5k · Python · MIT License
#ai-agent #multi-platform #mcp #python #lightweight #personal-assistant #alternative-to-openclaw #alternative-to-clawdbot

What is it?

nanobot was born out of a fundamental rethinking of complex mainstream AI agent systems such as OpenClaw. Developed by the Data Intelligence Lab at the University of Hong Kong (HKUDS), it redefines what a personal AI assistant can be through a minimalist engineering philosophy. Within roughly 4,000 lines of core code, it packs a complete and powerful feature set: access to more than ten mainstream chat channels, including WeChat, Discord, WhatsApp, QQ, and Telegram, with seamless switching between underlying large models such as OpenAI Codex, GitHub Copilot, and Anthropic. More importantly, it natively supports the Model Context Protocol (MCP), so external tools can be plugged in via standard interfaces without the infrastructure turning into a tangled mess. For developers fed up with deploying dozens of microservices and burning gigabytes of memory, this is a cyber companion that boots instantly on very cheap hardware while remaining highly readable and customizable.

Pain Points vs Innovation

  • Pain point: Mainstream omnipotent AI agents (like OpenClaw) often span hundreds of thousands of lines of code, are bloated to deploy, and consume massive memory, making them hard for solo devs to read, customize, or run on cheap devices.
    Solution: Extremely restrained architectural abstraction compresses the core agent logic to roughly 4,000 lines of Python, drastically lowering the barrier to understanding and cutting boot time to under 1 second.
  • Pain point: Most AI assistants are tied too deeply to a single platform; getting the same memory-persistent agent to serve you across Telegram, WhatsApp, and your workspace requires painfully patching together piles of glue code.
    Solution: A unified Channels layer and Bus design reduce multi-platform chat integration to plug-and-play configs and decouple the core from any single LLM provider, eliminating vendor lock-in.
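The unified Channels idea can be sketched in a few lines of Python. Everything here (class names, payload shapes) is illustrative rather than nanobot's actual API: each platform adapter normalizes its raw payload into one shared message type before the core ever sees it.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class InboundMessage:
    """The single schema the agent core consumes, regardless of platform."""
    channel: str   # e.g. "telegram", "discord"
    sender: str
    text: str


class Channel(ABC):
    name: str

    @abstractmethod
    def normalize(self, raw: dict) -> InboundMessage:
        """Translate a platform-specific payload into the shared schema."""


class TelegramChannel(Channel):
    name = "telegram"

    def normalize(self, raw: dict) -> InboundMessage:
        # Telegram-style update: {"message": {"from": {...}, "text": ...}}
        msg = raw["message"]
        return InboundMessage(self.name, msg["from"]["username"], msg["text"])


class DiscordChannel(Channel):
    name = "discord"

    def normalize(self, raw: dict) -> InboundMessage:
        # Discord-style event: {"author": {...}, "content": ...}
        return InboundMessage(self.name, raw["author"]["name"], raw["content"])


def handle(msg: InboundMessage) -> str:
    # The core logic only ever sees InboundMessage, never a raw payload.
    return f"[{msg.channel}] {msg.sender}: {msg.text}"
```

Adding a new platform then means writing one adapter, not touching the core: this is what reduces integrations to plug-and-play configs.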

Architecture Deep Dive

Event-Bus-Driven Modular Core
It ditches highly entangled monolithic designs in favor of a minimalist internal message bus. The agent's brain (the LLM loop), persistent memory modules, and external communication channels exchange standardized events on this bus with extremely loose coupling. This means that whether a message arrives from QQ or rich media is pushed to Discord, the core logic always sees the same standardized payload. The result is an incredibly low resource footprint and high stability even when multiple heterogeneous platforms are connected simultaneously.
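A minimal sketch of the bus pattern, assuming topic names like `message.in` and `message.out` (illustrative, not nanobot's internals): subscribers register per topic, and the brain and channel adapters only ever talk through published events.

```python
from collections import defaultdict
from typing import Callable


class Bus:
    """Tiny publish/subscribe hub: the only coupling between modules."""

    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subs[topic]:
            handler(event)


bus = Bus()
outbox: list[dict] = []

# The "brain": reacts to inbound events and emits outbound ones.
bus.subscribe("message.in", lambda e: bus.publish(
    "message.out", {"channel": e["channel"], "text": f"echo: {e['text']}"}))

# A channel adapter: delivers outbound events to its platform (here, a list).
bus.subscribe("message.out", outbox.append)

bus.publish("message.in", {"channel": "qq", "text": "hello"})
# outbox now holds [{"channel": "qq", "text": "echo: hello"}]
```

Neither side imports the other; swapping the brain or adding a channel only changes which handlers are subscribed.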
Painless Extension via MCP (Model Context Protocol)
Unlike frameworks that force developers to hardcode tool logic and recompile, it deeply integrates the emerging MCP standard. Simply point the config file at an independent external MCP server (one that queries a database or manipulates local files, say) and the agent dynamically discovers and masters those tools at runtime. This creates physical isolation between the 'capability layer' and the 'agent layer': you can write MCP toolsets in any language and plug them into the main framework safely and lightly.
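A hypothetical config fragment showing the idea (key names and layout are assumptions, not nanobot's documented schema): one entry spawns a local MCP server as a subprocess, another points at a remote endpoint, and the agent discovers whatever tools each one advertises.

```json
{
  "mcp_servers": {
    "local-files": {
      "command": "python",
      "args": ["file_tools_server.py"]
    },
    "db-query": {
      "url": "http://127.0.0.1:8931/mcp"
    }
  }
}
```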

Deployment Guide

1. Install the core framework dependencies directly via PyPI

bash
pip install nanobot

2. Run the onboard command to generate the default config skeleton (created in ~/.nanobot/)

bash
nanobot onboard

3. Summon your personal minimalist AI agent directly in the terminal and start interacting

bash
nanobot agent

Use Cases

  • Cross-Platform Automated Customer Support (solo creators or micro-enterprises): mount Discord, Telegram, and WeChat channels simultaneously on a single instance, achieving context-consistent customer support across all channels with incredibly low server overhead.
  • DevOps Pipeline Automation Butler (R&D team leads): write MCP services for internal enterprise systems and dock them to the agent, so deployment status can be queried safely in natural language from Slack or DingTalk via internal APIs.
  • Low-Power Hardware Companion (Raspberry Pi and edge-device enthusiasts): deploy pure local models with the framework as a home hub, running a complete agent loop smoothly on low-spec single-board computers with just hundreds of MB of RAM.

Limitations & Gotchas

  • For highly security-sensitive applications, be aware that its early versions had an authentication vulnerability in the WhatsApp Bridge's WebSocket implementation; always upgrade to v0.1.3.post7 or higher.
  • Because the codebase pursues extreme lightness, its built-in skill set and exception-fallback mechanisms are sparse; it may not match commercial-grade frameworks on highly complex, concurrent multi-agent coordination tasks.

Frequently Asked Questions

What are the core advantages of nanobot compared to OpenClaw?
The biggest advantage is its extreme lightness and minimalist design. Compared to OpenClaw, a behemoth of over 400,000 lines of code, this project uses only around 4,000 lines of pure Python to retain the most essential features of a personal assistant: multi-channel distribution, persistent memory, and tool invocation. Startup latency is measured in milliseconds and the memory footprint is typically under a hundred megabytes, making it well suited to resource-constrained deployments and to developers who want to study and fork the underlying source code.
How do I make the Agent operate files or services on my local computer?
The recommended route is integration via MCP (Model Context Protocol). Instead of modifying the bot's source code, find or write an MCP-compliant local tool server, then configure its endpoint in the config JSON. The agent can then read the OS interfaces you expose as if flipping through a manual, preserving security boundaries while keeping the main framework lean.
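The discover-then-dispatch pattern this answer describes can be sketched in plain Python, with an in-process class standing in for a real MCP transport (stdio or HTTP). All names and tools here are illustrative:

```python
class ToolServer:
    """Stands in for an external MCP server exposing local capabilities."""

    def __init__(self) -> None:
        self._tools = {
            "read_file": lambda path: open(path, encoding="utf-8").read(),
            "word_count": lambda text: len(text.split()),
        }

    def list_tools(self) -> list[str]:
        # In real MCP this is the server advertising its tool catalog.
        return sorted(self._tools)

    def call(self, name: str, *args):
        return self._tools[name](*args)


class Agent:
    """Discovers tools at runtime instead of hardcoding them."""

    def __init__(self, server: ToolServer) -> None:
        self.server = server
        self.known_tools = server.list_tools()  # dynamic discovery

    def use(self, name: str, *args):
        if name not in self.known_tools:
            raise ValueError(f"unknown tool: {name}")
        return self.server.call(name, *args)


agent = Agent(ToolServer())
print(agent.known_tools)                              # ['read_file', 'word_count']
print(agent.use("word_count", "hello from the bus"))  # 4
```

The agent never imports the tools' implementation; swapping in a server with a different catalog changes its capabilities without touching agent code, which is exactly the capability-layer/agent-layer isolation described above.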
View on GitHub

Project Metrics

Stars: 22.5k
Language: Python
License: MIT
Deploy Difficulty: Easy

Table of Contents

  1. What is it?
  2. Pain Points vs Innovation
  3. Architecture Deep Dive
  4. Deployment Guide
  5. Use Cases
  6. Limitations & Gotchas
  7. Frequently Asked Questions

Related Projects

  • OpenClaw · 25.1k · TypeScript
  • CoPaw · 1.1k · Python
  • Awesome LLM Apps · 96.4k · Python
  • claude-mem · 29.7k · TypeScript