
A Coding Guide to Exploring nanobot’s Full Agent Pipeline, from Wiring Up Tools and Memory to Skills, Subagents, and Cron Scheduling


In this tutorial, we take a deep dive into nanobot, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than merely installing it and running it out of the box, we pop open the hood and manually recreate each of its core subsystems (the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling) so we understand exactly how they work. We wire everything up with OpenAI's gpt-4o-mini as our LLM provider, enter our API key securely via the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. By the end, we don't just know how to use nanobot; we understand how to extend it with custom tools, skills, and our own agent architectures.

import sys
import os
import subprocess


def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'═' * width}")
    print(f"  {emoji}  {title}")
    print(f"{'═' * width}\n")


def info(msg):
    print(f"  ℹ  {msg}")


def success(msg):
    print(f"  ✅ {msg}")


def code_block(code):
    print("  ┌─────────────────────────────────────────────────")
    for line in code.strip().split("\n"):
        print(f"  │ {line}")
    print("  └─────────────────────────────────────────────────")


section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")


info("Installing nanobot-ai from PyPI (latest stable)...")
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")


import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f"  📌 nanobot-ai version: {nanobot_version}")


section("STEP 2 · Secure OpenAI API Key Input", "🔑")


info("Your API key will NOT be printed or stored in notebook output.")
info("It is held only in memory for this session.\n")


try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in Colab → 🔑 Secrets panel on the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")


os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY


import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated — connection successful!")
except Exception as e:
    print(f"  ❌ API key validation failed: {e}")
    print("     Please restart and enter a valid key.")
    sys.exit(1)


section("STEP 3 · Configuring nanobot for OpenAI", "⚙")


import json
from pathlib import Path


NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)


WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)


config = {
    "providers": {
        "openai": {
            "apiKey": OPENAI_API_KEY
        }
    },
    "agents": {
        "defaults": {
            "model": "openai/gpt-4o-mini",
            "maxTokens": 4096,
            "workspace": str(WORKSPACE)
        }
    },
    "tools": {
        "restrictToWorkspace": True
    }
}


config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")


agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🐈, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step by step.\n"
)


soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)


user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)


memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories saved yet._\n")


success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f"     📄 {f.relative_to(NANOBOT_HOME)}")


section("STEP 4 · nanobot Architecture Deep Dive", "🏗")


info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:


 ┌──────────────────────────────────────────────────────────┐
 │                    USER INTERFACES                       │
 │         CLI  ·  Telegram  ·  WhatsApp  ·  Discord        │
 └──────────────────┬───────────────────────────────────────┘
                    │  InboundMessage / OutboundMessage
 ┌──────────────────▼───────────────────────────────────────┐
 │                    MESSAGE BUS                           │
 │          publish_inbound() / publish_outbound()          │
 └──────────────────┬───────────────────────────────────────┘
                    │
 ┌──────────────────▼───────────────────────────────────────┐
 │                  AGENT LOOP (loop.py)                    │
 │    ┌─────────┐  ┌──────────┐  ┌────────────────────┐    │
 │    │ Context  │→ │   LLM    │→ │  Tool Execution    │    │
 │    │ Builder  │  │  Call    │  │  (if tool_calls)   │    │
 │    └─────────┘  └──────────┘  └────────┬───────────┘    │
 │         ▲                              │  loop back     │
 │         │          ◄───────────────────┘  until done    │
 │    ┌────┴────┐  ┌──────────┐  ┌────────────────────┐    │
 │    │ Memory  │  │  Skills  │  │   Subagent Mgr     │    │
 │    │ Store   │  │  Loader  │  │   (spawn tasks)    │    │
 │    └─────────┘  └──────────┘  └────────────────────┘    │
 └──────────────────────────────────────────────────────────┘
                    │
 ┌──────────────────▼───────────────────────────────────────┐
 │               LLM PROVIDER LAYER                         │
 │     OpenAI · Anthropic · OpenRouter · DeepSeek · ...     │
 └───────────────────────────────────────────────────────────┘


 The Agent Loop iterates up to 40 times (configurable):
   1. ContextBuilder assembles system prompt + memory + skills + history
   2. LLM is called with tool definitions
   3. If the response has tool_calls → execute tools, append results, loop
   4. If the response is plain text → return it as the final answer
""")

We set up the full foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace, create the core bootstrap files (AGENTS.md, SOUL.md, USER.md, and MEMORY.md), and study the high-level architecture so we understand how the framework is organized before moving into implementation.

section("STEP 5 · The Agent Loop — Core Concept in Action", "🔄")


info("We'll manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")


import json as _json
import datetime


TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression to evaluate, e.g. '2**10 + 42'"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative file path inside the workspace"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Relative file path"},
                    "content": {"type": "string", "description": "Content to write"}
                },
                "required": ["path", "content"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Save a fact to the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fact": {"type": "string", "description": "The fact to remember"}
                },
                "required": ["fact"]
            }
        }
    }
]


def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call — mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            # Restricted eval: no builtins beyond a small whitelist.
            # Still not safe for untrusted input — demo use only.
            result = eval(expr, {"__builtins__": {}}, {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"

    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found — {arguments.get('path')}"

    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"

    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        current = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(current + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"

    return f"Unknown tool: {name}"




def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.

    The loop:
      1. Build context (system prompt + bootstrap files + memory)
      2. Call LLM with tools
      3. If tool_calls → execute → append results → loop
      4. If text response → return final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())

    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")

    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]

    if verbose:
        print(f"  📨 User: {user_message}")
        print(f"  🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"  ── Iteration {iteration}/{max_iterations} ──")

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )

        choice = response.choices[0]
        message = choice.message

        if message.tool_calls:
            if verbose:
                print(f"  🔧 LLM requested {len(message.tool_calls)} tool call(s):")

            messages.append(message.model_dump())

            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}

                if verbose:
                    print(f"     → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")

                result = execute_tool(fname, args)

                if verbose:
                    print(f"     ← {result[:100]}{'...' if len(result) > 100 else ''}")

                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })

            if verbose:
                print()

        else:
            final = message.content or ""
            if verbose:
                print(f"  💬 Agent: {final}\n")
            return final

    return "⚠ Max iterations reached with no final response."




print("─" * 60)
print("  DEMO 1: Time-aware calculation with tool chaining")
print("─" * 60)
result1 = agent_loop(
    "What is the current time? Also, calculate 2^20 + 42 for me."
)


print("─" * 60)
print("  DEMO 2: File creation + memory storage")
print("─" * 60)
result2 = agent_loop(
    "Write a haiku about AI agents to a file called 'haiku.txt'. "
    "Then remember that I enjoy poetry about technology."
)

We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples that involve time lookups, calculations, file writing, and memory saving, so we can see the loop operate exactly like the internal nanobot flow.

section("STEP 6 · Memory System — Persistent Agent Memory", "🧠")


info("""nanobot's memory system (memory.py) uses two storage mechanisms:


 1. MEMORY.md  — Long-term facts (always loaded into context)
 2. YYYY-MM-DD.md — Daily journal entries (loaded for recent days)


 Memory consolidation runs periodically to summarize and compress
 old entries, keeping the context window manageable.
""")
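The consolidation step itself is not implemented in this notebook. As a rough, hypothetical sketch of the idea (the `summarize` stub below stands in for an LLM summarization call, and `consolidate_memory` is our own illustrative helper, not nanobot's API), old daily journals could be folded into MEMORY.md like this:

```python
from pathlib import Path
import tempfile


def consolidate_memory(memory_dir: Path, summarize=lambda text: text[:60]):
    """Fold daily journal files into MEMORY.md, then delete them.

    `summarize` stands in for an LLM summarization call; here it just
    truncates. Returns the number of journals consolidated.
    """
    mem_file = memory_dir / "MEMORY.md"
    daily_files = sorted(p for p in memory_dir.glob("*.md") if p.name != "MEMORY.md")
    if not daily_files:
        return 0
    combined = "\n".join(p.read_text() for p in daily_files)
    summary = summarize(combined)
    mem_file.write_text(mem_file.read_text() + f"\n- Consolidated: {summary}\n")
    for p in daily_files:
        p.unlink()  # old journals are compressed away
    return len(daily_files)


# Usage on a throwaway directory
tmp = Path(tempfile.mkdtemp())
(tmp / "MEMORY.md").write_text("# Long-term Memory\n")
(tmp / "2025-01-01.md").write_text("- Ran the tutorial\n")
n = consolidate_memory(tmp)
print(n)  # → 1
```

The key design point is that long-term memory stays small and always-loadable, while the raw daily detail is disposable once summarized.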


mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("  📂 Current MEMORY.md contents:")
print("  ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f"  │ {line}")
print("  └─────────────────────────────────────────────\n")


today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log — {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")


print("\n  📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f"     {'📄' if item.suffix == '.md' else '📝'} {rel} ({size} bytes)")


section("STEP 7 · Skills System — Extending Agent Capabilities", "🎯")


info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:
 - A name and description (for the LLM to decide when to use it)
 - Instructions the LLM follows when the skill is activated
 - Some skills are 'always loaded'; others are loaded on demand


Let's create a custom skill and see how the agent uses it.
""")
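nanobot's real loader is more involved, but a minimal, hypothetical parser for the skill-file layout used below (a `# Title` heading followed by `## Description`, `## Instructions`, and `## Always Available` sections) might look like this; `parse_skill` is our own sketch, not nanobot's actual function:

```python
def parse_skill(markdown: str) -> dict:
    """Parse a skill file into name, sections, and an always-available flag.

    Minimal sketch for the layout used in this tutorial.
    """
    name, sections, current = "", {}, None
    for line in markdown.splitlines():
        if line.startswith("# ") and not name:
            name = line[2:].strip()          # top-level title → skill name
        elif line.startswith("## "):
            current = line[3:].strip()       # start a new section
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    text = {k: "\n".join(v).strip() for k, v in sections.items()}
    return {
        "name": name,
        "description": text.get("Description", ""),
        "instructions": text.get("Instructions", ""),
        "always": text.get("Always Available", "false").lower() == "true",
    }


# Usage
demo = "# Demo Skill\n\n## Description\nSay hello.\n\n## Always Available\ntrue\n"
skill = parse_skill(demo)
print(skill["name"], skill["always"])  # → Demo Skill True
```

Parsing skills out of plain Markdown is what keeps them editable by hand and readable by the LLM at the same time.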


skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)


data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill


## Description
Analyze data, compute statistics, and present insights from numbers.


## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions


## Always Available
false
""")


review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill


## Description
Review code for bugs, security issues, and best practices.


## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale


## Always Available
true
""")


success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f"     🎯 {f.name}")


print("\n  🧪 Testing skill-aware agent interaction:")
print("  " + "─" * 56)


skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"


result3 = agent_loop(
    "Review this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)

We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after earlier interactions. We then extend the agent with a skills system by creating markdown-based skill files that describe specialized behaviors such as data analysis and code review. Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be guided through modular capability descriptions.

section("STEP 8 · Custom Tool Creation — Extending the Agent", "🔧")


info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:
 - A name and description
 - A JSON Schema for parameters
 - An execute() method


Let's create custom tools and wire them into our agent loop.
""")
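As a rough sketch of that interface (class and method names here are illustrative, not nanobot's actual code), a registry pairing each JSON schema with an execute callable could look like this:

```python
class ToolRegistry:
    """Minimal registry sketch: each tool is a schema plus an execute callable."""

    def __init__(self):
        self._tools = {}

    def register(self, name: str, description: str, parameters: dict, execute):
        self._tools[name] = {
            "schema": {
                "type": "function",
                "function": {"name": name, "description": description,
                             "parameters": parameters},
            },
            "execute": execute,
        }

    def schemas(self) -> list:
        # What would be passed as `tools=` to the chat completions API
        return [t["schema"] for t in self._tools.values()]

    def execute(self, name: str, arguments: dict) -> str:
        if name not in self._tools:
            return f"Unknown tool: {name}"
        return self._tools[name]["execute"](arguments)


# Usage
registry = ToolRegistry()
registry.register(
    "echo", "Echo a message back.",
    {"type": "object", "properties": {"msg": {"type": "string"}}, "required": ["msg"]},
    lambda args: args.get("msg", ""),
)
print(registry.execute("echo", {"msg": "hi"}))  # → hi
```

The dictionary-of-tools design is what lets a subagent receive a stripped-down registry (for example, one without a spawn tool) without any other code changes.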


import random


CUSTOM_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll one or more dice with a given number of sides.",
            "parameters": {
                "type": "object",
                "properties": {
                    "num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
                    "sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
                },
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "text_stats",
            "description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to analyze"}
                },
                "required": ["text"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "generate_password",
            "description": "Generate a random secure password.",
            "parameters": {
                "type": "object",
                "properties": {
                    "length": {"type": "integer", "description": "Password length", "default": 16}
                },
                "required": []
            }
        }
    }
]


_original_execute = execute_tool


def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"

    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)  # ~200 words per minute
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })

    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"

    return _original_execute(name, arguments)


execute_tool = execute_tool_extended


ALL_TOOLS = TOOLS + CUSTOM_TOOLS


def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]

    if verbose:
        print(f"  📨 User: {user_message}")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f"  ── Iteration {iteration}/{max_iterations} ──")

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message

        if message.tool_calls:
            if verbose:
                print(f"  🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"     → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"     ← {result[:120]}{'...' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f"  💬 Agent: {final}\n")
            return final

    return "⚠ Max iterations reached."




print("─" * 60)
print("  DEMO 3: Custom tools in action")
print("─" * 60)
result4 = agent_loop_v2(
    "Roll 3 six-sided dice for me, then generate a 20-character password, "
    "and finally analyze the text stats of this sentence: "
)


section("STEP 9 · Multi-Turn Conversation — Session Management", "💬")


info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored
in JSON files and loaded into context for each new message.


Let's simulate a multi-turn conversation with persistent state.
""")

We expand the agent's capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.

class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())




session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"


def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)

    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)

    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history

    if verbose:
        print(f"  👤 You: {user_message}")
        print(f"     (conversation history: {len(history)} messages)")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""

    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)

    if verbose:
        print(f"  🐈 nanobot: {reply}\n")
    return reply


print("─" * 60)
print("  DEMO 4: Multi-turn conversation with memory")
print("─" * 60)


chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")


success("Session continued with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f"  📄 Session file: {session_file.name} ({len(session_data)} messages)")


section("STEP 10 · Subagent Spawning — Background Task Delegation", "🚀")


info("""nanobot's SubagentManager (agent/subagent.py) lets the main agent
delegate tasks to independent background workers. Each subagent:
 - Gets its own tool registry (no SpawnTool, to prevent recursion)
 - Runs up to 15 iterations independently
 - Reports results back via the MessageBus


Let's simulate this pattern with concurrent tasks.
""")


import asyncio
import uuid




async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f"  🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
             "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )

    result = response.choices[0].message.content or ""
    if verbose:
        print(f"  ✅ Subagent [{task_id[:8]}] done: {result[:80]}...")
    return {"task_id": task_id, "goal": goal, "result": result}




async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently — mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))

    print(f"\n  🚀 Spawning {len(tasks)} subagents concurrently...\n")
    results = await asyncio.gather(*tasks)
    return results


goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]


attempt:
   loop = asyncio.get_running_loop()
   import nest_asyncio
   nest_asyncio.apply()
   subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(targets))
besides RuntimeError:
   subagent_results = asyncio.run(spawn_subagents(targets))
besides ModuleNotDiscoveredError:
   print("  ℹ  Running subagents sequentially (set up nest_asyncio for async)...n")
   subagent_results = []
   for purpose in targets:
       task_id = str(uuid.uuid4())
       response = shopper.chat.completions.create(
           mannequin="gpt-4o-mini",
           messages=[
               {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
               {"role": "user", "content": goal}
           ],
           max_tokens=256
       )
       r = response.selections[0].message.content material or ""
       print(f"  ✅ Subagent [{task_id[:8]}] carried out: {r[:80]}...")
       subagent_results.append({"task_id": task_id, "purpose": purpose, "outcome": r})


print(f"\n  📋 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
   print(f"\n  ── Result {i} ──")
   print(f"  Goal: {r['goal'][:60]}")
   print(f"  Answer: {r['result'][:200]}")

We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We use that history to maintain continuity in the chat, allowing the agent to recall details from earlier in the interaction and respond more coherently and statefully. After that, we model subagent spawning by launching concurrent background tasks that each handle a focused goal, which helps us understand how nanobot can delegate parallel work to independent agent workers.
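As a compact illustration of that session pattern, here is a minimal sketch that keeps an ordered message history and round-trips it through JSON so a conversation survives restarts. The class, field names, and file layout are our own assumptions for this sketch, not nanobot's actual on-disk session format:

```python
import json
import tempfile
from pathlib import Path


class TinySession:
    """Minimal session: ordered history plus JSON persistence (illustrative only)."""

    def __init__(self, session_id: str):
        self.session_id = session_id
        self.history: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})

    def save(self, directory: Path) -> Path:
        path = directory / f"{self.session_id}.json"
        path.write_text(json.dumps(self.history))
        return path

    @classmethod
    def load(cls, path: Path) -> "TinySession":
        session = cls(path.stem)
        session.history = json.loads(path.read_text())
        return session


tmp = Path(tempfile.mkdtemp())
s = TinySession("demo")
s.add("user", "My project is a personal AI assistant.")
s.add("assistant", "Noted — I'll remember that.")
restored = TinySession.load(s.save(tmp))
print(len(restored.history))  # 2
```

Feeding `restored.history` back into the next `messages=[...]` call is exactly what gives the agent continuity across turns.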

part("STEP 11 · Scheduled Tasks — The Cron Pattern", "⏰")


information("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an
InboundMessage and publishes it to the MessageBus.

Let's demonstrate the pattern with a simulated scheduler.
""")


from datetime import timedelta




class SimpleCronJob:
   """Mirrors nanobot's cron job structure."""
   def __init__(self, name: str, message: str, interval_seconds: int):
       self.id = str(uuid.uuid4())[:8]
       self.name = name
       self.message = message
       self.interval = interval_seconds
       self.enabled = True
       self.last_run = None
       self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)




jobs = [
   SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
   SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
   SimpleCronJob("health_check", "Run a system health check.", 3600),
]


print("  📋 Registered Cron Jobs:")
print("  ┌────────┬────────────────────┬──────────┬──────────────────────┐")
print("  │ ID     │ Name               │ Interval │ Next Run             │")
print("  ├────────┼────────────────────┼──────────┼──────────────────────┤")
for job in jobs:
   interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
   print(f"  │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print("  └────────┴────────────────────┴──────────┴──────────────────────┘")


print(f"\n  ⏰ Simulating cron trigger for '{jobs[2].name}'...")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)
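The fire-and-publish pattern described above can be sketched with nothing but the standard library: when a job fires, we wrap its message in a dict standing in for `InboundMessage` and push it onto a `queue.Queue` standing in for the `MessageBus`. The field names here are illustrative assumptions, not nanobot's actual schema:

```python
import datetime
import queue
import uuid

# Stand-in for nanobot's MessageBus: a simple thread-safe queue.
message_bus: "queue.Queue[dict]" = queue.Queue()


def fire_cron_job(job_name: str, job_message: str) -> dict:
    """Simulate a cron trigger: wrap the job's message and publish it."""
    inbound = {
        "id": str(uuid.uuid4())[:8],
        "channel": "cron",          # marks the message as scheduler-originated
        "sender": job_name,
        "content": job_message,
        "timestamp": datetime.datetime.now().isoformat(),
    }
    message_bus.put(inbound)        # "publish" onto the bus
    return inbound


msg = fire_cron_job("health_check", "Run a system health check.")
print(f"Published message {msg['id']} from '{msg['sender']}'")
```

On the consuming side, the agent loop would simply `message_bus.get()` and treat the payload like any other inbound user message, which is why a cron fire and a chat message flow through the same pipeline.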


part("STEP 12 · Full Agent Pipeline — End-to-End Demo", "🎬")


information("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")


print("─" * 60)
print("  DEMO 5: Complex multi-step research task")
print("─" * 60)


complex_result = agent_loop_v2(
   "I want you to help me with a small project:\n"
   "1. First, check the current time\n"
   "2. Write a short project plan to 'project_plan.txt' about building "
   "a personal AI assistant (3-4 bullet points)\n"
   "3. Remember that my current project is 'building a personal AI assistant'\n"
   "4. Read back the project plan file to confirm it was saved correctly\n"
   "Then summarize everything you did.",
   max_iterations=15
)


part("STEP 13 · Final Workspace Summary", "📊")


print("  📁 Complete workspace state after tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
   if item.is_file():
       rel = item.relative_to(WORKSPACE)
       size = item.stat().st_size
       total_files += 1
       total_bytes += size
       icon = {"md": "📄", "txt": "📝", "json": "📋"}.get(item.suffix.lstrip("."), "📎")
       print(f"     {icon} {rel} ({size:,} bytes)")


print(f"\n  ── Summary ──")
print(f"  Total files: {total_files}")
print(f"  Total size:  {total_bytes:,} bytes")
print(f"  Config:      {config_path}")
print(f"  Workspace:   {WORKSPACE}")


print("\n  🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print("  ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
   print(f"  │ {line}")
print("  └─────────────────────────────────────────────")


part("COMPLETE · What's Next?", "🎉")


print("""  You've explored the core internals of nanobot! Here's what to try next:

 🔹 Run the real CLI agent:
    nanobot onboard && nanobot agent

 🔹 Connect to Telegram:
    Add a bot token to config.json and run `nanobot gateway`

 🔹 Enable web search:
    Add a Brave Search API key under tools.web.search.apiKey

 🔹 Try MCP integration:
    nanobot supports Model Context Protocol servers for external tools

 🔹 Explore the source (~4K lines):
    https://github.com/HKUDS/nanobot

 🔹 Key files to read:
    • agent/loop.py    — The agent iteration loop
    • agent/context.py — Prompt assembly pipeline
    • agent/memory.py  — Persistent memory system
    • agent/tools/     — Built-in tool implementations
    • agent/subagent.py — Background task delegation
""")
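For the Telegram and web-search bullets above, a config.json might look roughly like the sketch below. Only the `tools.web.search.apiKey` path is named in the text; the surrounding structure and the Telegram key names are assumptions for illustration, so check nanobot's own config documentation before relying on them:

```python
import json

# Hypothetical config layout — only tools.web.search.apiKey is confirmed above.
config = {
    "tools": {
        "web": {
            "search": {"apiKey": "YOUR_BRAVE_SEARCH_KEY"}
        }
    },
    "channels": {
        "telegram": {"token": "YOUR_BOT_TOKEN"}  # assumed key name
    },
}

# Writing this dict with json.dumps produces a config.json-style file.
print(json.dumps(config, indent=2))
```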

We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the triggering of an automated agent task. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together on a realistic task. At the end, we inspect the final workspace state, review the saved memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.

In conclusion, we walked through every major layer of nanobot's architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn't need hundreds of thousands of lines of code; the patterns we implemented here, context assembly, tool dispatch, memory consolidation, and background task delegation, are the same patterns that power far larger systems, just stripped down to their essence. We now have a working mental model of agentic AI internals and a codebase small enough to read in a single sitting, which makes nanobot an ideal choice for anyone looking to build, customize, or research AI agents from the ground up.



The post A Coding Guide to Exploring nanobot's Full Agent Pipeline, from Wiring Up Tools and Memory to Skills, Subagents, and Cron Scheduling appeared first on MarkTechPost.
