How to Build an MCP-Style Routed AI Agent System with Dynamic Tool Exposure, Planning, Execution, and Context Injection
In this tutorial, we construct a fully functional MCP-style routed agent system from scratch, combining tool discovery, intelligent routing, structured planning, and execution into a single cohesive workflow. We begin by setting up a modular tool server that exposes capabilities such as web search, local retrieval, dataset loading, and Python execution, all defined by structured schemas. We then implement a hybrid router that uses both heuristics and LLM reasoning to dynamically decide which tools to expose for a given task, ensuring minimal yet effective capability exposure. As we progress, we design an agent that plans tool usage, executes calls safely, and synthesizes final answers by injecting context from tool outputs. By the end, we demonstrate several real-world tasks and show how MCP principles such as context injection, routing policies, and restricted tool access come together to create a scalable, interpretable, and efficient agent system.
import sys
import subprocess
import pkgutil
def ensure_packages():
    required = [
        ("openai", "openai>=1.40.0"),
        ("pandas", "pandas"),
        ("numpy", "numpy"),
        ("sklearn", "scikit-learn"),
        ("pydantic", "pydantic"),
        ("duckduckgo_search", "duckduckgo-search"),
        ("rich", "rich"),
    ]
    missing = []
    for import_name, pip_name in required:
        if pkgutil.find_loader(import_name) is None:
            missing.append(pip_name)
    if missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", "-q"] + missing)
ensure_packages()
import os
import io
import re
import json
import math
import time
import textwrap
import traceback
import contextlib
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Callable, Tuple
import numpy as np
import pandas as pd
from openai import OpenAI
from pydantic import BaseModel, Field
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from duckduckgo_search import DDGS
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.json import JSON as RichJSON
console = Console()
try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
except Exception:
    OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
if not OPENAI_API_KEY:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
client = OpenAI(api_key=OPENAI_API_KEY)
MODEL = os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")
MAX_TOOL_CALLS = 3
MAX_WEB_RESULTS = 5
TOP_K_RETRIEVAL = 3
We begin by checking and installing all required Python packages so the tutorial runs smoothly in a single environment. We then import the core libraries for data handling, retrieval, structured schemas, web search, and rich console display. We securely load the OpenAI API key, initialize the client, and define global settings for the model, tool calls, web results, and retrieval depth.
class ToolSpec(BaseModel):
    name: str
    description: str
    input_schema: Dict[str, Any]
    tags: List[str] = Field(default_factory=list)

class ToolCall(BaseModel):
    tool_name: str
    arguments: Dict[str, Any]

class RouteDecision(BaseModel):
    selected_tools: List[str]
    rationale: str
    policy_notes: List[str] = Field(default_factory=list)

class PlanOutput(BaseModel):
    requires_tools: bool
    tool_calls: List[ToolCall] = Field(default_factory=list)
    direct_answer_allowed: bool = False
    planner_note: str = ""

class ToolResult(BaseModel):
    tool_name: str
    ok: bool
    output: Any
    error: Optional[str] = None
LOCAL_DOCS = [
{
"id": "doc_001",
"title": "Model Context Protocol Basics",
"text": "Model Context Protocol standardizes how models connect to tools, resources, and prompts. A client can discover available tools from a server and invoke them using structured arguments."
},
{
"id": "doc_002",
"title": "Dynamic Capability Exposure",
"text": "Dynamic capability exposure means an agent does not always see every tool. A router can expose only the most relevant tools for a task, improving safety, reducing distraction, and lowering tool selection entropy."
},
{
"id": "doc_003",
"title": "Context Injection for Agents",
"text": "Context injection is the process of enriching the model prompt with selected tool descriptions, tool outputs, retrieved documents, prior summaries, and policy hints before the model generates a response."
},
{
"id": "doc_004",
"title": "Tool Discovery and MCP",
"text": "In MCP style systems, tool discovery usually begins with a tools listing step. Each tool includes a name, description, and input schema so the client knows how and when to call it."
},
{
"id": "doc_005",
"title": "Router Policies for Agents",
"text": "Routing policies can combine heuristics, learned scorers, confidence estimates, and LLM reasoning. A router may use task keywords, domain tags, or explicit constraints to decide which tools to expose."
},
{
"id": "doc_006",
"title": "Why Restrict Tool Access",
"text": "Restricting tool access helps minimize accidental misuse, improves reasoning focus, reduces latency, and creates a more interpretable planning process. This is especially helpful in multi-tool agent systems."
},
{
"id": "doc_007",
"title": "Dataset Loading for Rapid Analysis",
"text": "A dataset loader tool can let an agent inspect tabular data quickly. It is useful for classification tasks, summary statistics, schema exploration, and downstream code execution."
},
{
"id": "doc_008",
"title": "Python Sandboxes in Agent Systems",
"text": "Many advanced agents rely on code execution sandboxes for calculations, simulation, plotting, and dataframe inspection. Safe code execution typically uses restricted globals and output capture."
},
]
class LocalRetriever:
    def __init__(self, docs: List[Dict[str, str]]):
        self.docs = docs
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.doc_matrix = self.vectorizer.fit_transform([d["text"] for d in docs])

    def search(self, query: str, top_k: int = 3) -> List[Dict[str, Any]]:
        q_vec = self.vectorizer.transform([query])
        sims = cosine_similarity(q_vec, self.doc_matrix)[0]
        idxs = np.argsort(-sims)[:top_k]
        results = []
        for i in idxs:
            results.append({
                "id": self.docs[i]["id"],
                "title": self.docs[i]["title"],
                "text": self.docs[i]["text"],
                "score": float(sims[i]),
            })
        return results

retriever = LocalRetriever(LOCAL_DOCS)
We define structured Pydantic models to represent tool specifications, tool calls, routing decisions, planning outputs, and tool results in a clean MCP-style format. We then create a small local knowledge base that explains concepts like MCP, dynamic capability exposure, context injection, router policies, and sandboxed execution. Finally, we build a TF-IDF-based local retriever that searches these documents and returns the most relevant snippets, along with their similarity scores.
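To see the retrieval mechanic in isolation, here is a minimal, self-contained TF-IDF sketch over a toy three-document corpus. The document texts and the `search` helper below are illustrative stand-ins, not the tutorial's actual corpus or class:

```python
# Minimal sketch (assumes scikit-learn and numpy are installed) of the same
# TF-IDF + cosine-similarity retrieval pattern used by the local retriever.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Context injection enriches the prompt with tool outputs.",
    "Routing policies decide which tools to expose.",
    "Datasets can be loaded for tabular analysis.",
]
vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(docs)

def search(query, top_k=2):
    # Rank documents by cosine similarity against the query vector.
    sims = cosine_similarity(vec.transform([query]), matrix)[0]
    order = np.argsort(-sims)[:top_k]
    return [(int(i), float(sims[i])) for i in order]

hits = search("which tools should a router expose")
print(hits)  # the top hit is the routing-policy document (index 1)
```

Because the vectorizer has no stemming, only exact token overlaps ("tools", "expose") contribute to the score, which is why a small shared vocabulary between queries and documents matters for this kind of lightweight retriever.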
def tool_web_search(query: str, max_results: int = 5) -> Dict[str, Any]:
    results = []
    with DDGS() as ddgs:
        for r in ddgs.text(query, max_results=max_results):
            results.append({
                "title": r.get("title", ""),
                "href": r.get("href", ""),
                "body": r.get("body", ""),
            })
    return {"query": query, "results": results}
def tool_python_exec(code: str) -> Dict[str, Any]:
    allowed_builtins = {
        "abs": abs, "all": all, "any": any, "bool": bool, "dict": dict,
        "enumerate": enumerate, "float": float, "int": int, "len": len,
        "list": list, "max": max, "min": min, "print": print, "range": range,
        "round": round, "set": set, "sorted": sorted, "str": str, "sum": sum,
        "tuple": tuple, "zip": zip,
    }
    local_ns = {}
    global_ns = {
        "__builtins__": allowed_builtins,
        "np": np,
        "pd": pd,
        "math": math,
    }
    stdout_buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(stdout_buffer):
            exec(code, global_ns, local_ns)
        return {
            "stdout": stdout_buffer.getvalue(),
            "locals": {k: repr(v)[:500] for k, v in local_ns.items() if not k.startswith("__")}
        }
    except Exception as e:
        return {
            "stdout": stdout_buffer.getvalue(),
            "error_type": type(e).__name__,
            "error_message": str(e),
            "traceback": traceback.format_exc(limit=2),
        }
def load_builtin_dataset(name: str = "iris", n_rows: int = 10) -> Dict[str, Any]:
    from sklearn import datasets as sk_datasets
    registry = {
        "iris": sk_datasets.load_iris,
        "wine": sk_datasets.load_wine,
        "breast_cancer": sk_datasets.load_breast_cancer,
        "diabetes": sk_datasets.load_diabetes,
    }
    if name not in registry:
        raise ValueError(f"Unsupported dataset '{name}'. Choose from {list(registry.keys())}")
    ds = registry[name]()
    feature_names = list(ds.feature_names)
    df = pd.DataFrame(ds.data, columns=feature_names)
    if hasattr(ds, "target"):
        df["target"] = ds.target
    return {
        "dataset_name": name,
        "shape": list(df.shape),
        "columns": list(df.columns),
        "preview": df.head(n_rows).to_dict(orient="records"),
        "describe": df.describe(include="all").fillna("").to_dict(),
    }

def tool_vector_retrieve(query: str, top_k: int = 3) -> Dict[str, Any]:
    results = retriever.search(query, top_k=top_k)
    return {"query": query, "results": results}
We define the main tools our MCP-style agent can use, including web search, safe Python execution, dataset loading, and local vector retrieval. We keep Python execution controlled by limiting built-in functions and capturing printed output, local variables, and errors. We also ensure that the dataset and retrieval tools return structured outputs so the agent can inspect data or retrieve relevant information before producing a final answer.
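The restricted-execution idea can be reduced to a few lines. The sketch below mirrors the sandbox pattern above with a much smaller builtin whitelist; `ALLOWED` and `safe_exec` are illustrative names, not part of the tutorial's code:

```python
# Minimal sketch of the restricted-exec pattern: a whitelist of builtins
# plus stdout capture. Because __import__ is not in the whitelist, import
# statements fail inside the sandbox.
import io
import contextlib

ALLOWED = {"print": print, "sum": sum, "len": len, "range": range}

def safe_exec(code: str):
    buf = io.StringIO()
    env = {"__builtins__": ALLOWED}
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, env)
        return {"ok": True, "stdout": buf.getvalue()}
    except Exception as e:
        return {"ok": False, "stdout": buf.getvalue(),
                "error": f"{type(e).__name__}: {e}"}

print(safe_exec("print(sum(range(5)))"))  # ok, stdout is "10\n"
print(safe_exec("import os"))             # blocked: __import__ not found
```

Note that this is a guardrail against accidents, not a hardened security boundary; determined code can often escape builtin-whitelist sandboxes, so untrusted input still belongs in a process- or container-level sandbox.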
@dataclass
class MCPTool:
    spec: ToolSpec
    fn: Callable[..., Any]
class MCPToolServer:
    def __init__(self):
        self.tools: Dict[str, MCPTool] = {}

    def register_tool(self, spec: ToolSpec, fn: Callable[..., Any]):
        self.tools[spec.name] = MCPTool(spec=spec, fn=fn)

    def tools_list(self) -> List[Dict[str, Any]]:
        return [
            {
                "name": tool.spec.name,
                "description": tool.spec.description,
                "input_schema": tool.spec.input_schema,
                "tags": tool.spec.tags,
            }
            for tool in self.tools.values()
        ]

    def tools_call(self, tool_name: str, arguments: Dict[str, Any]) -> ToolResult:
        if tool_name not in self.tools:
            return ToolResult(tool_name=tool_name, ok=False, output=None, error="Tool not found")
        try:
            output = self.tools[tool_name].fn(**arguments)
            return ToolResult(tool_name=tool_name, ok=True, output=output)
        except Exception as e:
            return ToolResult(tool_name=tool_name, ok=False, output=None, error=f"{type(e).__name__}: {str(e)}")
server = MCPToolServer()
server.register_tool(
    ToolSpec(
        name="web_search",
        description="Search the public web for recent or general information and return concise results.",
        input_schema={
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer", "default": 5}
            },
            "required": ["query"]
        },
        tags=["web", "search", "recent", "news", "research"]
    ),
    tool_web_search
)
server.register_tool(
    ToolSpec(
        name="python_exec",
        description="Execute Python code for calculations, dataframe inspection, simulations, or transformations.",
        input_schema={
            "type": "object",
            "properties": {
                "code": {"type": "string"}
            },
            "required": ["code"]
        },
        tags=["python", "compute", "analysis", "code", "math"]
    ),
    tool_python_exec
)
server.register_tool(
    ToolSpec(
        name="vector_retrieve",
        description="Retrieve relevant local knowledge snippets from a vectorized tutorial corpus.",
        input_schema={
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "top_k": {"type": "integer", "default": 3}
            },
            "required": ["query"]
        },
        tags=["retrieval", "memory", "knowledge", "vector", "rag"]
    ),
    tool_vector_retrieve
)
server.register_tool(
    ToolSpec(
        name="dataset_loader",
        description="Load a built-in tabular dataset and return schema, preview, and summary statistics.",
        input_schema={
            "type": "object",
            "properties": {
                "name": {"type": "string", "enum": ["iris", "wine", "breast_cancer", "diabetes"]},
                "n_rows": {"type": "integer", "default": 10}
            },
            "required": ["name"]
        },
        tags=["dataset", "tabular", "data", "analysis", "ml"]
    ),
    load_builtin_dataset
)
We create an MCP-style tool server that stores each tool with its schema, description, tags, and callable function. We add methods for listing available tools and calling a specific tool with structured arguments while safely returning success or error outputs. We then register web search, Python execution, vector retrieval, and dataset loading as discoverable tools that the routed agent can use later.
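Stripped of schemas and Pydantic, the registry pattern behind the server is just a dict of names to callables plus a structured result envelope. This toy version (`ToyServer`, the `add` tool, and the plain-dict results are illustrative, not the tutorial's classes) shows the discover/call contract:

```python
# Minimal sketch of the tool-registry pattern: register a named callable,
# list what is available, and invoke by name with structured arguments.
from typing import Any, Callable, Dict

class ToyServer:
    def __init__(self):
        self.tools: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, description: str, fn: Callable[..., Any]):
        self.tools[name] = {"description": description, "fn": fn}

    def tools_list(self):
        return [{"name": n, "description": t["description"]}
                for n, t in self.tools.items()]

    def tools_call(self, name: str, arguments: Dict[str, Any]):
        # Unknown tools and tool exceptions both come back as structured errors.
        if name not in self.tools:
            return {"ok": False, "error": "Tool not found"}
        try:
            return {"ok": True, "output": self.tools[name]["fn"](**arguments)}
        except Exception as e:
            return {"ok": False, "error": f"{type(e).__name__}: {e}"}

srv = ToyServer()
srv.register("add", "Add two numbers.", lambda a, b: a + b)
print(srv.tools_list())
print(srv.tools_call("add", {"a": 2, "b": 3}))  # {'ok': True, 'output': 5}
print(srv.tools_call("missing", {}))            # structured 'Tool not found' error
```

The key design choice, in both the toy and the full server, is that failures are returned as data rather than raised, so the agent loop can inspect errors and keep planning instead of crashing.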
def extract_json_object(text: str) -> Dict[str, Any]:
    text = text.strip()
    try:
        return json.loads(text)
    except Exception:
        match = re.search(r"{.*}", text, flags=re.DOTALL)
        if not match:
            raise ValueError("No JSON object found in model output")
        return json.loads(match.group(0))

def llm_json(instructions: str, user_prompt: str) -> Dict[str, Any]:
    resp = client.responses.create(
        model=MODEL,
        input=user_prompt,
        instructions=instructions,
        temperature=0
    )
    return extract_json_object(resp.output_text)

def pretty_tools_table(tools: List[Dict[str, Any]], title: str):
    table = Table(title=title)
    table.add_column("Tool")
    table.add_column("Tags")
    table.add_column("Description")
    for t in tools:
        table.add_row(t["name"], ", ".join(t.get("tags", [])), t["description"])
    console.print(table)
class HybridMCPRouter:
    def __init__(self, server: MCPToolServer, model: str):
        self.server = server
        self.model = model

    def heuristic_scores(self, task: str) -> Dict[str, float]:
        task_l = task.lower()
        scores = {name: 0.0 for name in self.server.tools.keys()}
        keyword_map = {
            "web_search": ["latest", "recent", "search", "find", "news", "paper", "web", "look up", "internet"],
            "python_exec": ["calculate", "compute", "plot", "simulate", "code", "python", "average", "math"],
            "vector_retrieve": ["mcp", "memory", "retrieve", "context", "router", "knowledge", "protocol"],
            "dataset_loader": ["dataset", "data", "iris", "wine", "breast cancer", "diabetes", "rows", "columns"],
        }
        for tool_name, kws in keyword_map.items():
            for kw in kws:
                if kw in task_l:
                    scores[tool_name] += 1.0
        if "inspect" in task_l or "analyze" in task_l or "summary" in task_l:
            scores["python_exec"] += 0.5
            scores["dataset_loader"] += 0.5
        if "tutorial" in task_l or "mcp" in task_l or "routing" in task_l:
            scores["vector_retrieve"] += 1.0
        return scores

    def shortlist(self, task: str, top_n: int = 3) -> List[Dict[str, Any]]:
        tools = self.server.tools_list()
        scores = self.heuristic_scores(task)
        ranked = sorted(tools, key=lambda x: scores.get(x["name"], 0.0), reverse=True)
        top = [t for t in ranked if scores.get(t["name"], 0.0) > 0][:top_n]
        if not top:
            top = ranked[:top_n]
        return top

    def route(self, task: str) -> RouteDecision:
        all_tools = self.server.tools_list()
        shortlisted = self.shortlist(task, top_n=3)
        instructions = """
You are a routing controller for an MCP-like agent system.
Your job is to decide which tools should be exposed to the downstream agent for this task.
Expose only tools that are relevant.
Return strict JSON only with keys:
selected_tools: array of tool names
rationale: string
policy_notes: array of strings
Rules:
- Prefer minimal exposure.
- Do not expose more than 3 tools.
- Use tool descriptions and tags.
- If recent information is required, include web_search.
- If the task involves local conceptual retrieval, include vector_retrieve.
- If the task requires computation or tabular analysis, include python_exec or dataset_loader as needed.
"""
        prompt = f"""
TASK:
{task}
ALL TOOLS:
{json.dumps(all_tools, indent=2)}
HEURISTIC SHORTLIST:
{json.dumps(shortlisted, indent=2)}
Return JSON only.
"""
        obj = llm_json(instructions, prompt)
        selected_tools = obj.get("selected_tools", [])
        selected_tools = [t for t in selected_tools if t in self.server.tools]
        if not selected_tools:
            selected_tools = [t["name"] for t in shortlisted]
        return RouteDecision(
            selected_tools=selected_tools[:3],
            rationale=obj.get("rationale", ""),
            policy_notes=obj.get("policy_notes", []),
        )
We add helper functions to extract clean JSON from model outputs, call the LLM in a structured way, and display exposed tools in a readable table. We then build a hybrid MCP router that first scores tools using keyword-based heuristics and creates a shortlist of likely relevant tools. Finally, we ask the LLM to make the final routing decision so only the most useful tools are exposed to the downstream agent.
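The heuristic half of the router works without any LLM at all, which makes it easy to test on its own. Here is a self-contained sketch of keyword scoring plus shortlisting; the `KEYWORDS` map and `shortlist` helper are simplified illustrations, not the tutorial's full router:

```python
# Minimal sketch of heuristic tool routing: count keyword hits per tool,
# rank, and keep only tools with at least one hit (falling back to the
# top-ranked tools when nothing matches).
KEYWORDS = {
    "web_search": ["latest", "news", "search the web"],
    "python_exec": ["calculate", "compute", "average"],
    "dataset_loader": ["dataset", "iris", "columns"],
}

def shortlist(task: str, top_n: int = 2):
    t = task.lower()
    scores = {tool: sum(kw in t for kw in kws) for tool, kws in KEYWORDS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    hits = [tool for tool in ranked if scores[tool] > 0][:top_n]
    return hits or ranked[:top_n]

picked = shortlist("Load the iris dataset and compute the average petal length")
print(picked)
```

In the full system, this cheap pre-filter narrows the candidate set before the LLM makes the final selection, so the model never has to reason over the entire tool catalog.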
class RoutedAgent:
    def __init__(self, server: MCPToolServer, router: HybridMCPRouter, model: str):
        self.server = server
        self.router = router
        self.model = model

    def discover_exposed_tools(self, exposed_tool_names: List[str]) -> List[Dict[str, Any]]:
        return [t for t in self.server.tools_list() if t["name"] in exposed_tool_names]

    def plan(self, task: str, exposed_tools: List[Dict[str, Any]]) -> PlanOutput:
        instructions = """
You are a planning agent in an MCP-like architecture.
You can only use the exposed tools.
Decide whether tools are needed.
Return strict JSON only with keys:
requires_tools: boolean
tool_calls: array of objects with tool_name and arguments
direct_answer_allowed: boolean
planner_note: string
Rules:
- Use at most 3 tool calls.
- Only call tools from the exposed list.
- Arguments must match each tool's input schema conceptually.
- Prefer calling vector_retrieve for conceptual local knowledge.
- Prefer calling web_search for recent or external information.
- Prefer dataset_loader if the user asks about a named built-in dataset.
- Prefer python_exec only when computation or code execution is genuinely useful.
- Do not fabricate unavailable tools.
"""
        prompt = f"""
USER TASK:
{task}
EXPOSED TOOLS:
{json.dumps(exposed_tools, indent=2)}
Return JSON only.
"""
        obj = llm_json(instructions, prompt)
        raw_tool_calls = obj.get("tool_calls", [])
        parsed_calls = []
        allowed = {t["name"] for t in exposed_tools}
        for call in raw_tool_calls[:MAX_TOOL_CALLS]:
            name = call.get("tool_name", "")
            args = call.get("arguments", {})
            if name in allowed and isinstance(args, dict):
                parsed_calls.append(ToolCall(tool_name=name, arguments=args))
        return PlanOutput(
            requires_tools=bool(obj.get("requires_tools", False) or parsed_calls),
            tool_calls=parsed_calls,
            direct_answer_allowed=bool(obj.get("direct_answer_allowed", False)),
            planner_note=obj.get("planner_note", ""),
        )
    def run_tools(self, tool_calls: List[ToolCall]) -> List[ToolResult]:
        results = []
        for tc in tool_calls:
            result = self.server.tools_call(tc.tool_name, tc.arguments)
            results.append(result)
        return results

    def reply(self, task: str, route: RouteDecision, exposed_tools: List[Dict[str, Any]], plan: PlanOutput, results: List[ToolResult]) -> str:
        instructions = """
You are the final answering agent in an MCP-style routed tool system.
Use the routed tools and returned tool outputs to answer the user.
Be concrete, concise, and technically correct.
If tool outputs are partial, say so.
Do not mention hidden tools that were not exposed.
"""
        tool_result_payload = [r.model_dump() for r in results]
        prompt = f"""
USER TASK:
{task}
ROUTE DECISION:
{route.model_dump_json(indent=2)}
EXPOSED TOOLS:
{json.dumps(exposed_tools, indent=2)}
PLAN:
{plan.model_dump_json(indent=2)}
TOOL RESULTS:
{json.dumps(tool_result_payload, indent=2)}
Now answer the user clearly.
"""
        resp = client.responses.create(
            model=self.model,
            input=prompt,
            instructions=instructions,
            temperature=0.2
        )
        return resp.output_text
    def run(self, task: str, verbose: bool = True) -> Dict[str, Any]:
        route = self.router.route(task)
        exposed_tools = self.discover_exposed_tools(route.selected_tools)
        plan = self.plan(task, exposed_tools)
        results = self.run_tools(plan.tool_calls) if plan.requires_tools else []
        final_answer = self.reply(task, route, exposed_tools, plan, results)
        payload = {
            "task": task,
            "route_decision": route.model_dump(),
            "exposed_tools": exposed_tools,
            "plan": plan.model_dump(),
            "tool_results": [r.model_dump() for r in results],
            "final_answer": final_answer,
        }
        if verbose:
            console.print(Panel.fit(f"USER TASK\n{task}", title="Input"))
            pretty_tools_table(exposed_tools, "Tools Exposed By MCP Router")
            console.print(Panel(route.rationale or "No rationale provided", title="Router Rationale"))
            if route.policy_notes:
                console.print(Panel("\n".join(f"- {x}" for x in route.policy_notes), title="Policy Notes"))
            console.print(Panel(plan.planner_note or "No planner note provided", title="Planner Note"))
            if results:
                for r in results:
                    console.print(Panel.fit(RichJSON.from_data(r.model_dump()), title=f"Tool Result: {r.tool_name}"))
            console.print(Panel(final_answer, title="Final Answer"))
        return payload
def mcp_jsonrpc_tools_list(server: MCPToolServer) -> Dict[str, Any]:
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "tools": server.tools_list()
        }
    }

def mcp_jsonrpc_tools_call(server: MCPToolServer, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
    result = server.tools_call(tool_name, arguments)
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "result": result.model_dump()
    }
router = HybridMCPRouter(server=server, model=MODEL)
agent = RoutedAgent(server=server, router=router, model=MODEL)
console.print(Panel.fit("MCP-STYLE TOOL DISCOVERY", title="Step 1"))
console.print(RichJSON.from_data(mcp_jsonrpc_tools_list(server)))
demo_tasks = [
    "Explain how an MCP tool router should expose tools for an agent task about dynamic capability exposure.",
    "Search the web for recent examples of MCP-related developments and summarize them.",
    "Load the iris dataset, inspect its columns and basic stats, and tell me what kind of ML problem it is.",
    "Retrieve local knowledge about context injection and router policies, then explain why restricting tool access helps agent performance.",
    "Use Python to compute the average of [3, 5, 9, 10, 13] and then explain whether python execution was actually necessary.",
]
all_runs = []
for idx, task in enumerate(demo_tasks, start=1):
    console.print(Panel.fit(f"DEMO RUN {idx}", title="=" * 10))
    out = agent.run(task, verbose=True)
    all_runs.append(out)
custom_task = "Design a routed MCP workflow for an AI research assistant that should use retrieval for local protocol knowledge and web search only when the task explicitly asks for recent information."
custom_run = agent.run(custom_task, verbose=True)
print("\nPROGRAMMATIC EXAMPLE: tools/list")
print(json.dumps(mcp_jsonrpc_tools_list(server), indent=2))
print("\nPROGRAMMATIC EXAMPLE: tools/call for vector_retrieve")
print(json.dumps(mcp_jsonrpc_tools_call(server, "vector_retrieve", {"query": "dynamic capability exposure in MCP routers", "top_k": 2}), indent=2))
print("\nPROGRAMMATIC EXAMPLE: tools/call for dataset_loader")
print(json.dumps(mcp_jsonrpc_tools_call(server, "dataset_loader", {"name": "iris", "n_rows": 5}), indent=2))
print("\nPROGRAMMATIC EXAMPLE: custom final answer")
print(custom_run["final_answer"])
We build the routed agent that discovers only the exposed tools, asks the planner whether tool calls are needed, runs those tools, and then generates the final answer from the route, plan, and tool outputs. We also add JSON-RPC-style tools/list and tools/call examples to mirror how MCP clients interact with a tool server. Finally, we run several demo tasks to show how the agent handles retrieval, web search, dataset loading, Python execution, and a custom MCP workflow end to end.
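The JSON-RPC envelopes used for tools/list and tools/call can be sketched on their own; the tool payloads below are illustrative stand-ins for what a real server would return, not actual output from this tutorial's server:

```python
# Minimal sketch of the JSON-RPC 2.0 success envelope that MCP-style
# clients exchange with a tool server for tools/list and tools/call.
import json

def rpc_result(rpc_id: int, result: dict) -> dict:
    # Every successful response carries jsonrpc, the request id, and a result.
    return {"jsonrpc": "2.0", "id": rpc_id, "result": result}

# tools/list response: the server advertises name and input schema per tool.
listing = rpc_result(1, {"tools": [{"name": "vector_retrieve",
                                    "input_schema": {"type": "object",
                                                     "required": ["query"]}}]})

# tools/call response: a structured result for a single invocation.
call = rpc_result(2, {"tool_name": "vector_retrieve", "ok": True,
                      "output": {"results": []}})

print(json.dumps(listing, indent=2))
print(json.dumps(call, indent=2))
```

Keeping both discovery and invocation inside the same envelope format is what lets a client drive any conforming server with the same small loop: list, pick, call, read `result`.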
In conclusion, we implemented an end-to-end MCP-style architecture where tool discovery, routing, planning, and execution work together seamlessly to solve diverse tasks. We saw that dynamic capability exposure improves both efficiency and safety by limiting the agent's access to only relevant tools, while structured planning ensures controlled, interpretable reasoning. Through several demonstrations, we observed how the system adapts to different problem types, whether retrieval, computation, or real-time search, by intelligently selecting and using tools. We can also extend this framework with more advanced routing policies, additional memory layers, or specialized tools, providing a strong foundation for building production-grade AI assistants.
The post How to Build an MCP-Style Routed AI Agent System with Dynamic Tool Exposure, Planning, Execution, and Context Injection appeared first on MarkTechPost.
