
How to Build Production Ready AgentScope Workflows with ReAct Agents, Custom Tools, Multi-Agent Debate, Structured Output and Concurrent Pipelines


In this tutorial, we build a complete AgentScope workflow from the ground up and run every part in Colab. We begin by wiring up OpenAI through AgentScope and validating a basic model call to understand how messages and responses are handled. From there, we define custom tool functions, register them in a toolkit, and inspect the auto-generated schemas to see how tools are exposed to the agent. We then move on to a ReAct-based agent that dynamically decides when to call tools, followed by a multi-agent debate setup using MsgHub to simulate structured interaction between agents. Finally, we implement structured outputs with Pydantic and execute a concurrent multi-agent pipeline in which several specialists analyze a problem in parallel and a synthesiser combines their insights.

import subprocess, sys


subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "agentscope", "openai", "pydantic", "nest_asyncio",
])


print("✅  All packages installed.\n")


import nest_asyncio
nest_asyncio.apply()


import asyncio
import json
import getpass
import math
import datetime
from typing import Any


from pydantic import BaseModel, Field


from agentscope.agent import ReActAgent
from agentscope.formatter import OpenAIChatFormatter, OpenAIMultiAgentFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg, TextBlock, ToolUseBlock
from agentscope.model import OpenAIChatModel
from agentscope.pipeline import MsgHub, sequential_pipeline
from agentscope.tool import Toolkit, ToolResponse


OPENAI_API_KEY = getpass.getpass("🔑  Enter your OpenAI API key: ")
MODEL_NAME = "gpt-4o-mini"


print(f"\n✅  API key captured. Using model: {MODEL_NAME}\n")
print("=" * 72)


def make_model(stream: bool = False) -> OpenAIChatModel:
   return OpenAIChatModel(
       model_name=MODEL_NAME,
       api_key=OPENAI_API_KEY,
       stream=stream,
       generate_kwargs={"temperature": 0.7, "max_tokens": 1024},
   )


print("\n" + "═" * 72)
print("  PART 1: Basic Model Call")
print("═" * 72)


async def part1_basic_model_call():
    model = make_model()
    response = await model(
        messages=[{"role": "user", "content": "What is AgentScope in one sentence?"}],
    )
    text = response.content[0]["text"]
    print(f"\n🤖  Model says: {text}")
    print(f"📊  Tokens used: {response.usage}")


asyncio.run(part1_basic_model_call())

We install all required dependencies and patch the event loop so that asynchronous code runs smoothly in Colab. We securely capture the OpenAI API key and configure the model through a helper function for reuse. We then run a basic model call to verify the setup and inspect the response and token usage.
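Why the `nest_asyncio.apply()` patch matters can be shown in a tiny standalone sketch (an illustration only, assuming the `nest_asyncio` package is installed): notebook environments already own a running event loop, and without the patch a nested `asyncio.run()` raises `RuntimeError`.

```python
import asyncio
import nest_asyncio

nest_asyncio.apply()


async def inner():
    return "nested result"


async def outer():
    # Inside a running loop, a second asyncio.run() would normally raise
    # RuntimeError; nest_asyncio makes the loop re-entrant so it succeeds.
    return asyncio.run(inner())


print(asyncio.run(outer()))
```

This is exactly the situation the tutorial's top-level `asyncio.run(...)` calls hit inside Colab.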

print("\n" + "═" * 72)
print("  PART 2: Custom Tool Functions & Toolkit")
print("═" * 72)


async def calculate_expression(expression: str) -> ToolResponse:
    """Safely evaluate a math expression using a whitelist of allowed names."""
    allowed = {
        "abs": abs, "round": round, "min": min, "max": max,
        "sum": sum, "pow": pow, "int": int, "float": float,
        "sqrt": math.sqrt, "pi": math.pi, "e": math.e,
        "log": math.log, "sin": math.sin, "cos": math.cos,
        "tan": math.tan, "factorial": math.factorial,
    }
    try:
        result = eval(expression, {"__builtins__": {}}, allowed)
        return ToolResponse(content=[TextBlock(type="text", text=str(result))])
    except Exception as exc:
        return ToolResponse(content=[TextBlock(type="text", text=f"Error: {exc}")])


async def get_current_datetime(timezone_offset: int = 0) -> ToolResponse:
    """Return the current date and time at the given UTC offset in hours."""
    now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=timezone_offset)))
    return ToolResponse(
        content=[TextBlock(type="text", text=now.strftime("%Y-%m-%d %H:%M:%S %Z"))],
    )


toolkit = Toolkit()
toolkit.register_tool_function(calculate_expression)
toolkit.register_tool_function(get_current_datetime)


schemas = toolkit.get_json_schemas()
print("\n📋  Auto-generated tool schemas:")
print(json.dumps(schemas, indent=2))


async def part2_test_tool():
    result_gen = await toolkit.call_tool_function(
        ToolUseBlock(
            type="tool_use", id="test-1",
            name="calculate_expression",
            input={"expression": "factorial(10)"},
        ),
    )
    async for resp in result_gen:
        print(f"\n🔧  Tool result for factorial(10): {resp.content[0]['text']}")


asyncio.run(part2_test_tool())

We define custom tool functions for mathematical evaluation and datetime retrieval using controlled execution. We register these tools in a toolkit and inspect their auto-generated JSON schemas to understand how AgentScope exposes them. We then simulate a direct tool call to validate that the tool execution pipeline works correctly.
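The controlled-execution idea behind `calculate_expression` can be illustrated on its own (a simplified stdlib sketch; `safe_eval` is a hypothetical helper name, not part of AgentScope):

```python
import math

# Whitelist of names the expression may use; everything else is unavailable
# because eval() runs with an empty __builtins__ mapping.
allowed = {"sqrt": math.sqrt, "factorial": math.factorial, "pi": math.pi}


def safe_eval(expression: str) -> str:
    try:
        return str(eval(expression, {"__builtins__": {}}, allowed))
    except Exception as exc:
        return f"Error: {exc}"


print(safe_eval("factorial(5)"))      # 120
print(safe_eval("__import__('os')"))  # blocked: __import__ is not defined
```

Because `__builtins__` is empty, even `__import__` resolves to nothing, which is what lets the tool accept model-generated expressions with reasonable safety.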

print("\n" + "═" * 72)
print("  PART 3: ReAct Agent with Tools")
print("═" * 72)


async def part3_react_agent():
    agent = ReActAgent(
        name="MathBot",
        sys_prompt=(
            "You are MathBot, a helpful assistant that solves math problems. "
            "Use the calculate_expression tool for any computation. "
            "Use get_current_datetime when asked about the time."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIChatFormatter(),
        toolkit=toolkit,
        max_iters=5,
    )


    queries = [
        "What's the current time in UTC+5?",
    ]
    for q in queries:
        print(f"\n👤  User: {q}")
        msg = Msg("user", q, "user")
        response = await agent(msg)
        print(f"🤖  MathBot: {response.get_text_content()}")
        await agent.memory.clear()


asyncio.run(part3_react_agent())
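Conceptually, the loop a ReAct agent runs can be sketched without any LLM at all (an illustration only; `react_loop` and `toy_policy` are hypothetical names, not AgentScope APIs):

```python
def react_loop(query, policy, tools, max_iters=5):
    """Reason -> act -> observe, up to max_iters times, mirroring ReActAgent."""
    observations = []
    for _ in range(max_iters):
        action = policy(query, observations)                # "reason"
        if action["type"] == "final":
            return action["answer"]
        result = tools[action["name"]](**action["input"])   # "act" (tool call)
        observations.append(result)                         # "observe"
    return "Max iterations reached"


# Toy policy: call the tool once, then answer from the observation.
def toy_policy(query, observations):
    if not observations:
        return {"type": "tool", "name": "add", "input": {"a": 2, "b": 3}}
    return {"type": "final", "answer": f"The result is {observations[-1]}"}


print(react_loop("What is 2 + 3?", toy_policy, {"add": lambda a, b: a + b}))
# The result is 5
```

In the real agent the policy is the LLM, the tool registry is the `Toolkit`, and `max_iters=5` bounds the loop the same way.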


We build a ReAct agent that reasons about when to use tools and dynamically executes them. We pass in user queries and observe how the agent combines reasoning with tool use to produce answers. We also reset memory between queries to ensure independent, clean interactions.

print("\n" + "═" * 72)
print("  PART 4: Multi-Agent Debate (MsgHub)")
print("═" * 72)


DEBATE_TOPIC = (
    "Should artificial general intelligence (AGI) research be open-sourced, "
    "or should it remain behind closed doors at major labs?"
)

async def part4_debate():
    proponent = ReActAgent(
        name="Proponent",
        sys_prompt=(
            f"You are the Proponent in a debate. You argue IN FAVOR of open-sourcing AGI research. "
            f"Topic: {DEBATE_TOPIC}\n"
            "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )


    opponent = ReActAgent(
        name="Opponent",
        sys_prompt=(
            f"You are the Opponent in a debate. You argue AGAINST open-sourcing AGI research. "
            f"Topic: {DEBATE_TOPIC}\n"
            "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )


    num_rounds = 2
    for rnd in range(1, num_rounds + 1):
        print(f"\n{'─' * 60}")
        print(f"  ROUND {rnd}")
        print(f"{'─' * 60}")


        async with MsgHub(
            participants=[proponent, opponent],
            announcement=Msg("Moderator", f"Round {rnd} — begin. Topic: {DEBATE_TOPIC}", "assistant"),
        ):
            pro_msg = await proponent(
                Msg("Moderator", "Proponent, please present your argument.", "user"),
            )
            print(f"\n✅  Proponent:\n{pro_msg.get_text_content()}")


            opp_msg = await opponent(
                Msg("Moderator", "Opponent, please respond and present your counter-argument.", "user"),
            )
            print(f"\n❌  Opponent:\n{opp_msg.get_text_content()}")


    print(f"\n{'─' * 60}")
    print("  DEBATE COMPLETE")
    print(f"{'─' * 60}")


asyncio.run(part4_debate())
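Under the hood, MsgHub broadcasts each participant's reply into every other participant's memory so that all agents share context. A toy stdlib sketch of that broadcast idea (`TinyHub` is an illustrative stand-in, not the real implementation):

```python
class TinyHub:
    """Broadcast every message from one participant to all the others."""

    def __init__(self, participants):
        self.participants = participants

    def broadcast(self, sender, text):
        for p in self.participants:
            if p["name"] != sender:
                p["memory"].append((sender, text))


alice = {"name": "Alice", "memory": []}
bob = {"name": "Bob", "memory": []}

hub = TinyHub([alice, bob])
hub.broadcast("Alice", "Opening argument")

print(bob["memory"])    # [('Alice', 'Opening argument')]
print(alice["memory"])  # [] — the sender does not re-receive its own message
```

This is why each debater can address the other's points without the moderator manually copying messages between them.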


We create two agents with opposing roles and connect them through MsgHub for a structured multi-agent debate. We run several rounds in which each agent responds to the other while maintaining context through shared communication. We observe how agent coordination enables a coherent exchange of arguments across turns.

print("\n" + "═" * 72)
print("  PART 5: Structured Output with Pydantic")
print("═" * 72)


class MovieReview(BaseModel):
    title: str = Field(description="The movie title.")
    year: int = Field(description="The release year.")
    genre: str = Field(description="Primary genre of the movie.")
    rating: float = Field(description="Rating from 0.0 to 10.0.")
    pros: list[str] = Field(description="List of 2-3 strengths of the movie.")
    cons: list[str] = Field(description="List of 1-2 weaknesses of the movie.")
    verdict: str = Field(description="A one-sentence final verdict.")

async def part5_structured_output():
    agent = ReActAgent(
        name="Critic",
        sys_prompt="You are a professional movie critic. When asked to review a movie, provide a thorough analysis.",
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIChatFormatter(),
    )


    msg = Msg("user", "Review the movie 'Inception' (2010) by Christopher Nolan.", "user")
    response = await agent(msg, structured_model=MovieReview)


    print("\n🎬  Structured Movie Review:")
    print(f"    Title   : {response.metadata.get('title', 'N/A')}")
    print(f"    Year    : {response.metadata.get('year', 'N/A')}")
    print(f"    Genre   : {response.metadata.get('genre', 'N/A')}")
    print(f"    Rating  : {response.metadata.get('rating', 'N/A')}/10")
    pros = response.metadata.get('pros', [])
    cons = response.metadata.get('cons', [])
    if pros:
        print(f"    Pros    : {', '.join(str(p) for p in pros)}")
    if cons:
        print(f"    Cons    : {', '.join(str(c) for c in cons)}")
    print(f"    Verdict : {response.metadata.get('verdict', 'N/A')}")


    print(f"\n📝  Full text response:\n{response.get_text_content()}")


asyncio.run(part5_structured_output())
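Structured output leans on Pydantic's validation: the model's JSON is parsed against the schema, so malformed fields fail loudly instead of silently. A standalone sketch (assuming Pydantic v2's `model_validate`; `MiniReview` is an illustrative cut-down schema):

```python
from pydantic import BaseModel, Field, ValidationError


class MiniReview(BaseModel):
    rating: float = Field(ge=0.0, le=10.0)  # constrained range, like 0-10 scoring
    verdict: str


# Well-formed data passes and is coerced to the declared types.
good = MiniReview.model_validate({"rating": 8.8, "verdict": "A modern classic."})
print(good.rating)  # 8.8

# Malformed data raises ValidationError rather than producing a bad object.
try:
    MiniReview.model_validate({"rating": "excellent", "verdict": "?"})
except ValidationError:
    print("rejected non-numeric rating")
```

This is the guarantee that makes `response.metadata` safe to consume downstream: every field either matches the schema or the parse fails.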


print("\n" + "═" * 72)
print("  PART 6: Concurrent Multi-Agent Pipeline")
print("═" * 72)


async def part6_concurrent_agents():
    specialists = {
        "Economist": "You are an economist. Analyze the given topic from an economic perspective in 2-3 sentences.",
        "Ethicist": "You are an ethicist. Analyze the given topic from an ethical perspective in 2-3 sentences.",
        "Technologist": "You are a technologist. Analyze the given topic from a technology perspective in 2-3 sentences.",
    }


    agents = []
    for name, prompt in specialists.items():
        agents.append(
            ReActAgent(
                name=name,
                sys_prompt=prompt,
                model=make_model(),
                memory=InMemoryMemory(),
                formatter=OpenAIChatFormatter(),
            )
        )


    topic_msg = Msg(
        "user",
        "Analyze the impact of large language models on the global workforce.",
        "user",
    )


    print("\n⏳  Running 3 specialist agents concurrently...")
    results = await asyncio.gather(*(agent(topic_msg) for agent in agents))


    for agent, result in zip(agents, results):
        print(f"\n🧠  {agent.name}:\n{result.get_text_content()}")


    synthesiser = ReActAgent(
        name="Synthesiser",
        sys_prompt=(
            "You are a synthesiser. You receive analyses from an Economist, "
            "an Ethicist, and a Technologist. Combine their perspectives into "
            "a single coherent summary of 3-4 sentences."
        ),
        model=make_model(),
        memory=InMemoryMemory(),
        formatter=OpenAIMultiAgentFormatter(),
    )


    combined_text = "\n\n".join(
        f"[{agent.name}]: {r.get_text_content()}" for agent, r in zip(agents, results)
    )
    synthesis = await synthesiser(
        Msg("user", f"Here are the specialist analyses:\n\n{combined_text}\n\nPlease synthesise.", "user"),
    )
    print(f"\n🔗  Synthesised Summary:\n{synthesis.get_text_content()}")


asyncio.run(part6_concurrent_agents())
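The fan-out/fan-in shape of Part 6 is ordinary `asyncio.gather` usage; a minimal stdlib sketch (toy `specialist` coroutines stand in for the agents):

```python
import asyncio


async def specialist(name, delay):
    # Simulate independent work of different durations.
    await asyncio.sleep(delay)
    return f"{name}: analysis done"


async def main():
    # Fan out: all three coroutines run concurrently.
    # gather() returns results in argument order, regardless of finish order.
    results = await asyncio.gather(
        specialist("Economist", 0.03),
        specialist("Ethicist", 0.01),
        specialist("Technologist", 0.02),
    )
    # Fan in: combine the results, like the synthesiser step.
    return " | ".join(results)


print(asyncio.run(main()))
```

Total wall time is roughly the slowest coroutine (0.03s here), not the sum, which is exactly the speedup the concurrent pipeline buys over running the specialists sequentially.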


We implement structured output using a Pydantic schema to extract consistent fields from model responses. We then build a concurrent multi-agent pipeline in which several specialist agents analyze a topic in parallel. Finally, we aggregate their outputs with a synthesiser agent to produce a unified, coherent summary.

print("\n" + "═" * 72)
print("  🎉  TUTORIAL COMPLETE!")
print("  You have covered:")
print("    1. Basic model calls with OpenAIChatModel")
print("    2. Custom tool functions & auto-generated JSON schemas")
print("    3. ReAct agent with tool use")
print("    4. Multi-agent debate with MsgHub")
print("    5. Structured output with Pydantic models")
print("    6. Concurrent multi-agent pipelines")
print("═" * 72)

In conclusion, we have implemented a full-stack agentic system that goes beyond simple prompting into orchestrated reasoning, tool use, and collaboration. We now understand how AgentScope manages memory, formatting, and tool execution under the hood, and how ReAct agents bridge reasoning with action. We also saw how multi-agent systems can be coordinated both sequentially and concurrently, and how structured outputs ensure reliability in downstream applications. With these building blocks, we are ready to design more advanced agent architectures, extend tool ecosystems, and deploy scalable, production-ready AI systems.


Check out the Full Notebook here.

The submit How to Build Production Ready AgentScope Workflows with ReAct Agents, Custom Tools, Multi-Agent Debate, Structured Output and Concurrent Pipelines appeared first on MarkTechPost.
