How I Built an Intelligent Multi-Agent System with AutoGen, LangChain, and Hugging Face to Demonstrate Practical Agentic AI Workflows

In this tutorial, we dive into the essence of Agentic AI by uniting LangChain, AutoGen, and Hugging Face in a single, fully functional framework that runs without paid APIs. We start by setting up a lightweight open-source pipeline and then progress through structured reasoning, multi-step workflows, and collaborative agent interactions. As we move from LangChain chains to simulated multi-agent systems, we see how reasoning, planning, and execution can seamlessly combine to form autonomous, intelligent behavior, entirely within our own control and environment. Check out the FULL CODES here.
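If you are starting from a clean environment, the short helper below is a minimal sketch of the one-time setup this tutorial assumes; the package names are inferred from the imports that follow (pyautogen is the PyPI distribution that provides the autogen module, and torch is the backend the FLAN-T5 pipeline needs), so adjust or pin versions to match your setup.

import subprocess
import sys

# One-time setup: install the open-source dependencies used in this tutorial.
# Package names are inferred from the imports below; pin versions as needed.
def install_dependencies():
    packages = ["transformers", "torch", "langchain", "langchain-community", "pyautogen"]
    subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])

# install_dependencies()  # uncomment to run the install from Python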
import warnings
warnings.filterwarnings('ignore')

from typing import List, Dict
import autogen
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain_community.llms import HuggingFacePipeline
from transformers import pipeline
import json

print("Loading models...\n")
pipe = pipeline(
    "text2text-generation",
    model="google/flan-t5-base",
    max_length=200,
    temperature=0.7
)
llm = HuggingFacePipeline(pipeline=pipe)
print("✓ Models loaded!\n")
We begin by setting up the environment and bringing in all the required libraries. We initialize a Hugging Face FLAN-T5 pipeline as our local language model, ensuring it can generate coherent, contextually rich text. We confirm that everything loads successfully, laying the groundwork for the agentic experiments that follow. Check out the FULL CODES here.
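Before wiring the model into chains and agents, it can help to sanity-check the raw pipeline with a single prompt. The snippet below is an optional check we add here, using the pipe object created above; the prompt text is only an illustrative example.

# Quick sanity check of the local FLAN-T5 pipeline defined above.
# text2text-generation pipelines return a list of dicts with "generated_text".
test_prompt = "Summarize in one sentence: Agentic AI systems plan, reason, and act."
test_output = pipe(test_prompt, max_length=60)[0]["generated_text"]
print(f"Sanity check output: {test_output}")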
def demo_langchain_basics():
    print("="*70)
    print("DEMO 1: LangChain - Intelligent Prompt Chains")
    print("="*70 + "\n")
    prompt = PromptTemplate(
        input_variables=["task"],
        template="Task: {task}\n\nProvide a detailed step-by-step solution:"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    task = "Create a Python function to calculate fibonacci sequence"
    print(f"Task: {task}\n")
    result = chain.run(task=task)
    print(f"LangChain Response:\n{result}\n")
    print("✓ LangChain demo complete\n")

def demo_langchain_multi_step():
    print("="*70)
    print("DEMO 2: LangChain - Multi-Step Reasoning")
    print("="*70 + "\n")
    planner = PromptTemplate(
        input_variables=["goal"],
        template="Break down this goal into 3 steps: {goal}"
    )
    executor = PromptTemplate(
        input_variables=["step"],
        template="Explain how to execute this step: {step}"
    )
    plan_chain = LLMChain(llm=llm, prompt=planner)
    exec_chain = LLMChain(llm=llm, prompt=executor)
    goal = "Build a machine learning model"
    print(f"Goal: {goal}\n")
    plan = plan_chain.run(goal=goal)
    print(f"Plan:\n{plan}\n")
    print("Executing first step...")
    execution = exec_chain.run(step="Collect and prepare data")
    print(f"Execution:\n{execution}\n")
    print("✓ Multi-step reasoning complete\n")
We explore LangChain’s capabilities by setting up intelligent prompt templates that let our model reason through tasks. We build both a simple one-step chain and a multi-step reasoning flow that breaks complex goals into clear subtasks. We observe how LangChain enables structured thinking and turns plain instructions into detailed, actionable responses. Check out the FULL CODES here.
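A natural extension of this planner/executor pattern is to feed every planned step back through the execution chain rather than only the first one. The sketch below assumes plan_chain and exec_chain are built exactly as in demo_langchain_multi_step (there they are local to the function, so you would hoist them to module scope or rebuild them); the line-based step splitting is a heuristic that depends on how FLAN-T5 formats its plan.

def run_full_plan(goal: str):
    # Generate a plan, then execute each step in turn. If the model returns the
    # plan as a single line, the loop simply runs once over the whole plan text.
    plan_text = plan_chain.run(goal=goal)
    steps = [s.strip() for s in plan_text.split("\n") if s.strip()]
    for i, step in enumerate(steps, 1):
        detail = exec_chain.run(step=step)
        print(f"Step {i}: {step}\n  -> {detail}\n")

# Example usage:
# run_full_plan("Build a machine learning model")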
class SimpleAgent:
    def __init__(self, name: str, role: str, llm_pipeline):
        self.name = name
        self.role = role
        self.pipe = llm_pipeline
        self.memory = []

    def process(self, message: str) -> str:
        prompt = f"You are a {self.role}.\nUser: {message}\nYour response:"
        response = self.pipe(prompt, max_length=150)[0]['generated_text']
        self.memory.append({"user": message, "agent": response})
        return response

    def __repr__(self):
        return f"Agent({self.name}, role={self.role})"

def demo_simple_agents():
    print("="*70)
    print("DEMO 3: Simple Multi-Agent System")
    print("="*70 + "\n")
    researcher = SimpleAgent("Researcher", "research specialist", pipe)
    coder = SimpleAgent("Coder", "Python developer", pipe)
    reviewer = SimpleAgent("Reviewer", "code reviewer", pipe)
    print("Agents created:", researcher, coder, reviewer, "\n")
    task = "Create a function to sort a list"
    print(f"Task: {task}\n")
    print(f"[{researcher.name}] Researching...")
    research = researcher.process(f"What is the best approach to: {task}")
    print(f"Research: {research[:100]}...\n")
    print(f"[{coder.name}] Coding...")
    code = coder.process(f"Write Python code to: {task}")
    print(f"Code: {code[:100]}...\n")
    print(f"[{reviewer.name}] Reviewing...")
    review = reviewer.process(f"Review this approach: {code[:50]}")
    print(f"Review: {review[:100]}...\n")
    print("✓ Multi-agent workflow complete\n")
We design lightweight agents powered by the same Hugging Face pipeline, each assigned a specific role, such as researcher, coder, or reviewer. We let these agents collaborate on a simple coding task, exchanging information and building on one another’s outputs. We see how a coordinated multi-agent workflow can emulate teamwork, creativity, and self-organization in an automated setting. Check out the FULL CODES here.
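Because every SimpleAgent appends each exchange to its memory list, you can inspect what each agent was asked and how it replied after a workflow finishes. Below is a small optional helper we add for that; it assumes you hold references to the agents (in the demo above they are local to demo_simple_agents, so you would return or expose them first).

def print_agent_memories(agents):
    # Walk each agent's memory (populated by SimpleAgent.process) and print a
    # short transcript of what it was asked and how it replied.
    for agent in agents:
        print(f"--- {agent.name} ({agent.role}) ---")
        for turn in agent.memory:
            print(f"  asked:   {turn['user'][:80]}")
            print(f"  replied: {turn['agent'][:80]}")

# Example usage after a demo_simple_agents-style run:
# print_agent_memories([researcher, coder, reviewer])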
def demo_autogen_conceptual():
    print("="*70)
    print("DEMO 4: AutoGen Concepts (Conceptual Demo)")
    print("="*70 + "\n")
    agent_config = {
        "agents": [
            {"name": "UserProxy", "type": "user_proxy", "role": "Coordinates tasks"},
            {"name": "Assistant", "type": "assistant", "role": "Solves problems"},
            {"name": "Executor", "type": "executor", "role": "Runs code"}
        ],
        "workflow": [
            "1. UserProxy receives task",
            "2. Assistant generates solution",
            "3. Executor tests solution",
            "4. Feedback loop until complete"
        ]
    }
    print(json.dumps(agent_config, indent=2))
    print("\nAutoGen Key Features:")
    print("  • Automated agent chat conversations")
    print("  • Code execution capabilities")
    print("  • Human-in-the-loop support")
    print("  • Multi-agent collaboration")
    print("  • Tool/function calling\n")
    print("✓ AutoGen concepts explained\n")

class MockLLM:
    def __init__(self):
        self.responses = {
            "code": "def fibonacci(n):\n    if n <= 1:\n        return n\n    return fibonacci(n-1) + fibonacci(n-2)",
            "explain": "This is a recursive implementation of the Fibonacci sequence.",
            "review": "The code is correct but could be optimized with memoization.",
            "default": "I understand. Let me help with that task."
        }

    def generate(self, prompt: str) -> str:
        prompt_lower = prompt.lower()
        if "code" in prompt_lower or "function" in prompt_lower:
            return self.responses["code"]
        elif "explain" in prompt_lower:
            return self.responses["explain"]
        elif "review" in prompt_lower:
            return self.responses["review"]
        return self.responses["default"]

def demo_autogen_with_mock():
    print("="*70)
    print("DEMO 5: AutoGen with Custom LLM Backend")
    print("="*70 + "\n")
    mock_llm = MockLLM()
    conversation = [
        ("User", "Create a fibonacci function"),
        ("CodeAgent", mock_llm.generate("write code for fibonacci")),
        ("ReviewAgent", mock_llm.generate("review this code")),
    ]
    print("Simulated AutoGen Multi-Agent Conversation:\n")
    for speaker, message in conversation:
        print(f"[{speaker}]")
        print(f"{message}\n")
    print("✓ AutoGen simulation complete\n")
We illustrate AutoGen’s core idea by defining a conceptual configuration of agents and their workflow. We then simulate an AutoGen-style conversation using a custom mock LLM that generates realistic yet controllable responses. We see how this framework lets multiple agents reason, test, and refine ideas collaboratively without relying on any external APIs. Check out the FULL CODES here.
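For readers who want to go beyond the conceptual demo, the sketch below shows roughly what the same two-agent exchange could look like with AutoGen's real AssistantAgent and UserProxyAgent classes. It is not part of the tutorial's runnable code: the llm_config values are placeholders for a backend you actually have (AutoGen is typically pointed at OpenAI-compatible endpoints, so driving it with the local FLAN-T5 model would need a custom model client).

# Hedged sketch of native AutoGen usage (not executed in this tutorial).
# Replace the placeholder model/api_key with a backend available to you.
llm_config = {"config_list": [{"model": "<your-model>", "api_key": "<your-key>"}]}

assistant = autogen.AssistantAgent(name="Assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="UserProxy",
    human_input_mode="NEVER",      # fully automated loop, no human turns
    code_execution_config=False,   # disable local code execution for safety
)

# user_proxy.initiate_chat(assistant, message="Create a fibonacci function")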
def demo_hybrid_system():
    print("="*70)
    print("DEMO 6: Hybrid LangChain + Multi-Agent System")
    print("="*70 + "\n")
    reasoning_prompt = PromptTemplate(
        input_variables=["problem"],
        template="Analyze this problem: {problem}\nWhat are the key steps?"
    )
    reasoning_chain = LLMChain(llm=llm, prompt=reasoning_prompt)
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    problem = "Optimize a slow database query"
    print(f"Problem: {problem}\n")
    print("[LangChain] Analyzing problem...")
    analysis = reasoning_chain.run(problem=problem)
    print(f"Analysis: {analysis[:120]}...\n")
    print(f"[{planner.name}] Creating plan...")
    plan = planner.process(f"Plan how to: {problem}")
    print(f"Plan: {plan[:120]}...\n")
    print(f"[{executor.name}] Executing...")
    result = executor.process("Execute: Add database indexes")
    print(f"Result: {result[:120]}...\n")
    print("✓ Hybrid system complete\n")

if __name__ == "__main__":
    print("="*70)
    print("ADVANCED AGENTIC AI TUTORIAL")
    print("AutoGen + LangChain + HuggingFace")
    print("="*70 + "\n")
    demo_langchain_basics()
    demo_langchain_multi_step()
    demo_simple_agents()
    demo_autogen_conceptual()
    demo_autogen_with_mock()
    demo_hybrid_system()
    print("="*70)
    print("TUTORIAL COMPLETE!")
    print("="*70)
    print("\nWhat You Learned:")
    print("  ✓ LangChain prompt engineering and chains")
    print("  ✓ Multi-step reasoning with LangChain")
    print("  ✓ Building custom multi-agent systems")
    print("  ✓ AutoGen architecture and concepts")
    print("  ✓ Combining LangChain + agents")
    print("  ✓ Using HuggingFace models (no API needed!)")
    print("\nKey Takeaway:")
    print("  You can build powerful agentic AI systems without expensive APIs!")
    print("  Combine LangChain's chains with multi-agent architectures for")
    print("  intelligent, autonomous AI systems.")
    print("="*70 + "\n")
We combine LangChain’s structured reasoning with our simple agentic system to create a hybrid intelligent framework. We let LangChain analyze problems while the agents plan and execute the corresponding actions in sequence. We conclude the demonstration by running all modules together, showcasing how open-source tools can integrate seamlessly to build adaptive, autonomous AI systems.
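To make the hybrid pattern reusable beyond the single hard-coded example in demo_hybrid_system, the analyze, plan, and execute stages can be wrapped into one function. The sketch below is our own variation under the same assumptions: it reuses the llm, pipe, SimpleAgent, PromptTemplate, and LLMChain objects defined earlier, and the function name and truncation lengths are illustrative choices.

def solve_with_hybrid_pipeline(problem: str) -> dict:
    # LangChain handles the structured analysis...
    analysis_chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate(
            input_variables=["problem"],
            template="Analyze this problem: {problem}\nWhat are the key steps?"
        ),
    )
    analysis = analysis_chain.run(problem=problem)

    # ...while lightweight agents plan and execute against that analysis.
    planner = SimpleAgent("Planner", "strategic planner", pipe)
    executor = SimpleAgent("Executor", "task executor", pipe)
    plan = planner.process(f"Plan how to: {problem}\nContext: {analysis[:200]}")
    result = executor.process(f"Execute the first step of this plan: {plan[:200]}")

    return {"analysis": analysis, "plan": plan, "result": result}

# Example usage:
# print(solve_with_hybrid_pipeline("Optimize a slow database query"))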
In conclusion, we see how Agentic AI moves from concept to reality through a simple, modular design. We combine the reasoning depth of LangChain with the cooperative power of agents to build adaptable systems that think, plan, and act independently. The result is a clear demonstration that powerful, autonomous AI systems can be built without expensive infrastructure, by leveraging open-source tools, creative design, and a bit of experimentation.
Check out the FULL CODES here. Feel free to check out our GitHub Page for Tutorials, Codes, and Notebooks.