
Meet LangChain’s DeepAgents Library and a Practical Example to See How DeepAgents Actually Work in Action

While a basic Large Language Model (LLM) agent, one that repeatedly calls external tools, is simple to create, such agents often struggle with long and complex tasks because they lack the ability to plan ahead and manage their work over time. They can be considered “shallow” in their execution.

The deepagents library is designed to overcome this limitation by implementing a general architecture inspired by advanced applications like Deep Research and Claude Code.

This architecture gives agents more depth by combining four key features:

  • A Planning Tool: Allows the agent to strategically break a complex task down into manageable steps before acting.
  • Sub-Agents: Enables the main agent to delegate specialized parts of the task to smaller, focused agents.
  • Access to a File System: Provides persistent memory for saving work-in-progress, notes, and final outputs, allowing the agent to pick up where it left off.
  • A Detailed Prompt: Gives the agent clear instructions, context, and constraints for its long-term objectives.

By providing these foundational components, deepagents makes it easier for developers to build powerful, general-purpose agents that can plan, manage state, and execute complex workflows effectively.

In this article, we’ll walk through a practical example to see how DeepAgents actually work in action.

Core Capabilities of DeepAgents

1. Planning and Task Breakdown: DeepAgents include a built-in write_todos tool that helps agents break large tasks into smaller, manageable steps. They can track their progress and adjust the plan as they learn new information.

2. Context Management: Using file tools like ls, read_file, write_file, and edit_file, agents can store information outside their short-term memory. This prevents context overflow and lets them handle larger or more detailed tasks smoothly.

3. Sub-Agent Creation: The built-in task tool allows an agent to spin up smaller, focused sub-agents. These sub-agents work on specific parts of a problem without cluttering the main agent’s context.

4. Long-Term Memory: With support from LangGraph’s Store, agents can remember information across sessions. This means they can recall past work, continue earlier conversations, and build on previous progress.
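To see how little wiring these capabilities need, here is a minimal sketch (our own addition, not part of the tutorial below) of a bare Deep Agent with no custom tools; the planning, file, and sub-agent tools described above are still available to it automatically. It assumes an OpenAI key is configured as shown later in the Environment Variables section.

# Minimal sketch: even with no custom tools, a Deep Agent still gets write_todos,
# the file tools (ls, read_file, write_file, edit_file), and the task tool built in.
# Assumes OPENAI_API_KEY is already set (see the Environment Variables section below).
from deepagents import create_deep_agent
from langchain.chat_models import init_chat_model

bare_agent = create_deep_agent(
    model=init_chat_model(model="openai:gpt-4o"),
    tools=[],  # no custom tools; the built-ins remain available
    system_prompt="Plan your work with write_todos and save intermediate notes to files.",
)

state = bare_agent.invoke(
    {"messages": [{"role": "user", "content": "Draft a 3-step research plan on AI governance."}]}
)
print(state["messages"][-1].content)  # the agent's final reply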

Setting up dependencies

!pip install deepagents tavily-python langchain-google-genai langchain-openai

Environment Variables

In this tutorial, we’ll use an OpenAI API key to power our Deep Agent. For reference, we’ll also show how you can use a Gemini model instead.

You’re free to pick any model provider you like (OpenAI, Gemini, Anthropic, or others), as DeepAgents works seamlessly with different backends.

import os
from getpass import getpass
os.environ['TAVILY_API_KEY'] = getpass('Enter Tavily API Key: ')
os.environ['OPENAI_API_KEY'] = getpass('Enter OpenAI API Key: ')
os.environ['GOOGLE_API_KEY'] = getpass('Enter Google API Key: ')

Importing the required libraries

import os
from typing import Literal
from tavily import TavilyClient
from deepagents import create_deep_agent


tavily_client = TavilyClient()

Tools

Just like regular tool-using agents, a Deep Agent can also be equipped with a set of tools to help it perform tasks.

In this example, we’ll give our agent access to a Tavily Search tool, which it can use to gather real-time information from the web.

from typing import Literal
from langchain.chat_models import init_chat_model
from deepagents import create_deep_agent


def internet_search(
    query: str,
    max_results: int = 5,
    topic: Literal["general", "news", "finance"] = "general",
    include_raw_content: bool = False,
):
    """Run a web search with the Tavily client."""
    search_docs = tavily_client.search(
        query,
        max_results=max_results,
        include_raw_content=include_raw_content,
        topic=topic,
    )
    return search_docs
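Before wiring the tool into the agent, you can optionally call it directly as a quick sanity check. This is our own snippet, assuming TAVILY_API_KEY is set and that the response follows Tavily’s documented shape (a dict with a "results" list of entries containing "title", "url", and "content").

# Optional sanity check: call the search tool directly before giving it to the agent.
docs = internet_search("EU AI Act latest updates", max_results=2, topic="news")
for item in docs.get("results", []):
    print(item["title"], "->", item["url"])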

Sub-Agents

Subagents are one of the most powerful features of Deep Agents. They allow the main agent to delegate specific parts of a complex task to smaller, specialized agents, each with its own focus, tools, and instructions. This keeps the main agent’s context clean and organized while still allowing deep, focused work on individual subtasks.

In our example, we defined two subagents:

  • policy-research-agent: a specialized researcher that conducts in-depth analysis of AI policies, regulations, and ethical frameworks worldwide. It uses the internet_search tool to gather real-time information and produces a well-structured, professional report.
  • policy-critique-agent: an editorial agent responsible for reviewing the generated report for accuracy, completeness, and tone. It ensures that the research is balanced, factual, and aligned with regional legal frameworks.

Together, these subagents let the main Deep Agent carry out research, analysis, and quality review in a structured, modular workflow.

sub_research_prompt = """
You are a specialized AI policy researcher.
Conduct in-depth research on government policies, global regulations, and ethical frameworks related to artificial intelligence.

Your answer should:
- Provide key updates and trends
- Include relevant sources and laws (e.g., EU AI Act, U.S. Executive Orders)
- Compare global approaches when relevant
- Be written in clear, professional language

Only your FINAL message will be passed back to the main agent.
"""

research_sub_agent = {
    "identify": "policy-research-agent",
    "description": "Used to analysis particular AI coverage and regulation questions in depth.",
    "system_prompt": sub_research_prompt,
    "instruments": [internet_search],
}


sub_critique_prompt = """
You are a policy editor reviewing a report on AI governance.
Check the report at `final_report.md` and the question at `query.txt`.

Focus on:
- Accuracy and completeness of legal information
- Proper citation of policy documents
- Balanced analysis of regional differences
- Clarity and neutrality of tone

Provide constructive feedback, but do NOT modify the report directly.
"""

critique_sub_agent = {
    "identify": "policy-critique-agent",
    "description": "Critiques AI coverage analysis stories for completeness, readability, and accuracy.",
    "system_prompt": sub_critique_prompt,
}

System Prompt

Deep Agents include a built-in system prompt that serves as their core set of instructions. This prompt is inspired by the system prompt used in Claude Code and is designed to be more general-purpose, providing guidance on how to use built-in tools like planning, file system operations, and subagent coordination.

However, while the default system prompt makes Deep Agents capable out of the box, it’s highly recommended to define a custom system prompt tailored to your specific use case. Prompt design plays a critical role in shaping the agent’s reasoning, structure, and overall performance.

In our example, we defined a custom prompt called policy_research_instructions, which turns the agent into an expert AI policy researcher. It lays out a step-by-step workflow: saving the question, using the research subagent for analysis, writing the report, and optionally invoking the critique subagent for review. It also enforces best practices such as Markdown formatting, citation style, and professional tone so the final report meets high-quality policy standards.

policy_research_instructions = """
You are an expert AI policy researcher and analyst.
Your job is to research questions related to global AI regulation, ethics, and governance frameworks.

1️⃣ Save the user's question to `query.txt`
2️⃣ Use the `policy-research-agent` to perform in-depth research
3️⃣ Write a detailed report to `final_report.md`
4️⃣ Optionally, ask the `policy-critique-agent` to critique your draft
5️⃣ Revise if necessary, then output the final, comprehensive report

When writing the final report:
- Use Markdown with clear sections (## for each)
- Include citations in [Title](URL) format
- Add a ### Sources section at the end
- Write in a professional, neutral tone suitable for policy briefings
"""

Main Agent

Here we define our main Deep Agent using the create_deep_agent() function. We initialize the model with OpenAI’s gpt-4o, but as shown in the commented-out line, you can easily switch to Google’s Gemini 2.5 Flash model if you prefer. The agent is configured with the internet_search tool, our custom policy_research_instructions system prompt, and two subagents: one for in-depth research and another for critique.

By default, DeepAgents uses Claude Sonnet 4.5 as its model if none is explicitly specified, but the library gives you full flexibility to plug in OpenAI, Gemini, Anthropic, or other LLMs supported by LangChain.

model = init_chat_model(model="openai:gpt-4o")
# model = init_chat_model(model="google_genai:gemini-2.5-flash")
agent = create_deep_agent(
    model=model,
    tools=[internet_search],
    system_prompt=policy_research_instructions,
    subagents=[research_sub_agent, critique_sub_agent],
)
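As the paragraph above notes, Anthropic models can be plugged in with the same provider-prefixed pattern. The sketch below is our own; it assumes langchain-anthropic is installed and ANTHROPIC_API_KEY is set, and the model identifier is illustrative rather than an exact name.

# Sketch: swapping in an Anthropic model via the same init_chat_model pattern.
# Assumes `pip install langchain-anthropic` and ANTHROPIC_API_KEY in the environment;
# the model name below is illustrative, check Anthropic's docs for current names.
anthropic_model = init_chat_model(model="anthropic:claude-sonnet-4-5")
anthropic_agent = create_deep_agent(
    model=anthropic_model,
    tools=[internet_search],
    system_prompt=policy_research_instructions,
    subagents=[research_sub_agent, critique_sub_agent],
)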

Invoking the Agent

question = "What are the most recent updates on the EU AI Act and its world affect?"
consequence = agent.invoke({"messages": [{"role": "user", "content": query}]})
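Once the run finishes, you can pull the agent’s output from the returned state. The snippet below is a sketch assuming the default deepagents setup, in which the conversation lives under a messages key and the built-in file tools keep their virtual file system under a files key in the state.

# Print the agent's final chat message.
print(result["messages"][-1].content)

# Assuming the default virtual file system, the report written to final_report.md
# in step 3 of the workflow should be available in the returned state.
files = result.get("files", {})
if "final_report.md" in files:
    print(files["final_report.md"])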
