How to Combine Google Search, Google Maps, and Custom Functions in a Single Gemini API Call With Context Circulation, Parallel Tool IDs, and Multi-Step Agentic Chains

In this tutorial, we explore the newest Gemini API tooling updates Google announced in March 2026, specifically the ability to combine built-in tools like Google Search and Google Maps with custom function calls in a single API request. We walk through five hands-on demos that progressively build on one another, starting with the core tool-combination feature and ending with a full multi-tool agentic chain. Along the way, we show how context circulation preserves every tool call and response across turns, enabling the model to reason over prior outputs; how unique tool response IDs let us map parallel function calls to their exact results; and how Grounding with Google Maps brings real-time location data into our applications. We use gemini-3-flash-preview for tool-combination features and gemini-2.5-flash for Maps grounding, so everything we build here runs without any billing setup.

import subprocess, sys


subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "-qU", "google-genai"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)


import getpass, json, textwrap, os, time
from google import genai
from google.genai import types


if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Gemini API key: ")


client = genai.Client(api_key=os.environ["GOOGLE_API_KEY"])


TOOL_COMBO_MODEL = "gemini-3-flash-preview"
MAPS_MODEL       = "gemini-2.5-flash"


DIVIDER = "=" * 72


def heading(title: str):
    print(f"\n{DIVIDER}")
    print(f"  {title}")
    print(DIVIDER)


def wrap(text: str, width: int = 80):
    for line in text.splitlines():
        print(textwrap.fill(line, width=width) if line.strip() else "")

def describe_parts(response):
    parts = response.candidates[0].content.parts
    fc_ids = {}
    for i, part in enumerate(parts):
        prefix = f"   Part {i:2d}:"
        if hasattr(part, "tool_call") and part.tool_call:
            tc = part.tool_call
            print(f"{prefix} [toolCall]        type={tc.tool_type}  id={tc.id}")
        if hasattr(part, "tool_response") and part.tool_response:
            tr = part.tool_response
            print(f"{prefix} [toolResponse]    type={tr.tool_type}  id={tr.id}")
        if hasattr(part, "executable_code") and part.executable_code:
            code = part.executable_code.code[:90].replace("\n", " ↵ ")
            print(f"{prefix} [executableCode]  {code}...")
        if hasattr(part, "code_execution_result") and part.code_execution_result:
            out = (part.code_execution_result.output or "")[:90]
            print(f"{prefix} [codeExecResult]  {out}")
        if hasattr(part, "function_call") and part.function_call:
            fc = part.function_call
            fc_ids[fc.name] = fc.id
            print(f"{prefix} [functionCall]    name={fc.name}  id={fc.id}")
            print(f"              └─ args: {dict(fc.args)}")
        if hasattr(part, "text") and part.text:
            snippet = part.text[:110].replace("\n", " ")
            print(f"{prefix} [text]            {snippet}...")
        if hasattr(part, "thought_signature") and part.thought_signature:
            print(f"              └─ thought_signature present ✓")
    return fc_ids




heading("DEMO 1: Combine Google Search + Custom Function in One Request")


print("""
This demo shows the flagship new feature: passing BOTH a built-in tool
(Google Search) and a custom function declaration in a single API call.


Gemini will:
 Turn 1 → Search the web for real-time info, then request our custom
          function to get weather data.
 Turn 2 → We supply the function response; Gemini synthesizes everything.


Key points:
 • google_search and function_declarations go in the SAME Tool object
 • include_server_side_tool_invocations must be True (on ToolConfig)
 • Return ALL parts (incl. thought_signatures) in subsequent turns
""")


get_weather_func = types.FunctionDeclaration(
    name="getWeather",
    description="Gets the current weather for a requested city.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "city": types.Schema(
                type="STRING",
                description="The city and state, e.g. Utqiagvik, Alaska",
            ),
        },
        required=["city"],
    ),
)


print("▶  Turn 1: Sending prompt with Google Search + getWeather tools...\n")


response_1 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "What is the northernmost city in the United States? "
        "What's the weather like there today?"
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                function_declarations=[get_weather_func],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)


print("   Parts returned by the model:\n")
fc_ids = describe_parts(response_1)


function_call_id = fc_ids.get("getWeather")
print(f"\n   ✅ Captured function_call id for getWeather: {function_call_id}")


print("\n▶  Turn 2: Returning function result & requesting final synthesis...\n")


history = [
    types.Content(
        role="user",
        parts=[
            types.Part(
                text=(
                    "What is the northernmost city in the United States? "
                    "What's the weather like there today?"
                )
            )
        ],
    ),
    response_1.candidates[0].content,
    types.Content(
        role="user",
        parts=[
            types.Part(
                function_response=types.FunctionResponse(
                    name="getWeather",
                    response={"response": "Very cold. 22°F / -5.5°C with strong Arctic winds."},
                    id=function_call_id,
                )
            )
        ],
    ),
]


response_2 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=history,
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                function_declarations=[get_weather_func],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)


print("   ✅ Final synthesized response:\n")
for part in response_2.candidates[0].content.parts:
    if hasattr(part, "text") and part.text:
        wrap(part.text)

We install the Google GenAI SDK, securely capture our API key, and define the helper functions that power the rest of the tutorial. We then demonstrate the flagship tool-combination feature by sending a single request that pairs Google Search with a custom getWeather function, letting Gemini search the web for real-time geographic data and simultaneously request weather information from our custom tool. We complete the two-turn flow by returning our simulated weather response with the matching function call ID and watching Gemini synthesize both data sources into one coherent answer.
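In a real application, the hard-coded weather payload would come from an actual handler. One pattern that scales to many functions is a dispatch table keyed by function name; the sketch below is a minimal, hypothetical version (get_weather and its payload are illustrative stand-ins, not part of the tutorial's code):

```python
# Sketch: route each function_call the model returns to a local handler.
# The handler and its payload are hypothetical stand-ins for real APIs.

def get_weather(args: dict) -> dict:
    # In production this would call a real weather service.
    return {"response": f"Weather for {args['city']}: 22°F, Arctic winds."}

HANDLERS = {"getWeather": get_weather}

def execute_call(name: str, args: dict) -> dict:
    handler = HANDLERS.get(name)
    if handler is None:
        return {"error": f"unknown function: {name}"}
    return handler(args)

print(execute_call("getWeather", {"city": "Utqiagvik, Alaska"}))
```

Each result from execute_call can then be wrapped in a types.FunctionResponse carrying the matching call id, exactly as the two-turn flow above does by hand.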

heading("DEMO 2: Tool Response IDs for Parallel Function Calls")


print("""
When Gemini makes multiple function calls in one turn, each gets a unique
`id` field. You MUST return each function_response with its matching id
so the model maps results correctly. This is critical for parallel calls.
""")


time.sleep(2)


lookup_inventory = types.FunctionDeclaration(
    name="lookupInventory",
    description="Check product inventory by SKU.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "sku": types.Schema(type="STRING", description="Product SKU code"),
        },
        required=["sku"],
    ),
)


get_shipping_estimate = types.FunctionDeclaration(
    name="getShippingEstimate",
    description="Get shipping time estimate for a destination zip code.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "zip_code": types.Schema(type="STRING", description="Destination ZIP code"),
            "sku": types.Schema(type="STRING", description="Product SKU"),
        },
        required=["zip_code", "sku"],
    ),
)


print("▶  Turn 1: Asking about product availability + shipping...\n")


resp_parallel = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "I want to buy SKU-A100 (wireless headphones). "
        "Is it in stock, and how fast can it ship to ZIP 90210?"
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                function_declarations=[lookup_inventory, get_shipping_estimate],
            ),
        ],
    ),
)


fc_parts = []
for part in resp_parallel.candidates[0].content.parts:
    if hasattr(part, "function_call") and part.function_call:
        fc = part.function_call
        fc_parts.append(fc)
        print(f"   [functionCall] name={fc.name}  id={fc.id}  args={dict(fc.args)}")


print("\n▶  Turn 2: Returning results with matching IDs...\n")


simulated_results = {
    "lookupInventory": {"in_stock": True, "quantity": 342, "warehouse": "Los Angeles"},
    "getShippingEstimate": {"days": 2, "carrier": "FedEx", "cost": "$5.99"},
}


fn_response_parts = []
for fc in fc_parts:
    result = simulated_results.get(fc.name, {"error": "unknown function"})
    fn_response_parts.append(
        types.Part(
            function_response=types.FunctionResponse(
                name=fc.name,
                response=result,
                id=fc.id,
            )
        )
    )
    print(f"   Responding to {fc.name} (id={fc.id}) → {result}")


history_parallel = [
    types.Content(
        role="user",
        parts=[
            types.Part(
                text=(
                    "I want to buy SKU-A100 (wireless headphones). "
                    "Is it in stock, and how fast can it ship to ZIP 90210?"
                )
            )
        ],
    ),
    resp_parallel.candidates[0].content,
    types.Content(role="user", parts=fn_response_parts),
]


resp_parallel_2 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=history_parallel,
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                function_declarations=[lookup_inventory, get_shipping_estimate],
            ),
        ],
    ),
)


print("\n   ✅ Final answer:\n")
for part in resp_parallel_2.candidates[0].content.parts:
    if hasattr(part, "text") and part.text:
        wrap(part.text)

We declare two custom functions, lookupInventory and getShippingEstimate, and send a prompt that naturally triggers both in a single turn. We observe that Gemini assigns each function call a unique ID, which we carefully match when constructing our simulated responses for inventory availability and shipping speed. We then pass the complete history back to the model and receive a final answer that seamlessly combines both results into a customer-ready response.
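The id bookkeeping can be factored into a reusable helper. The sketch below isolates just the matching logic; FakeCall is a hypothetical stand-in for the SDK's FunctionCall object, and with the real SDK you would emit types.Part(function_response=...) objects rather than plain dicts:

```python
from dataclasses import dataclass, field

@dataclass
class FakeCall:
    """Hypothetical stand-in for the SDK's FunctionCall (name, id, args)."""
    name: str
    id: str
    args: dict = field(default_factory=dict)

def match_responses(calls, results_by_name):
    """Pair each call with its result, carrying each call's id through."""
    paired = []
    for call in calls:
        result = results_by_name.get(call.name, {"error": "unknown function"})
        paired.append({"name": call.name, "id": call.id, "response": result})
    return paired

calls = [FakeCall("lookupInventory", "fc-1"), FakeCall("getShippingEstimate", "fc-2")]
paired = match_responses(calls, {
    "lookupInventory": {"in_stock": True},
    "getShippingEstimate": {"days": 2},
})
print(paired)
```

Iterating over the calls (rather than over your own result dict) guarantees every call the model made gets an answer, even when it requests the same function twice with different arguments.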

heading("DEMO 3: Grounding with Google Maps — Location-Aware Responses")


print("""
Grounding with Google Maps connects Gemini to real-time Maps data:
places, ratings, hours, reviews, and directions. Pass lat/lng for
hyper-local results. Available on Gemini 2.5 Flash / 2.0 Flash (free).
""")


time.sleep(2)


print("▶  3a: Finding restaurants near a specific location...\n")


maps_response = client.models.generate_content(
    model=MAPS_MODEL,
    contents="What are the best Italian restaurants within a 15-minute walk from here?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps())],
        tool_config=types.ToolConfig(
            retrieval_config=types.RetrievalConfig(
                lat_lng=types.LatLng(latitude=34.050481, longitude=-118.248526),
            )
        ),
    ),
)


print("   Generated Response:\n")
wrap(maps_response.text)


if grounding := maps_response.candidates[0].grounding_metadata:
    if grounding.grounding_chunks:
        print(f"\n   {'─' * 50}")
        print("   📍 Google Maps Sources:\n")
        for chunk in grounding.grounding_chunks:
            if hasattr(chunk, "maps") and chunk.maps:
                print(f"   • {chunk.maps.title}")
                print(f"     {chunk.maps.uri}\n")


time.sleep(2)
print(f"\n{'─' * 72}")
print("▶  3b: Asking detailed questions about a specific place...\n")


place_response = client.models.generate_content(
    model=MAPS_MODEL,
    contents="Is there a cafe near the corner of 1st and Main that has outdoor seating?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps())],
        tool_config=types.ToolConfig(
            retrieval_config=types.RetrievalConfig(
                lat_lng=types.LatLng(latitude=34.050481, longitude=-118.248526),
            )
        ),
    ),
)


print("   Generated Response:\n")
wrap(place_response.text)


if grounding := place_response.candidates[0].grounding_metadata:
    if grounding.grounding_chunks:
        print(f"\n   📍 Sources:")
        for chunk in grounding.grounding_chunks:
            if hasattr(chunk, "maps") and chunk.maps:
                print(f"   • {chunk.maps.title} → {chunk.maps.uri}")


time.sleep(2)
print(f"\n{'─' * 72}")
print("▶  3c: Trip planning with the Maps widget token...\n")


trip_response = client.models.generate_content(
    model=MAPS_MODEL,
    contents=(
        "Plan a day in San Francisco for me. I want to see the "
        "Golden Gate Bridge, visit a museum, and have a good dinner."
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps(enable_widget=True))],
        tool_config=types.ToolConfig(
            retrieval_config=types.RetrievalConfig(
                lat_lng=types.LatLng(latitude=37.78193, longitude=-122.40476),
            )
        ),
    ),
)


print("   Generated Itinerary:\n")
wrap(trip_response.text)


if grounding := trip_response.candidates[0].grounding_metadata:
    if grounding.grounding_chunks:
        print(f"\n   📍 Sources:")
        for chunk in grounding.grounding_chunks:
            if hasattr(chunk, "maps") and chunk.maps:
                print(f"   • {chunk.maps.title} → {chunk.maps.uri}")


    widget_token = getattr(grounding, "google_maps_widget_context_token", None)
    if widget_token:
        print(f"\n   🗺  Widget context token received ({len(widget_token)} chars)")
        print(f"   Embed in your frontend with:")
        print(f'   <gmp-place-contextual context-token="{widget_token[:60]}...">')
        print(f'   </gmp-place-contextual>')
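On the frontend, that token is what the `<gmp-place-contextual>` element consumes. A minimal sketch of templating it into the HTML your server would return (the helper name and the surrounding page scaffolding are hypothetical; only the custom element and its context-token attribute come from the docs snippet above):

```python
def widget_html(context_token: str) -> str:
    # Hypothetical helper: wrap the Maps widget context token in the
    # <gmp-place-contextual> custom element served to the browser.
    return (
        f'<gmp-place-contextual context-token="{context_token}">'
        "</gmp-place-contextual>"
    )

print(widget_html("demo-token-123"))
```

In a real page you would also load the Google Maps JavaScript API, which registers the custom element.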

We switch to gemini-2.5-flash and enable Grounding with Google Maps to run three location-aware sub-demos back-to-back. We query for nearby Italian restaurants using downtown Los Angeles coordinates, ask a detailed question about outdoor seating at a specific intersection, and generate a full-day San Francisco itinerary complete with grounding sources and a widget context token. We print every Maps source title and URI returned in the grounding metadata, showing how easy it is to build citation-rich, location-aware applications.
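The source-printing loop repeats in each sub-demo, so it is worth factoring out. This sketch keeps only the extraction logic; the _NS class is a hypothetical attribute-bag stand-in for the SDK's grounding chunk objects, used so the example runs without a live response:

```python
class _NS:
    """Tiny attribute bag: hypothetical stand-in for SDK grounding chunks."""
    def __init__(self, **kw):
        self.__dict__.update(kw)

def maps_sources(grounding_chunks):
    """Collect (title, uri) pairs from Maps chunks, skipping non-Maps ones."""
    sources = []
    for chunk in grounding_chunks:
        maps = getattr(chunk, "maps", None)
        if maps:
            sources.append((maps.title, maps.uri))
    return sources

chunks = [
    _NS(maps=_NS(title="Cafe Roma", uri="https://maps.example/1")),
    _NS(maps=None),  # e.g. a chunk with no Maps payload
]
print(maps_sources(chunks))
```

With a real response you would pass `response.candidates[0].grounding_metadata.grounding_chunks` straight in.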

heading("DEMO 4: Full Agentic Workflow — Search + Custom Function")


print("""
This combines Google Search grounding with a custom booking function,
all in one request. Context circulation lets the model use Search results
to decide which function to call and with what arguments.


Scenario: "Find a trending restaurant in Austin and book a table."
""")


time.sleep(2)


book_restaurant = types.FunctionDeclaration(
    name="bookRestaurant",
    description="Book a table at a restaurant.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "restaurant_name": types.Schema(
                type="STRING", description="Name of the restaurant"
            ),
            "party_size": types.Schema(
                type="INTEGER", description="Number of guests"
            ),
            "date": types.Schema(
                type="STRING", description="Reservation date (YYYY-MM-DD)"
            ),
            "time": types.Schema(
                type="STRING", description="Reservation time (HH:MM)"
            ),
        },
        required=["restaurant_name", "party_size", "date", "time"],
    ),
)


print("▶  Turn 1: Complex multi-tool prompt...\n")


agent_response_1 = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "I'm staying at the Driskill Hotel in Austin, TX. "
        "Find me a highly-rated BBQ restaurant nearby that's open tonight, "
        "and book a table for 4 people at 7:30 PM today."
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                function_declarations=[book_restaurant],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)


print("   Returned parts:\n")
fc_ids = describe_parts(agent_response_1)
booking_call_id = fc_ids.get("bookRestaurant")


if booking_call_id:
    print(f"\n▶  Turn 2: Simulating booking confirmation...\n")


    history_agent = [
        types.Content(
            role="user",
            parts=[
                types.Part(
                    text=(
                        "I'm staying at the Driskill Hotel in Austin, TX. "
                        "Find me a highly-rated BBQ restaurant nearby that's "
                        "open tonight, and book a table for 4 people at 7:30 PM today."
                    )
                )
            ],
        ),
        agent_response_1.candidates[0].content,
        types.Content(
            role="user",
            parts=[
                types.Part(
                    function_response=types.FunctionResponse(
                        name="bookRestaurant",
                        response={
                            "status": "confirmed",
                            "confirmation_number": "BBQ-2026-4821",
                            "message": "Table for 4 confirmed at 7:30 PM tonight.",
                        },
                        id=booking_call_id,
                    )
                )
            ],
        ),
    ]


    agent_response_2 = client.models.generate_content(
        model=TOOL_COMBO_MODEL,
        contents=history_agent,
        config=types.GenerateContentConfig(
            tools=[
                types.Tool(
                    google_search=types.GoogleSearch(),
                    function_declarations=[book_restaurant],
                ),
            ],
            tool_config=types.ToolConfig(
                include_server_side_tool_invocations=True,
            ),
        ),
    )


    print("   ✅ Final agent response:\n")
    for part in agent_response_2.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)
else:
    print("\n   ℹ  Model didn't request bookRestaurant — showing text response:\n")
    for part in agent_response_1.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)

We combine Google Search with a custom bookRestaurant function to simulate a realistic end-to-end agent scenario set in Austin, Texas. We send a single prompt to Gemini, asking it to find a highly rated BBQ restaurant near the Driskill Hotel and book a table for four. We inspect the returned parts to see how the model first searches the web and then calls our booking function with the details it discovers. We close the loop by supplying a simulated confirmation response and letting Gemini deliver the final reservation summary to the user.

heading("DEMO 5: Context Circulation — Code Execution + Search + Function")


print("""
Context circulation preserves EVERY tool call and response in the model's
context, so later steps can reference earlier results.  Here we combine:
 • Google Search (look up data)
 • Code Execution (compute something with it)
 • Custom function (save the result)


The model chains these tools autonomously using context from each step.
""")
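It helps to know roughly what the code-execution step should produce before the model runs it. Assuming a national debt figure of $36.2 trillion (an illustrative number only; the live Search step supplies the real one), the arithmetic reduces to:

```python
# Hedged sketch: the debt figure below is an assumption for illustration;
# in the demo, the live Google Search step supplies the real number.
debt = 36_200_000_000_000      # assumed total US national debt, USD
population = 335_000_000       # population stated in the prompt

per_capita = debt / population
print(f"${per_capita:,.2f} per person")
```

Gemini's sandboxed Python runs the same kind of one-liner, and the result then flows into the saveAnalysisResult call.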


time.sleep(2)


save_result = types.FunctionDeclaration(
    name="saveAnalysisResult",
    description="Save a computed analysis result to the database.",
    parameters=types.Schema(
        type="OBJECT",
        properties={
            "title": types.Schema(type="STRING", description="Title of the analysis"),
            "summary": types.Schema(type="STRING", description="Summary of findings"),
            "value": types.Schema(type="NUMBER", description="Key numeric result"),
        },
        required=["title", "summary", "value"],
    ),
)


print("▶  Turn 1: Research + compute + save (3-tool chain)...\n")


circ_response = client.models.generate_content(
    model=TOOL_COMBO_MODEL,
    contents=(
        "Search for the current US national debt figure, then use code execution "
        "to calculate the per-capita debt assuming a population of 335 million. "
        "Finally, save the result using the saveAnalysisResult function."
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(
                google_search=types.GoogleSearch(),
                code_execution=types.ToolCodeExecution(),
                function_declarations=[save_result],
            ),
        ],
        tool_config=types.ToolConfig(
            include_server_side_tool_invocations=True,
        ),
    ),
)


print("   Parts returned (full context circulation chain):\n")
fc_ids = describe_parts(circ_response)
save_call_id = fc_ids.get("saveAnalysisResult")


if save_call_id:
    print(f"\n▶  Turn 2: Confirming the save operation...\n")


    history_circ = [
        types.Content(
            role="user",
            parts=[
                types.Part(
                    text=(
                        "Search for the current US national debt figure, then use code "
                        "execution to calculate the per-capita debt assuming a population "
                        "of 335 million. Finally, save the result using the "
                        "saveAnalysisResult function."
                    )
                )
            ],
        ),
        circ_response.candidates[0].content,
        types.Content(
            role="user",
            parts=[
                types.Part(
                    function_response=types.FunctionResponse(
                        name="saveAnalysisResult",
                        response={"status": "saved", "record_id": "analysis-001"},
                        id=save_call_id,
                    )
                )
            ],
        ),
    ]


    circ_response_2 = client.models.generate_content(
        model=TOOL_COMBO_MODEL,
        contents=history_circ,
        config=types.GenerateContentConfig(
            tools=[
                types.Tool(
                    google_search=types.GoogleSearch(),
                    code_execution=types.ToolCodeExecution(),
                    function_declarations=[save_result],
                ),
            ],
            tool_config=types.ToolConfig(
                include_server_side_tool_invocations=True,
            ),
        ),
    )


    print("   ✅ Final response:\n")
    for part in circ_response_2.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)
else:
    print("\n   ℹ  Model completed without requesting saveAnalysisResult.")
    for part in circ_response.candidates[0].content.parts:
        if hasattr(part, "text") and part.text:
            wrap(part.text)




heading("✅ ALL DEMOS COMPLETE")
print("""
  Summary of what you've seen:


  1. Tool Combination   — Google Search + custom functions in one call
  2. Tool Response IDs  — Unique IDs for parallel function call mapping
  3. Maps Grounding     — Location-aware queries with real Maps data
  4. Agentic Workflow   — Search + booking function with context circulation
  5. Context Circulation — Search + Code Execution + custom function chain


  Key API patterns:
  ┌──────────────────────────────────────────────────────────────────┐
  │  tools=[types.Tool(                                             │
  │      google_search=types.GoogleSearch(),                        │
  │      code_execution=types.ToolCodeExecution(),                  │
  │      function_declarations=[my_func],                           │
  │  )]                                                             │
  │                                                                 │
  │  tool_config=types.ToolConfig(                                  │
  │      include_server_side_tool_invocations=True,                 │
  │  )                                                              │
  │                                                                 │
  └──────────────────────────────────────────────────────────────────┘


  Models:
  • Tool combination:  gemini-3-flash-preview (Gemini 3 only)
  • Maps grounding:    gemini-2.5-flash / gemini-2.5-pro / gemini-2.0-flash
  • Both features use the FREE tier with rate limits.


  Docs:
    https://ai.google.dev/gemini-api/docs/tool-combination
    https://ai.google.dev/gemini-api/docs/maps-grounding
""")

We push context circulation to its fullest by chaining three tools (Google Search, Code Execution, and a custom saveAnalysisResult function) in a single request that researches the US national debt, computes the per-capita figure, and saves the output. We inspect the full chain of returned parts (toolCall, toolResponse, executableCode, codeExecResult, and functionCall) to see exactly how context flows from one tool to the next within a single generation. We wrap up by confirming the save operation and printing a summary of every key API pattern we have covered across all five demos.

In conclusion, we now have a practical understanding of the key patterns that power agentic workflows in the Gemini API. We see that the include_server_side_tool_invocations flag on ToolConfig is the single change that unlocks tool combination and context circulation; that returning all parts, including thought_signature fields, verbatim in our conversation history is non-negotiable for multi-turn flows; and that matching every function_response.id to its corresponding function_call.id is what keeps parallel execution reliable. We also see how Maps grounding opens up an entire class of location-aware applications with just a few lines of configuration. From here, we encourage extending these patterns by combining URL Context or File Search with custom functions, wiring real backend APIs in place of our simulated responses, or building conversational agents that chain dozens of tools across many turns.
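The history-building pattern every two-turn demo follows (append the model's content verbatim, then a user turn carrying the function responses) can be written once. The sketch below uses plain dicts in place of types.Content / types.Part so it stands alone without the SDK:

```python
# Sketch of the history-extension pattern, using plain dicts as
# hypothetical stand-ins for types.Content / types.Part.

def extend_history(history, model_turn, fn_responses):
    history = list(history)                        # don't mutate the caller's list
    history.append(model_turn)                     # model content, returned verbatim
    history.append({"role": "user", "parts": fn_responses})
    return history

h0 = [{"role": "user", "parts": [{"text": "book a table"}]}]
h1 = extend_history(
    h0,
    {"role": "model", "parts": [{"function_call": {"name": "bookRestaurant", "id": "fc-9"}}]},
    [{"function_response": {"name": "bookRestaurant", "id": "fc-9", "response": {"status": "confirmed"}}}],
)
print(len(h1))
```

Appending the model turn untouched is what preserves thought_signature parts and server-side tool invocations, which the tutorial's multi-turn flows depend on.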


The post How to Combine Google Search, Google Maps, and Custom Functions in a Single Gemini API Call With Context Circulation, Parallel Tool IDs, and Multi-Step Agentic Chains appeared first on MarkTechPost.
