
Microsoft Releases Agent Lightning: A New AI Framework that Enables Reinforcement Learning (RL)-based Training of LLMs for Any AI Agent

How do you turn real agent traces into reinforcement learning (RL) transitions that improve policy LLMs without changing your existing agent stack? Microsoft's AI team has released Agent Lightning, an open-source framework that makes reinforcement learning work for any AI agent without rewrites and helps optimize multi-agent systems. It separates training from execution, defines a unified trace format, and introduces LightningRL, a hierarchical method that converts complex agent runs into transitions that standard single-turn RL trainers can optimize.

What does Agent Lightning do?

The framework models an agent as a decision process. It formalizes the agent as a partially observable Markov decision process (POMDP) where the observation is the current input to the policy LLM, the action is the model call, and the reward can be terminal or intermediate. From each run it extracts only the calls made by the policy model, together with their inputs, outputs, and rewards. This trims away other framework noise and yields clean transitions for training.
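To make that decision-process view concrete, here is a minimal Python sketch. The `Transition` shape and span fields are illustrative assumptions, not Agent Lightning's actual API: the observation is the input handed to the policy LLM, the action is its output, and only policy-model calls survive extraction.

```python
# Minimal sketch of the POMDP view of an agent run. The names here
# (Transition, extract_transitions, span fields) are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transition:
    observation: str         # the current input to the policy LLM
    action: str              # the model's output for that call
    reward: Optional[float]  # terminal or intermediate reward, if any

def extract_transitions(trace: list[dict]) -> list[Transition]:
    """Keep only policy-model calls; drop tool calls and framework noise."""
    return [
        Transition(span["input"], span["output"], span.get("reward"))
        for span in trace
        if span.get("kind") == "policy_llm_call"
    ]
```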

LightningRL performs credit assignment across multi-step episodes, then optimizes the policy with a single-turn RL objective. The research team describes compatibility with single-turn RL methods. In practice, teams often use trainers that implement PPO or GRPO, such as VeRL, which fits this interface.
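As a rough illustration of that interface, the sketch below spreads an episode-level return uniformly across the policy calls in a run and emits the single-turn (prompt, response, reward) records a PPO or GRPO trainer could consume. Uniform credit is a deliberate simplification for illustration; the paper's method is hierarchical.

```python
# Hypothetical sketch: turn a multi-step episode into single-turn RL
# records. Uniform per-step credit is a simplification, not LightningRL's
# actual credit assignment.
def assign_credit(transitions: list, episode_return: float) -> list[dict]:
    per_step = episode_return / max(len(transitions), 1)
    return [
        {"prompt": t.observation, "response": t.action, "reward": per_step}
        for t in transitions
    ]
```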

https://arxiv.org/pdf/2508.03680v1

System architecture

Agent Lightning uses Training Agent Disaggregation. A Lightning Server runs training and serving, and exposes an OpenAI-like API for the updated model. A Lightning Client runs the agent runtime where it already lives, captures traces of prompts, tool calls, and rewards, and streams them back to the server. This keeps tools, browsers, shells, and other dependencies close to production while GPU training stays in the server tier.
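From the client side, the integration can be as small as repointing an OpenAI-style client at the Lightning Server, which serves the latest policy weights. The URL, model name, and key below are placeholders, not documented values.

```python
# Sketch of the client side of Training Agent Disaggregation: the agent
# keeps its usual OpenAI-style client but targets the Lightning Server.
from openai import OpenAI

client = OpenAI(
    base_url="http://lightning-server:8000/v1",  # placeholder address
    api_key="unused-locally",                    # placeholder key
)

resp = client.chat.completions.create(
    model="policy-llm",  # whichever checkpoint the server currently serves
    messages=[{"role": "user", "content": "Write SQL for the top 5 users."}],
)
print(resp.choices[0].message.content)
```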


The runtime supports two tracing paths. The default path uses OpenTelemetry spans, so you can pipe agent telemetry through standard collectors. There is also a lightweight embedded tracer for teams that do not want to deploy OpenTelemetry. Both paths end up in the same store for training.
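For the default path, instrumentation can look like ordinary OpenTelemetry code: wrap each policy call in a span and attach the prompt and response as attributes. The attribute names below are assumptions for illustration, not Agent Lightning's schema.

```python
# Minimal OpenTelemetry sketch of the default tracing path.
from opentelemetry import trace

tracer = trace.get_tracer("agent-runtime")

def call_policy_model(prompt: str) -> str:
    return "stub response"  # stand-in for your existing model call

def traced_llm_call(prompt: str) -> str:
    with tracer.start_as_current_span("policy_llm_call") as span:
        span.set_attribute("llm.prompt", prompt)      # assumed attribute name
        response = call_policy_model(prompt)
        span.set_attribute("llm.response", response)  # assumed attribute name
        return response
```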


Unified data interface

Agent Lightning records each model call and each tool call as a span with inputs, outputs, and metadata. The algorithm layer adapts spans into ordered triplets of prompt, response, and reward. This selective extraction lets you optimize one agent in a multi-agent workflow, or several agents at once, without touching orchestration code. The same traces can also drive automatic prompt optimization or supervised fine-tuning.
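The sketch below shows what that selective extraction might look like: filter the spans down to one named agent and adapt them into (prompt, response, reward) triplets. The field names are illustrative assumptions.

```python
# Illustrative selective extraction: build training triplets for a single
# agent in a multi-agent workflow, leaving orchestration code untouched.
def triplets_for_agent(spans: list[dict], agent_name: str) -> list[tuple]:
    return [
        (s["input"], s["output"], s.get("reward", 0.0))
        for s in spans
        if s.get("agent") == agent_name and s.get("kind") == "llm_call"
    ]

# usage: train only the SQL writer while the checker stays fixed
# writer_triplets = triplets_for_agent(all_spans, "writer")
```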


Experiments and datasets

The research team reports on three tasks. For text-to-SQL, the team uses the Spider benchmark. Spider contains more than 10,000 questions across 200 databases that span 138 domains. The policy model is Llama 3.2 3B Instruct. The implementation uses LangChain with a writer agent, a rewriter agent, and a checker. The writer and the rewriter are optimized, and the checker is left fixed. Rewards improve steadily during training and at test time.
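The exact Spider reward is not restated here; a common choice for text-to-SQL is execution accuracy, sketched below purely as an assumption: run the predicted and gold queries and reward matching result sets.

```python
# Hypothetical execution-accuracy reward for text-to-SQL, a common choice
# for Spider-style tasks. This is an assumption, not the paper's reward.
import sqlite3

def sql_reward(db_path: str, pred_sql: str, gold_sql: str) -> float:
    conn = sqlite3.connect(db_path)
    try:
        pred = set(conn.execute(pred_sql).fetchall())
        gold = set(conn.execute(gold_sql).fetchall())
        return 1.0 if pred == gold else 0.0
    except sqlite3.Error:
        return 0.0  # malformed predicted SQL earns no reward
    finally:
        conn.close()
```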


For retrieval-augmented generation, the setup uses the MuSiQue benchmark and a Wikipedia-scale index with about 21 million documents. The retriever uses BGE embeddings with cosine similarity. The agent is built with the OpenAI Agents SDK. The reward is a weighted sum of a format score and an F1 correctness score. Reward curves show stable gains during training and evaluation with the same base model.
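A sketch of that weighted-sum reward is below. The weights and the "Answer:" format convention are assumptions; the paper specifies its own.

```python
# Sketch of the RAG reward: weighted format score plus token-level F1.
# The weights and the format check are assumptions, not the paper's.
def f1_score(pred: str, gold: str) -> float:
    p, g = pred.lower().split(), gold.lower().split()
    common = sum(min(p.count(t), g.count(t)) for t in set(p))
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

def rag_reward(output: str, gold: str, w_fmt=0.1, w_f1=0.9) -> float:
    has_format = "Answer:" in output  # assumed format convention
    answer = output.split("Answer:")[-1].strip() if has_format else output
    return w_fmt * float(has_format) + w_f1 * f1_score(answer, gold)
```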


For math question answering with tool use, the agent is implemented with AutoGen and calls a calculator tool. The dataset is Calc-X. The base model is again Llama 3.2 3B Instruct. Training improves the ability to invoke the tool correctly and integrate results into final answers.
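For flavor, here is a toy calculator tool of the kind such an agent calls. Registering it with AutoGen is framework-specific and omitted; the AST-based safe evaluation is just one reasonable implementation, not the paper's.

```python
# Toy calculator tool: safely evaluate basic arithmetic via the AST
# rather than eval(). One reasonable implementation, not the paper's.
import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
        ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def calculator(expression: str) -> float:
    """Evaluate e.g. '3 * (4 + 5)' without executing arbitrary code."""
    def _eval(node: ast.AST) -> float:
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))
```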


Key Takeaways

  1. Agent Lightning uses Training Agent Disaggregation and a unified trace interface, so existing agents in LangChain, OpenAI Agents SDK, AutoGen, or CrewAI connect with near-zero code change.
  2. LightningRL converts trajectories to transitions. It applies credit assignment to multi-step runs, then optimizes the policy with single-turn RL methods such as PPO or GRPO in standard trainers.
  3. Automatic Intermediate Rewarding (AIR) provides dense feedback. AIR turns system signals such as tool return status into intermediate rewards to reduce sparse-reward issues in long workflows (see the sketch after this list).
  4. The evaluation covers text-to-SQL on Spider, RAG on MuSiQue with a Wikipedia-scale index using BGE embeddings and cosine similarity, and math tool use on Calc-X, all with Llama 3.2 3B Instruct as the base model.
  5. The runtime records traces via OpenTelemetry, streams them to the training server, and exposes an OpenAI-compatible endpoint for updated models, enabling scalable rollouts without moving tools.
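As referenced in takeaway 3, here is a hedged sketch of what AIR-style intermediate rewarding could look like: map runtime signals, such as a tool call's return status, onto small per-step rewards. The signal names and reward values are assumptions for illustration.

```python
# Hypothetical AIR-style intermediate reward: convert runtime signals,
# such as a tool call's return status, into dense per-step feedback.
def intermediate_reward(span: dict) -> float:
    if span.get("kind") != "tool_call":
        return 0.0
    return 0.1 if span.get("status") == "ok" else -0.1
```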

Editorial Comments

Agent Lightning is a practical bridge between agent execution and reinforcement learning, not another framework rewrite. It formalizes agent runs as a Markov decision process (MDP), introduces LightningRL for credit assignment, and extracts transitions that slot into single-turn RL trainers. The Training Agent Disaggregation design separates a client that runs the agent from a server that trains and serves an OpenAI-compatible endpoint, so teams keep their existing stacks. Automatic Intermediate Rewarding converts runtime signals into dense feedback, reducing reward sparsity in long workflows. Overall, Agent Lightning is a clean, minimal-integration path to let agents learn from their own traces.


Check out the Paper and Repo.

