Google DeepMind’s Research Lets an LLM Rewrite Its Own Game Theory Algorithms — And It Outperformed the Experts
Designing algorithms for Multi-Agent Reinforcement Learning (MARL) in imperfect-information games (settings where players act sequentially and cannot see each other's private information, as in poker) has historically relied on manual iteration. Researchers devise weighting schemes, discounting rules, and equilibrium solvers through intuition and trial and error. Google DeepMind researchers propose AlphaEvolve, an LLM-powered evolutionary coding agent that replaces that manual process with automated search.
The research team applies this framework to two established paradigms: Counterfactual Regret Minimization (CFR) and Policy Space Response Oracles (PSRO). In both cases, the system discovers new algorithm variants that perform competitively with or better than existing hand-designed state-of-the-art baselines. All experiments were run using the OpenSpiel framework.
Background: CFR and PSRO
CFR is an iterative algorithm that decomposes regret minimization across information sets. At each iteration it accumulates "counterfactual regret" (how much a player would have gained by playing differently) and derives a new policy proportional to positive accumulated regret. Over many iterations, the time-averaged strategy converges to a Nash Equilibrium (NE). Variants like DCFR (Discounted CFR) and PCFR+ (Predictive CFR+) improve convergence by applying specific discounting or predictive update rules, all developed through manual design.
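For concreteness, here is a minimal sketch of the policy-derivation step that regret-matching-style CFR variants share; the only input assumed is a per-action array of cumulative regrets at a single information set:

```python
import numpy as np

def regret_matching(cumulative_regret: np.ndarray) -> np.ndarray:
    # The policy is proportional to positive cumulative regret at this
    # information set; if no action has positive regret, fall back to uniform.
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.full(len(cumulative_regret), 1.0 / len(cumulative_regret))
```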
PSRO operates at a higher level of abstraction. It maintains a population of policies for each player, builds a payoff tensor (the meta-game) by computing expected utilities for every combination of population policies, and then uses a meta-strategy solver to produce a probability distribution over the population. Best responses are trained against that distribution and added to the population iteratively. The meta-strategy solver, i.e. how the population distribution is computed, is the central design choice the paper targets for automated discovery. All experiments use an exact best response oracle (computed via value iteration) and exact payoff values for all meta-game entries, removing Monte Carlo sampling noise from the results.
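A skeletal PSRO loop makes the moving parts concrete. This is a sketch, not the paper's implementation: the callables are hypothetical stand-ins for the exact value-iteration best-response oracle and exact meta-game evaluation described above.

```python
from typing import Callable, List

def psro_loop(num_players: int,
              initial_policy: Callable[[int], object],
              build_payoff_tensor: Callable[[List[list]], object],
              meta_strategy_solver: Callable[[object], list],
              best_response: Callable[[int, List[list], list], object],
              num_iterations: int):
    """Skeletal PSRO loop; all callables are caller-supplied placeholders."""
    populations = [[initial_policy(p)] for p in range(num_players)]
    meta_strategies = None
    for _ in range(num_iterations):
        # Meta-game: expected utilities for every combination of population policies.
        payoff_tensor = build_payoff_tensor(populations)
        # Meta-strategy solver: a probability distribution over each population.
        meta_strategies = meta_strategy_solver(payoff_tensor)
        for p in range(num_players):
            # Best response to the opponents' mixture, appended to the population.
            populations[p].append(best_response(p, populations, meta_strategies))
    return populations, meta_strategies
```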
The AlphaEvolve Framework
AlphaEvolve is a distributed evolutionary system that uses LLMs to mutate source code rather than numeric parameters. The process: a population is initialized with a standard implementation (CFR+ as the seed for the CFR experiments; Uniform as the seed for both PSRO solver classes). At each generation, a parent algorithm is selected based on fitness; its source code is passed to an LLM (Gemini 2.5 Pro) with a prompt to modify it; the resulting candidate is evaluated on proxy games; valid candidates are added to the population. AlphaEvolve supports multi-objective optimization: if multiple fitness metrics are defined, one is randomly selected per generation to guide parent sampling.
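A rough sketch of one generation follows. The selection scheme and helper signatures are illustrative assumptions; fitness should be read as negative exploitability after K iterations on the proxy games (see below).

```python
import random
from typing import Callable, List

def run_generation(population: List[dict],
                   mutate: Callable[[str], str],
                   evaluate: Callable[[str], dict],
                   fitness_metrics: List[Callable[[dict], float]]) -> None:
    """One generation of the evolutionary loop (selection scheme simplified)."""
    # Multi-objective support: sample one fitness metric to guide parent selection.
    metric = random.choice(fitness_metrics)
    # Fitness-based parent selection; the paper's exact sampling rule may differ.
    parent = max(population, key=metric)
    # The LLM (Gemini 2.5 Pro in the paper) mutates source code, not parameters.
    candidate = evaluate(mutate(parent["source"]))
    # Only candidates that run and evaluate successfully join the population.
    if candidate is not None:
        population.append(candidate)
```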
The fitness signal is negative exploitability after K iterations, evaluated on a fixed set of training games: 3-player Kuhn Poker, 2-player Leduc Poker, 4-card Goofspiel, and 5-sided Liars Dice. Final evaluation is done on a separate test set of larger, unseen games.
For CFR, the evolvable search space consists of three Python classes: RegretAccumulator, PolicyFromRegretAccumulator, and PolicyAccumulator. These govern regret accumulation, current-policy derivation, and average-policy accumulation respectively. The interface is expressive enough to represent all known CFR variants as special cases. For PSRO, the evolvable components are TrainMetaStrategySolver and EvalMetaStrategySolver, the meta-strategy solvers used during oracle training and during exploitability evaluation.
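The three class names are from the paper; the method names and signatures below are illustrative assumptions about what such an interface could look like. Known variants slot in as special cases: for example, CFR+ clamps cumulative regret at zero, and DCFR discounts it before adding.

```python
import numpy as np

class RegretAccumulator:
    # How instantaneous counterfactual regret is folded into cumulative regret.
    def update(self, cumulative: np.ndarray, instantaneous: np.ndarray,
               iteration: int) -> np.ndarray:
        raise NotImplementedError

class PolicyFromRegretAccumulator:
    # How the current policy is derived from cumulative regret
    # (plain regret matching, predictive variants, etc.).
    def policy(self, cumulative: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class PolicyAccumulator:
    # How the running average policy is accumulated across iterations
    # (uniform, linear, or polynomial weighting).
    def update(self, average: np.ndarray, current: np.ndarray,
               iteration: int) -> np.ndarray:
        raise NotImplementedError
```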
Discovered Algorithm 1: VAD-CFR
The evolved CFR variant is Volatility-Adaptive Discounted CFR (VAD-CFR). In place of the linear averaging and static discounting used in the CFR family, the search produced three distinct mechanisms (sketched in code after the list):
- Volatility-adaptive discounting. Instead of the fixed discount factors α and β applied to cumulative regrets (as in DCFR), VAD-CFR tracks the volatility of the learning process using an Exponentially Weighted Moving Average (EWMA) of the instantaneous regret magnitude. When volatility is high, discounting increases so the algorithm forgets unstable history faster; when volatility drops, it retains more history. The EWMA decay factor is 0.1, with base α = 1.5 and base β = −0.1.
- Asymmetric instantaneous boosting. Positive instantaneous regrets are multiplied by a factor of 1.1 before being added to cumulative regrets. This asymmetry is applied to the instantaneous update, not the accumulated history, making the algorithm more reactive to currently good actions.
- Hard warm-start with regret-magnitude weighting. Policy averaging is withheld entirely until iteration 500, while regret accumulation continues as normal. Once averaging begins, policies are weighted by a combination of temporal weight and instantaneous regret magnitude, prioritizing high-information iterations when constructing the average strategy. The 500-iteration threshold was generated by the LLM without knowledge of the 1000-iteration evaluation horizon.
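Here is a rough reading of the regret-accumulation step in code. The constants (EWMA decay 0.1, base α = 1.5, base β = −0.1, boost 1.1, warm start 500) are the reported values; the exact volatility-to-discount mapping is not given in the article, so the form below is an assumption layered on DCFR-style discounting.

```python
import numpy as np

# Reported constants; the surrounding structure is an assumed reading.
EWMA_DECAY = 0.1    # decay factor for the volatility EWMA
BASE_ALPHA = 1.5    # base discount exponent for positive cumulative regret
BASE_BETA = -0.1    # base discount exponent for negative cumulative regret
BOOST = 1.1         # asymmetric multiplier on positive instantaneous regret
WARM_START = 500    # policy averaging only begins after this iteration

def vad_cfr_regret_update(cum_regret: np.ndarray, inst_regret: np.ndarray,
                          volatility: float, t: int):
    """One regret-accumulation step at an information set (t >= 1)."""
    # Track learning volatility via an EWMA of instantaneous regret magnitude.
    volatility = ((1 - EWMA_DECAY) * volatility
                  + EWMA_DECAY * float(np.abs(inst_regret).mean()))
    # Higher volatility -> stronger discounting of history. This mapping is
    # an assumption; the article does not give the exact formula.
    alpha = BASE_ALPHA / (1.0 + volatility)
    beta = BASE_BETA * (1.0 + volatility)
    # DCFR-style discounting: positive and negative regrets decay differently.
    pos_discount = t**alpha / (t**alpha + 1)
    neg_discount = t**beta / (t**beta + 1)
    discounted = np.where(cum_regret > 0, cum_regret * pos_discount,
                          cum_regret * neg_discount)
    # Asymmetric boosting applied to the instantaneous update, not the history.
    boosted = np.where(inst_regret > 0, BOOST * inst_regret, inst_regret)
    return discounted + boosted, volatility
```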
VAD-CFR is benchmarked against standard CFR, CFR+, Linear CFR (LCFR), DCFR, PCFR+, DPCFR+, and HS-PCFR+(30) across 1000 iterations (K = 1000). Exploitability is computed exactly. On the full 11-game evaluation, VAD-CFR matches or surpasses state-of-the-art performance in 10 of the 11 games, with 4-player Kuhn Poker as the sole exception.
Also discovered: AOD-CFR. An earlier run on a different training set (2-player Kuhn Poker, 2-player Leduc Poker, 4-card Goofspiel, 4-sided Liars Dice) produced a second variant, Asymmetric Optimistic Discounted CFR (AOD-CFR). It uses a linear schedule for discounting cumulative regrets (α transitions from 1.0 to 2.5 over 500 iterations, β from 0.5 to 0.0), sign-dependent scaling of instantaneous regret, trend-based policy optimism via an Exponential Moving Average of cumulative regrets, and polynomial policy averaging with an exponent γ scaling from 1.0 to 5.0. The research team reports that it achieves competitive performance using more conventional mechanisms than VAD-CFR.
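A small sketch of the reported parameter schedules, assuming linear interpolation and a shared 500-iteration window (only α's window is stated explicitly):

```python
def aod_cfr_schedules(t: int, window: int = 500):
    """Linear parameter schedules reported for AOD-CFR (interpolation assumed)."""
    frac = min(t / window, 1.0)
    alpha = 1.0 + frac * (2.5 - 1.0)   # discount exponent alpha: 1.0 -> 2.5
    beta = 0.5 + frac * (0.0 - 0.5)    # discount exponent beta: 0.5 -> 0.0
    gamma = 1.0 + frac * (5.0 - 1.0)   # policy-averaging exponent: 1.0 -> 5.0
    return alpha, beta, gamma
```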
Discovered Algorithm 2: SHOR-PSRO
The evolved PSRO variant is Smoothed Hybrid Optimistic Regret PSRO (SHOR-PSRO). The search produced a hybrid meta-solver that constructs a meta-strategy by linearly blending two components at every inner solver iteration (a code sketch follows the blend formula below):
- σ_ORM (Optimistic Regret Matching): provides regret-minimization stability. Gains are computed, optionally normalized and diversity-adjusted, then used to update cumulative regrets via regret matching. A momentum term is applied to payoff gains.
- σ_Softmax (Smoothed Best Pure Strategy): a Boltzmann distribution over pure strategies biased toward high-payoff modes. A temperature parameter controls concentration; lower temperature means the distribution is more concentrated on the best pure strategy.
σ_hybrid = (1 − λ) · σ_ORM + λ · σ_Softmax
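One inner-solver step of the blend might look like the following. This is a simplified reading that omits the momentum term, normalization, and diversity adjustment described above.

```python
import numpy as np

def shor_hybrid_step(payoffs: np.ndarray, cum_regret: np.ndarray,
                     lam: float, temperature: float) -> np.ndarray:
    """One inner-solver step of the hybrid meta-strategy (simplified reading).

    `payoffs` holds each pure strategy's meta-game payoff against the
    current mixture; both inputs are float arrays of equal length.
    """
    n = len(payoffs)
    # sigma_ORM: regret matching over accumulated meta-game gains.
    positive = np.maximum(cum_regret, 0.0)
    total = positive.sum()
    sigma_orm = positive / total if total > 0 else np.full(n, 1.0 / n)
    # sigma_Softmax: Boltzmann distribution concentrated on high-payoff pure
    # strategies; lower temperature concentrates mass on the best one.
    logits = payoffs / temperature
    logits = logits - logits.max()          # numerical stability
    sigma_softmax = np.exp(logits) / np.exp(logits).sum()
    # Linear blend; lambda anneals across outer PSRO iterations during training.
    return (1.0 - lam) * sigma_orm + lam * sigma_softmax
```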
The training-time solver uses a dynamic annealing schedule over the outer PSRO iterations. The mixing factor λ anneals from 0.3 to 0.05 (shifting from greedy exploitation toward equilibrium finding), the diversity bonus decays from 0.05 to 0.001 (enabling early population exploration, then late-stage refinement), and the softmax temperature drops from 0.5 to 0.01. The number of inner solver iterations also scales with population size. The training solver returns the time-averaged strategy across inner iterations for stability.
The evaluation-time solver uses fixed parameters: λ = 0.01, diversity bonus = 0.0, temperature = 0.001. It runs more inner iterations (base 8000, scaling with population size) and returns the last-iterate strategy rather than the average, for a reactive, low-noise exploitability estimate. This training/evaluation asymmetry was itself a product of the search, not a human design choice.
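The two configurations side by side, as an illustrative sketch; the annealing is shown as linear in outer-iteration progress, which is an assumption.

```python
def train_solver_params(outer_iter: int, total_outer: int = 100) -> dict:
    # Training-time schedule: anneal from greedy exploitation toward
    # equilibrium finding over the outer PSRO iterations (linear form assumed).
    p = outer_iter / total_outer
    return {
        "lam": 0.3 + p * (0.05 - 0.3),
        "diversity_bonus": 0.05 + p * (0.001 - 0.05),
        "temperature": 0.5 + p * (0.01 - 0.5),
        "return_average": True,   # time-averaged strategy over inner iterations
    }

# Evaluation-time solver: fixed, near-greedy parameters, last-iterate return.
EVAL_SOLVER_PARAMS = {
    "lam": 0.01,
    "diversity_bonus": 0.0,
    "temperature": 0.001,
    "inner_iterations_base": 8000,  # scales with population size
    "return_average": False,        # last-iterate strategy, not the average
}
```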
SHOR-PSRO is benchmarked against Uniform, Nash (via linear programming for 2-player games), AlphaRank, Projected Replicator Dynamics (PRD), and Regret Matching (RM), using K = 100 PSRO iterations. On the full 11-game evaluation, SHOR-PSRO matches or surpasses state-of-the-art performance in 8 of the 11 games.
Experimental Setup
The evaluation protocol separates training and test games to assess generalization. The training set for both the CFR and PSRO experiments consists of 3-player Kuhn Poker, 2-player Leduc Poker, 4-card Goofspiel, and 5-sided Liars Dice. The test set used in the main body of the paper consists of 4-player Kuhn Poker, 3-player Leduc Poker, 5-card Goofspiel, and 6-sided Liars Dice: larger and more complex variants not seen during evolution. A full sweep across 11 games is included in the appendix. Algorithms are frozen after training-phase discovery, before test evaluation begins.
Key Takeaways
- AlphaEvolve automates algorithm design: instead of tuning hyperparameters, it evolves the actual Python source code of MARL algorithms using Gemini 2.5 Pro as the mutation operator, discovering entirely new update rules rather than variations of existing ones.
- VAD-CFR replaces static discounting with volatility awareness: it tracks instantaneous regret magnitude via an EWMA and adjusts its discount factors dynamically, and it delays policy averaging entirely until iteration 500, a threshold the LLM found without being told the evaluation horizon was 1000 iterations.
- SHOR-PSRO automates the exploration-to-exploitation transition: by annealing a mixing factor between Optimistic Regret Matching and a softmax best-pure-strategy component over training, it removes the need to manually tune when a PSRO meta-solver should shift from population diversity to equilibrium refinement.
- Generalization is tested, not assumed: both algorithms are evolved on one set of four games and evaluated on a separate set of larger, unseen games. VAD-CFR holds up in 10 of 11 games and SHOR-PSRO in 8 of 11, with no re-tuning between training and test.
- The discovered mechanisms are non-intuitive by design: choices like a hard warm-start at iteration 500, asymmetric boosting of positive regrets by exactly 1.1, and separate training/evaluation solver configurations are not the kind of decisions human researchers typically arrive at, which is this research's core argument for automated search over this design space.
Check out the Paper.
