
PokeeResearch-7B: An Open 7B Deep-Research Agent Trained with Reinforcement Learning from AI Feedback (RLAIF) and a Robust Reasoning Scaffold

Pokee AI has open-sourced PokeeResearch-7B, a 7B-parameter deep research agent that executes full research loops: it decomposes a question, issues search and read calls, verifies candidate answers, then synthesizes multiple research threads into a final response.

The agent runs a research-and-verification loop. In research, it calls external tools for web search and page reading, or proposes an interim answer. In verification, it checks the answer against retrieved evidence and either accepts it or restarts research. This structure reduces brittle trajectories and catches obvious errors before finalization. The research team formalizes this loop and adds a test-time synthesis stage that merges multiple independent research threads.
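The loop above can be sketched as follows. This is a minimal illustration, not the released implementation: `propose`, the tool functions, and `verify` stand in for model-driven components in the actual agent.

```python
def run_agent(question, propose, tools, verify, max_turns=100):
    """One research thread: alternate tool calls and interim answers
    until an answer passes verification or the turn budget runs out.

    propose(context) -> ("tool", (name, args)) or ("answer", text)
    verify(question, answer, context) -> bool
    """
    context = [("question", question)]
    for _ in range(max_turns):
        kind, payload = propose(context)              # research phase
        if kind == "tool":
            name, args = payload
            if name not in tools:                     # self-correction:
                context.append(("error", f"unknown tool {name}"))
                continue                              # malformed call, retry
            context.append(("observation", tools[name](args)))
        else:                                         # interim answer
            if verify(question, payload, context):    # verification phase
                return payload                        # accepted
            context.append(("rejected", payload))     # back to research
    return None
```

The key design point is that an answer is never final until the verification step has checked it against the evidence accumulated in `context`.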

Training recipe: RLAIF with RLOO

PokeeResearch-7B is fine-tuned from Qwen2.5-7B-Instruct using annotation-free Reinforcement Learning from AI Feedback (RLAIF) with the REINFORCE Leave-One-Out (RLOO) algorithm. The reward targets semantic correctness, citation faithfulness, and instruction adherence, not token overlap. The model's Hugging Face card lists batch size 64, 8 research threads per prompt during RL, learning rate 3e-6, 140 steps, a 32,768-token context, bf16 precision, and a checkpoint near 13 GB. The research team emphasizes that RLOO provides an unbiased on-policy gradient and contrasts it with the PPO family, which is approximately on-policy and biased.
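The leave-one-out baseline that makes the RLOO gradient estimate unbiased is simple to compute. The sketch below shows the generic estimator, not Pokee AI's training code: for each of the k threads sampled per prompt (k = 8 in the reported setup), the advantage is that thread's reward minus the mean reward of the other k − 1 threads.

```python
def rloo_advantages(rewards):
    """REINFORCE Leave-One-Out: each thread's baseline is the mean
    reward of the other k-1 threads sampled for the same prompt."""
    k = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (k - 1) for r in rewards]

# Example: 4 research threads, two judged correct (reward 1) and two not.
advantages = rloo_advantages([1.0, 0.0, 1.0, 0.0])
```

By construction the advantages for a prompt sum to zero, so threads that beat their siblings are reinforced and the rest are suppressed, without a learned value function.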

https://arxiv.org/pdf/2510.15862

Reasoning scaffold and Research Threads Synthesis

The scaffold includes three mechanisms. Self-correction: the agent detects malformed tool calls and retries. Self-verification: the agent inspects its own answer against the evidence. Research Threads Synthesis (RTS): the agent runs multiple independent threads per question, summarizes them, then synthesizes a final answer. The research team reports that synthesis improves accuracy on difficult benchmarks.
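The RTS stage has a straightforward shape, sketched below under stated assumptions: `run_thread`, `summarize`, and `synthesize` are placeholders for the model calls the actual agent makes, and the thread count of 4 mirrors the evaluation setup.

```python
def research_threads_synthesis(question, run_thread, summarize,
                               synthesize, n_threads=4):
    """Run independent research threads, summarize each, then merge
    the summaries into one final answer."""
    threads = [run_thread(question) for _ in range(n_threads)]
    summaries = [summarize(thread) for thread in threads]
    return synthesize(question, summaries)
```

Because the threads are independent, they can disagree; the synthesis call is what reconciles conflicting evidence into a single answer at test time.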


Evaluation protocol

The research team evaluates text-only questions from 10 benchmarks: NQ, TriviaQA, PopQA, HotpotQA, 2WikiMultiHopQA, MuSiQue, Bamboogle, GAIA, BrowseComp, and Humanity's Last Exam (HLE). They sample 125 questions per dataset, except GAIA with 103, for a total of 1,228 questions. For each question, they run 4 research threads, then compute mean accuracy (mean@4), using Gemini-2.5-Flash-Lite to judge correctness. The maximum number of interaction turns is set to 100.
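The mean@4 metric is just the average correctness over the 4 threads per question, averaged again over the dataset. A minimal sketch, where each judgment is a 1 or 0 as emitted by the judge model:

```python
def mean_at_k(judgments):
    """Average correctness of k independent threads on one question,
    where each judgment is 1 (judged correct) or 0 (judged incorrect)."""
    return sum(judgments) / len(judgments)

def benchmark_accuracy(per_question_judgments):
    """Dataset-level score: the average per-question mean@k."""
    scores = [mean_at_k(j) for j in per_question_judgments]
    return sum(scores) / len(scores)
```

Unlike pass@k, which credits a question if any thread succeeds, mean@k rewards consistency: every failed thread lowers the score.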

https://github.com/Pokee-AI/PokeeResearchOSS

Results at 7B scale

PokeeResearch-7B reports the best mean@4 accuracy among 7B deep research agents across the ten datasets. On HLE, the model reports 15.2 without RTS and 17.6 with RTS. On GAIA, it reports 36.9 without RTS and 41.3 with RTS. On BrowseComp, it reports 5.4 without RTS and 8.4 with RTS. On the seven QA benchmarks (Bamboogle, 2WikiMultiHopQA, TriviaQA, NQ, PopQA, MuSiQue, HotpotQA), the model improves over recent 7B baselines. Gains from RTS are largest on HLE, GAIA, and BrowseComp, and smaller on the QA sets.

Key Takeaways

  1. Training: PokeeResearch-7B fine-tunes Qwen2.5-7B-Instruct with RLAIF using the RLOO estimator, optimizing rewards for factual accuracy, citation faithfulness, and instruction adherence, not token overlap.
  2. Scaffold: The agent runs a research-and-verification loop with Research Threads Synthesis, executing multiple independent threads, then synthesizing evidence into a final answer.
  3. Evaluation protocol: Benchmarks span 10 datasets with 125 questions each, except GAIA with 103; 4 threads per question; mean@4 accuracy judged by Gemini-2.5-Flash-Lite, with a 100-turn cap.
  4. Results and release: PokeeResearch-7B reports state-of-the-art results among 7B deep research agents, for example HLE 17.6 with RTS, GAIA 41.3 with RTS, and BrowseComp 8.4 with RTS, and is released under Apache-2.0 with code and weights public.

Editorial Comments

PokeeResearch-7B is a useful step for practical deep research agents. It aligns training with RLAIF using RLOO, so the objective targets semantic correctness, citation faithfulness, and instruction adherence. The reasoning scaffold includes self-verification and Research Threads Synthesis, which improves results on difficult benchmarks. The evaluation uses mean@4 with Gemini-2.5-Flash-Lite as the judge, across 10 datasets. The release ships Apache-2.0 code and weights with a clean tool stack using Serper and Jina. The setup runs on a single A100 80 GB and scales.


Check out the Paper, Model on HF and GitHub Repo.

The post PokeeResearch-7B: An Open 7B Deep-Research Agent Trained with Reinforcement Learning from AI Feedback (RLAIF) and a Robust Reasoning Scaffold appeared first on MarkTechPost.
