Can LLM Reward Models Be Trusted? Master-RM Exposes and Fixes Their Weaknesses
Generative reward models, where large language models (LLMs) serve as evaluators, are gaining prominence in reinforcement learning with verifiable rewards (RLVR). These models are preferred over rule-based systems for tasks involving open-ended or complex responses. Instead of relying on strict matching rules, the LLM judge compares a candidate response to a reference answer and emits a binary correctness judgment. However,…
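The judging loop described above can be sketched as follows. This is a minimal illustration, not any particular system's implementation: `call_llm` is a hypothetical stand-in for a real chat-completion API, replaced here by a toy matcher so the example is self-contained, and the prompt wording is an assumption.

```python
# Sketch of an LLM-as-judge binary reward for RLVR.
# `call_llm` is a hypothetical placeholder for a real LLM API call.

JUDGE_PROMPT = (
    "Compare the candidate answer to the reference answer.\n"
    "Reference: {reference}\n"
    "Candidate: {candidate}\n"
    "Reply with exactly YES if they match, otherwise NO."
)

def call_llm(prompt: str) -> str:
    # Placeholder: a production system would query an actual LLM here.
    # This toy judge parses the prompt and does a normalized string match.
    fields = dict(
        line.split(": ", 1) for line in prompt.splitlines() if ": " in line
    )
    same = fields["Reference"].strip().lower() == fields["Candidate"].strip().lower()
    return "YES" if same else "NO"

def binary_reward(candidate: str, reference: str) -> int:
    """Return 1 if the judge deems the candidate correct, else 0."""
    verdict = call_llm(JUDGE_PROMPT.format(reference=reference, candidate=candidate))
    return 1 if verdict.strip().upper().startswith("YES") else 0
```

In a real pipeline the 0/1 output of `binary_reward` would be fed back as the reward signal during policy optimization; the fragility of that verdict to adversarial or superficial responses is exactly the weakness at issue here.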
