Aletheia: What Makes RLVR For Code Verifiers Tick?
Date
2026-01-16
Abstract
Multi-domain thinking verifiers trained with Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline, owing to their ability to robustly rate and rerank model outputs. However, such verifiers have seen comparatively little adoption for code generation, where execution feedback remains the dominant signal. Nonetheless, code verifiers are valuable for judging model outputs in scenarios where execution feedback is hard to obtain, and they are a potentially powerful addition to the code-generation post-training toolbox. To this end, we create and open-source Aletheia, a controlled testbed that enables execution-grounded evaluation of code verifiers' robustness across disparate policy models and covariate shifts. We examine the components of the RLVR-based verifier training recipe widely credited for its success: (1) intermediate thinking traces, (2) learning from negative samples, and (3) on-policy training. While our experiments confirm that RLVR is the strongest recipe overall, we uncover important opportunities to simplify it. In particular, although code verification is amenable to both training- and inference-time scaling, on-policy learning stands out as the key component at smaller verifier sizes, while thinking-based training emerges as the most important component at larger scales.
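
To make the notion of an "execution-grounded" verifiable reward concrete, here is a minimal Python sketch, not the paper's implementation: a verifier reads a candidate solution, emits a thinking trace plus a PASS/FAIL verdict, and is rewarded only when its verdict matches what actually happens when the candidate runs against tests. All names (VerifierOutput, run_candidate, verifier_reward) are hypothetical, and the bare exec() stands in for a proper sandboxed executor.

```python
"""Illustrative sketch of a binary, execution-grounded reward for RLVR-style
verifier training. Assumed, not taken from the paper."""

from dataclasses import dataclass


@dataclass
class VerifierOutput:
    thinking: str   # intermediate reasoning trace produced by the verifier
    verdict: bool   # True = verifier claims the candidate passes the tests


def run_candidate(candidate_code: str, test_code: str) -> bool:
    """Execute candidate + tests in a throwaway namespace; True iff all assertions pass.

    NOTE: exec() is for illustration only; a real pipeline would use a sandboxed
    executor with time and resource limits.
    """
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)
        exec(test_code, namespace)
        return True
    except Exception:
        return False


def verifier_reward(output: VerifierOutput, candidate_code: str, test_code: str) -> float:
    """Verifiable reward: 1.0 if the verdict agrees with ground-truth execution, else 0.0."""
    ground_truth = run_candidate(candidate_code, test_code)
    return 1.0 if output.verdict == ground_truth else 0.0


if __name__ == "__main__":
    # A buggy candidate: a verifier that correctly flags it earns reward 1.0.
    candidate = "def add(a, b):\n    return a - b\n"
    tests = "assert add(2, 3) == 5\n"
    judged = VerifierOutput(thinking="add subtracts instead of adding", verdict=False)
    print(verifier_reward(judged, candidate, tests))  # -> 1.0
```
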
License
Except where otherwise noted, this work is licensed under CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International).
