 
Open Access

Aletheia: What Makes RLVR For Code Verifiers Tick?


Date

2026-01-16

Abstract

Multi-domain thinking verifiers trained via Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline, owing to their ability to robustly rate and rerank model outputs. However, the adoption of such verifiers for code generation has been comparatively sparse, with execution feedback constituting the dominant signal. Nonetheless, code verifiers remain valuable for judging model outputs in scenarios where execution feedback is hard to obtain, and they are a potentially powerful addition to the code generation post-training toolbox. To this end, we create and open-source Aletheia, a controlled testbed that enables execution-grounded evaluation of code verifiers' robustness across disparate policy models and covariate shifts. We examine the components of the RLVR-based verifier training recipe widely credited for its success: (1) intermediate thinking traces, (2) learning from negative samples, and (3) on-policy training. While our experiments confirm that RLVR is the strongest recipe overall, we uncover important opportunities to simplify it. In particular, although code verification is amenable to both training- and inference-time scaling, on-policy learning stands out as the key component at smaller verifier sizes, while thinking-based training emerges as the most important component at larger scales.
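
For context, "verifiable reward" here means the verifier is rewarded only when its verdict agrees with an objective signal. A minimal sketch of such a reward, assuming a binary correct/incorrect verdict and unit-test execution as the ground truth (the function names and interface below are illustrative, not taken from the paper):

import subprocess
import sys
import tempfile
from pathlib import Path

def execution_outcome(candidate_code: str, test_code: str, timeout_s: float = 10.0) -> bool:
    """Run a candidate solution against its unit tests in a subprocess.
    Returns True if the tests pass, False on any failure, error, or timeout."""
    with tempfile.TemporaryDirectory() as tmp:
        script = Path(tmp) / "run.py"
        script.write_text(candidate_code + "\n\n" + test_code)
        try:
            proc = subprocess.run(
                [sys.executable, str(script)],
                capture_output=True,
                timeout=timeout_s,
            )
            return proc.returncode == 0
        except subprocess.TimeoutExpired:
            return False

def verifiable_reward(verifier_verdict: bool, candidate_code: str, test_code: str) -> float:
    """Reward 1.0 when the verifier's verdict matches the execution outcome, else 0.0."""
    return 1.0 if verifier_verdict == execution_outcome(candidate_code, test_code) else 0.0

if __name__ == "__main__":
    candidate = "def add(a, b):\n    return a + b"
    tests = "assert add(2, 3) == 5"
    # A (hypothetical) verifier that judged this candidate as correct earns reward 1.0.
    print(verifiable_reward(True, candidate, tests))
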

License

Except where otherwise noted, this work is licensed under CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International).