Apriel-Reasoner: RL Post-Training for General-Purpose and Efficient Reasoning
Abstract
Apriel-Reasoner is a 15B-parameter language model trained with reproducible multi-domain reinforcement learning to improve reasoning efficiency and accuracy across diverse tasks while reducing inference costs.
Building general-purpose reasoning models using reinforcement learning with verifiable rewards (RLVR) across diverse domains has become standard practice among frontier open-weight models, yet the underlying training recipes and domain mixtures are often not disclosed. Joint optimization across domains poses significant challenges: domains vary widely in rollout length, problem difficulty, and sample efficiency. Moreover, long chain-of-thought traces increase inference cost and latency, making efficiency critical for practical deployment. We present Apriel-Reasoner, trained with a fully reproducible multi-domain RL post-training recipe on Apriel-Base, a 15B-parameter open-weight LLM, using public datasets across five domains: mathematics, code generation, instruction following, logical puzzles, and function calling. We introduce an adaptive domain sampling mechanism that preserves target domain ratios despite heterogeneous rollout dynamics, and a difficulty-aware extension of the standard length penalty that, with no additional training overhead, encourages longer reasoning on difficult problems and shorter traces on easy ones. Trained under a strict 16K-token output budget, Apriel-Reasoner generalizes to 32K tokens at inference and improves over Apriel-Base on AIME 2025, GPQA, MMLU-Pro, and LiveCodeBench while producing 30-50% shorter reasoning traces. It matches strong open-weight models of similar size at a lower token cost, thereby pushing the Pareto frontier of accuracy versus token budget.
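The abstract names two mechanisms without spelling them out, so the two Python sketches below are assumptions about one plausible implementation, not the paper's actual recipe. The first illustrates deficit-based adaptive domain sampling: since domains consume rollouts at different rates, always drawing the next problem from the domain furthest below its target share keeps the realized mixture near the target ratios.

```python
def pick_next_domain(target_ratios: dict[str, float],
                     consumed: dict[str, int]) -> str:
    """Draw the next problem from the domain furthest below its target share.

    Because domains differ in rollout length and filtering rates, a fixed
    schedule lets the realized mix drift from the target; re-picking by
    deficit continuously corrects that drift. (Illustrative sketch only.)
    """
    total = sum(consumed.values()) or 1  # avoid division by zero at start
    deficits = {d: target_ratios[d] - consumed[d] / total
                for d in target_ratios}
    return max(deficits, key=deficits.get)

# Hypothetical target mixture over the paper's five domains.
targets = {"math": 0.4, "code": 0.3, "instr": 0.1, "logic": 0.1, "func": 0.1}
counts = {d: 0 for d in targets}
for _ in range(100):
    domain = pick_next_domain(targets, counts)
    counts[domain] += 1  # in training: += problems actually consumed
```

The second sketch shows one way a length penalty can be made difficulty-aware with no extra training overhead: the rollout group's pass rate, already computed for the RLVR reward, doubles as a free difficulty estimate that scales the penalty, so easy problems (high pass rate) are pushed toward short traces while hard ones keep room to reason. The specific shaping here (min-max normalized length, penalty applied to correct answers only, hyperparameter `alpha`) is assumed for illustration.

```python
import numpy as np

def difficulty_aware_length_penalty(rewards: np.ndarray,
                                    lengths: np.ndarray,
                                    alpha: float = 0.2) -> np.ndarray:
    """Scale a group-relative length penalty by estimated problem difficulty.

    `rewards` are binary correctness scores for one problem's rollout group;
    `lengths` are the corresponding trace lengths in tokens. Difficulty is
    proxied by the group's pass rate, which RLVR training already computes,
    so no additional overhead is incurred. (Illustrative sketch only.)
    """
    pass_rate = rewards.mean()                # high pass rate => easy problem
    lo, hi = lengths.min(), lengths.max()
    if hi == lo:
        return rewards                        # all traces equally long
    rel_len = (lengths - lo) / (hi - lo)      # 0 = shortest, 1 = longest
    penalty = alpha * pass_rate * rel_len     # easy: full penalty; hard: damped
    # Penalize only correct answers, so wrong ones aren't pushed shorter.
    return np.where(rewards > 0, rewards - penalty, rewards)
```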
Community
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- Rethinking Easy-to-Hard: Limits of Curriculum Learning in Post-Training for Deductive Reasoning (2026)
- CHIMERA: Compact Synthetic Data for Generalizable LLM Reasoning (2026)
- Batched Contextual Reinforcement: A Task-Scaling Law for Efficient Reasoning (2026)
- Training Large Reasoning Models Efficiently via Progressive Thought Encoding (2026)
- Stepwise Penalization for Length-Efficient Chain-of-Thought Reasoning (2026)
- Reinforcement-aware Knowledge Distillation for LLM Reasoning (2026)
- DeReason: A Difficulty-Aware Curriculum Improves Decoupled SFT-then-RL Training for General Reasoning (2026)