DenseGRPO: From Sparse to Dense Reward for Flow Matching Model Alignment
Abstract
DenseGRPO addresses sparse reward problems in flow matching models by introducing dense rewards for intermediate denoising steps and adaptive exploration calibration.
Recent GRPO-based approaches built on flow matching models have shown remarkable improvements in human preference alignment for text-to-image generation. Nevertheless, they still suffer from the sparse reward problem: the terminal reward of the entire denoising trajectory is applied to all intermediate steps, resulting in a mismatch between the global feedback signal and the fine-grained contributions of individual denoising steps. To address this issue, we introduce DenseGRPO, a novel framework that aligns human preference with dense rewards evaluating the fine-grained contribution of each denoising step. Specifically, our approach includes two key components: (1) we propose the step-wise reward gain as the dense reward of each denoising step, computed by applying a reward model to the intermediate clean images predicted via an ODE-based approach. This aligns the feedback signal with the contribution of each individual step, facilitating effective training; and (2) based on the estimated dense rewards, we reveal a mismatch between the uniform exploration setting of existing GRPO-based methods and the time-varying noise intensity, which leads to an inappropriate exploration space. We therefore propose a reward-aware scheme that calibrates the exploration space by adaptively adjusting timestep-specific stochasticity injection in the SDE sampler, ensuring a suitable exploration space at all timesteps. Extensive experiments on multiple standard benchmarks demonstrate the effectiveness of the proposed DenseGRPO and highlight the critical role of valid dense rewards in flow matching model alignment.
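To make the dense-reward idea concrete, below is a minimal sketch (not the authors' released code) of how step-wise reward gains could be computed. It assumes a rectified-flow parameterization where x_t = (1 - t) * x0 + t * noise and the network predicts the velocity v = noise - x0, so a one-step ODE-based clean-image estimate is x0_hat = x_t - t * v; `model`, `reward_model`, `traj`, and `cond` are hypothetical placeholders.

```python
import torch

def predict_clean_image(model, x_t, t, cond):
    """One-step ODE-based estimate of the clean image from an intermediate state.

    Assumes the rectified-flow convention x_t = (1 - t) * x0 + t * noise with the
    network predicting v = noise - x0, so x0_hat = x_t - t * v.
    """
    v = model(x_t, t, cond)   # predicted velocity at noise level t
    return x_t - t * v        # clean-image estimate under the assumed parameterization

@torch.no_grad()
def dense_rewards(model, reward_model, traj, timesteps, cond, prompt):
    """Dense reward of step k = reward gain between consecutive x0_hat estimates.

    traj[k] is the latent before denoising step k and timesteps[k] its noise level;
    reward_model is a placeholder scoring function (decoding to pixels omitted).
    """
    scores = []
    for x_t, t in zip(traj, timesteps):
        x0_hat = predict_clean_image(model, x_t, t, cond)
        scores.append(reward_model(x0_hat, prompt))
    # Step-wise gain: how much each denoising step improved the predicted outcome.
    return [scores[k + 1] - scores[k] for k in range(len(scores) - 1)]
```

These per-step gains would then replace the single terminal reward when forming GRPO advantages; the paper's reward-aware calibration of the SDE sampler's stochasticity is not sketched here, since its exact rule is not specified in the abstract.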
Community
A dense reward for RL in flow matching models.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Anchoring Values in Temporal and Group Dimensions for Flow Matching Model Alignment (2025)
- TAGRPO: Boosting GRPO on Image-to-Video Generation with Direct Trajectory Alignment (2026)
- TreeGRPO: Tree-Advantage GRPO for Online RL Post-Training of Diffusion Models (2025)
- FlowSE-GRPO: Training Flow Matching Speech Enhancement via Online Reinforcement Learning (2026)
- E-GRPO: High Entropy Steps Drive Effective Reinforcement Learning for Flow Models (2026)
- SuperFlow: Training Flow Matching Models with RL on the Fly (2025)
- HyperAlign: Hypernetwork for Efficient Test-Time Alignment of Diffusion Models (2026)