HyperAlign: Hypernetwork for Efficient Test-Time Alignment of Diffusion Models
Abstract
HyperAlign enhances diffusion model output quality by using a hypernetwork to dynamically adjust denoising trajectories based on input conditions and rewards, improving semantic consistency and visual appeal.
Diffusion models achieve state-of-the-art performance but often fail to generate outputs that align with human preferences and intentions, resulting in images with poor aesthetic quality and semantic inconsistencies. Existing alignment methods present a difficult trade-off: fine-tuning approaches suffer from diversity loss due to reward over-optimization, while test-time scaling methods introduce significant computational overhead and tend to under-optimize. To address these limitations, we propose HyperAlign, a novel framework that trains a hypernetwork for efficient and effective test-time alignment. Instead of modifying latent states, HyperAlign dynamically generates low-rank adaptation weights that modulate the diffusion model's generation operators, allowing the denoising trajectory to be adaptively adjusted based on the input latents, timesteps, and prompts for reward-conditioned alignment. We introduce multiple variants of HyperAlign that differ in how frequently the hypernetwork is applied, balancing performance and efficiency. Furthermore, we optimize the hypernetwork using a reward-score objective regularized with preference data to reduce reward hacking. We evaluate HyperAlign on multiple generative paradigms, including Stable Diffusion and FLUX, where it significantly outperforms existing fine-tuning and test-time scaling baselines in enhancing semantic consistency and visual appeal.
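To make the mechanism described in the abstract concrete, below is a minimal PyTorch sketch of the core idea: a small hypernetwork maps a conditioning summary (e.g., timestep and prompt embeddings) to per-sample low-rank (LoRA-style) weight factors, which then modulate a frozen linear operator of the denoiser. All module names, dimensions, and the conditioning layout here are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only -- not the official HyperAlign code.
# Assumes hypothetical dimensions and a pre-pooled conditioning vector
# summarizing the latent, timestep, and prompt.
import torch
import torch.nn as nn


class LoRAHyperNetwork(nn.Module):
    """Maps a conditioning vector to low-rank weight deltas (A, B)."""

    def __init__(self, cond_dim: int, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        self.rank = rank
        self.in_features = in_features
        self.out_features = out_features
        # Small MLP that emits the flattened A and B factors of a LoRA update.
        self.net = nn.Sequential(
            nn.Linear(cond_dim, 256),
            nn.SiLU(),
            nn.Linear(256, rank * (in_features + out_features)),
        )

    def forward(self, cond: torch.Tensor):
        # cond: (batch, cond_dim) summary of latent/timestep/prompt conditions.
        flat = self.net(cond)
        a, b = flat.split(
            [self.rank * self.in_features, self.rank * self.out_features], dim=-1
        )
        A = a.view(-1, self.rank, self.in_features)   # (batch, r, d_in)
        B = b.view(-1, self.out_features, self.rank)  # (batch, d_out, r)
        return A, B


class HyperModulatedLinear(nn.Module):
    """Frozen base linear layer whose output is shifted by a per-sample LoRA delta."""

    def __init__(self, base: nn.Linear, scale: float = 1.0):
        super().__init__()
        self.base = base.requires_grad_(False)  # keep the pretrained operator frozen
        self.scale = scale

    def forward(self, x: torch.Tensor, A: torch.Tensor, B: torch.Tensor):
        # x: (batch, tokens, d_in); delta = (x A^T) B^T, added to the frozen output.
        delta = torch.einsum("btd,brd->btr", x, A)
        delta = torch.einsum("btr,bor->bto", delta, B)
        return self.base(x) + self.scale * delta


# Usage with assumed shapes: one projection layer of the denoiser.
base = nn.Linear(320, 320)
layer = HyperModulatedLinear(base)
hyper = LoRAHyperNetwork(cond_dim=768, in_features=320, out_features=320)
cond = torch.randn(2, 768)              # pooled prompt/timestep embedding (assumed)
A, B = hyper(cond)
out = layer(torch.randn(2, 77, 320), A, B)
```

In this reading, only the hypernetwork is trained against the reward objective; the diffusion backbone stays frozen, and the per-step weight deltas are what steer the denoising trajectory at test time.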
Community
The following similar papers were recommended by the Semantic Scholar API (via Librarian Bot):
- Direct Diffusion Score Preference Optimization via Stepwise Contrastive Policy-Pair Supervision (2025)
- Unified Control for Inference-Time Guidance of Denoising Diffusion Models (2025)
- Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning (2025)
- Diffusion-DRF: Differentiable Reward Flow for Video Diffusion Fine-Tuning (2026)
- ReDiF: Reinforced Distillation for Few Step Diffusion (2025)
- Data-regularized Reinforcement Learning for Diffusion Models at Scale (2025)
- Beyond Binary Preference: Aligning Diffusion Models to Fine-grained Criteria by Decoupling Attributes (2026)