BBQ-to-Image: Numeric Bounding Box and Qolor Control in Large-Scale Text-to-Image Models
Abstract
BBQ is a text-to-image model that enables precise numeric control over object attributes through structured-text conditioning without architectural changes.
Text-to-image models have rapidly advanced in realism and controllability, with recent approaches leveraging long, detailed captions to support fine-grained generation. However, a fundamental parametric gap remains: existing models rely on descriptive language, whereas professional workflows require precise numeric control over object location, size, and color. In this work, we introduce BBQ, a large-scale text-to-image model that directly conditions on numeric bounding boxes and RGB triplets within a unified structured-text framework. We obtain precise spatial and chromatic control by training on captions enriched with parametric annotations, without architectural modifications or inference-time optimization. This also enables intuitive user interfaces such as object dragging and color pickers, replacing ambiguous iterative prompting with precise, familiar controls. Across comprehensive evaluations, BBQ achieves strong box alignment and improves RGB color fidelity over state-of-the-art baselines. More broadly, our results support a new paradigm in which user intent is translated into an intermediate structured language, consumed by a flow-based transformer acting as a renderer and naturally accommodating numeric parameters.
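To make the conditioning concrete, here is a minimal sketch of what a parametric, structured caption could look like. The paper does not spell out its exact schema, so the field names and layout below are illustrative assumptions, not BBQ's actual format.

```python
import json

# Hypothetical structured caption: plain text enriched with numeric
# parameters. Schema and field names are assumptions for illustration.
structured_caption = {
    "scene": "a ceramic mug on a wooden desk",
    "objects": [
        {
            "name": "ceramic mug",
            # Normalized bounding box (x_min, y_min, x_max, y_max) in [0, 1].
            "bbox": [0.15, 0.40, 0.45, 0.85],
            # Exact target color as an RGB triplet.
            "rgb": [0, 128, 128],  # teal
        }
    ],
}

# Serialized into the caption, the numbers are consumed by the text encoder
# as ordinary tokens -- consistent with the paper's claim that no
# architectural changes or inference-time optimization are needed.
prompt = json.dumps(structured_caption)
print(prompt)
```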
Community
Fibo BBQ: Bounding Box & Qolor Control in Large-Scale Text-to-Image Models
Text prompts are a terrible UI for precision
It’s much more intuitive to drag objects into place or use a color picker than to write “put it 20% left, make it teal, slightly bigger…”
Fibo BBQ demonstrates that we can train large-scale T2I models with numeric parameters (e.g., positions / boxes / colors) as part of a structured caption.
This is a step toward richer controllability: more “knobs” beyond text, and new UI paradigms that feel like design tools, not prompt engineering.
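As a rough illustration of that UI-to-parameters translation, here is a sketch of how a drag gesture and a color-picker choice might map onto the numeric fields of a structured caption entry like the one above. The helpers and event format are hypothetical, not Fibo BBQ's actual code.

```python
# Hypothetical helpers mapping UI gestures onto numeric caption parameters.

def apply_drag(obj: dict, dx: float, dy: float) -> dict:
    """Translate an object's normalized bbox by (dx, dy), keeping it in frame."""
    x0, y0, x1, y1 = obj["bbox"]
    dx = max(-x0, min(1.0 - x1, dx))  # clamp so the box stays inside [0, 1]
    dy = max(-y0, min(1.0 - y1, dy))
    obj["bbox"] = [x0 + dx, y0 + dy, x1 + dx, y1 + dy]
    return obj

def apply_color_pick(obj: dict, rgb: tuple) -> dict:
    """Replace the object's RGB triplet with the color-picker selection."""
    obj["rgb"] = list(rgb)
    return obj

mug = {"name": "ceramic mug", "bbox": [0.15, 0.40, 0.45, 0.85], "rgb": [200, 30, 30]}
apply_drag(mug, dx=-0.20, dy=0.0)      # "put it 20% left" as a drag gesture
apply_color_pick(mug, (0, 128, 128))   # "make it teal" as a picker choice
print(mug)
```

The updated entry is then re-serialized into the caption and the image is regenerated, so the user never writes or edits the numbers by hand.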
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Spatial Chain-of-Thought: Bridging Understanding and Generation Models for Spatial Reasoning Generation (2026)
- VENUS: Visual Editing with Noise Inversion Using Scene Graphs (2026)
- PokeFusion Attention: Enhancing Reference-Free Style-Conditioned Generation (2026)
- PLACID: Identity-Preserving Multi-Object Compositing via Video Diffusion with Synthetic Trajectories (2026)
- Controllable Layered Image Generation for Real-World Editing (2026)
- DEIG: Detail-Enhanced Instance Generation with Fine-Grained Semantic Control (2026)
- TherA: Thermal-Aware Visual-Language Prompting for Controllable RGB-to-Thermal Infrared Translation (2026)