
🎥 RISE-Video: Can Video Generators Decode Implicit World Rules?

Introduction

We present RISE-Video, a reasoning-oriented benchmark for Text-Image-to-Video (TI2V) synthesis that shifts evaluation from surface-level aesthetics to deep cognitive reasoning.

RISE-Video comprises 467 meticulously human-annotated samples spanning eight rigorous categories (Commonsense Knowledge, Subject Knowledge, Perceptual Knowledge, Societal Knowledge, Logical Capability, Experiential Knowledge, Spatial Knowledge, and Temporal Knowledge), providing a structured testbed for probing model intelligence across diverse dimensions.

Our framework introduces a multi-dimensional evaluation protocol consisting of four metrics: Reasoning Alignment, Temporal Consistency, Physical Rationality, and Visual Quality. To further support scalable evaluation, we propose an automated pipeline leveraging Large Multimodal Models (LMMs) to emulate human-centric assessment.
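For intuition only, the four per-metric scores can be combined into a single number. The sketch below assumes each metric yields a score in [0, 1] and uses an unweighted mean; both the example values and the aggregation rule are assumptions made for illustration, not the protocol defined in the paper:

```python
# Hypothetical per-metric scores in [0, 1] for one generated video.
# The metric names come from the protocol above; the numeric values and
# the unweighted-mean aggregation are assumptions for this sketch.
scores = {
    "reasoning_alignment": 0.72,
    "temporal_consistency": 0.85,
    "physical_rationality": 0.64,
    "visual_quality": 0.90,
}

# Unweighted mean over the four dimensions.
overall = sum(scores.values()) / len(scores)
print(f"overall: {overall:.4f}")  # prints "overall: 0.7775"
```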


Figure: Specialized evaluation pipeline.

💪 Usage

This repository provides the input data for RISE-Video, including the first-frame images and text prompts of the 467 test cases used in the benchmark. All metadata is centralized in a single JSON file with the following fields:

  • image_path: Path to the first-frame image.
  • sub_task: The category of the sample (e.g., Subject Knowledge).
  • text: The corresponding text prompt that guides video generation.
  • extra_frame: Manually designed frame sampling strategy for Reasoning Alignment evaluation.
  • questions: A set of manually designed reasoning-related questions for Reasoning Alignment evaluation.
  • reverse: Marks whether the case uses reversed temporal order during Physical Rationality evaluation.
  • task_id: The unique ID of each sample.
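As a concrete illustration, a single metadata record might look like the sketch below. Only the field names come from the list above; every value (and the record itself) is invented for this example:

```python
import json

# A hypothetical metadata record following the field list above.
# All values are illustrative; real entries live in the benchmark's
# JSON metadata file.
record_json = """
{
  "task_id": "example_001",
  "image_path": "images/example_001.png",
  "sub_task": "Subject Knowledge",
  "text": "The candle is blown out and smoke rises from the wick.",
  "extra_frame": [0, 8, 16, 24],
  "questions": ["Does smoke appear only after the flame disappears?"],
  "reverse": false
}
"""

record = json.loads(record_json)

# e.g. route the sample by category when computing per-category scores
assert record["sub_task"] == "Subject Knowledge"
print(record["task_id"], "->", record["sub_task"])
```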

🎓 Citation

@misc{liu2026risevideovideogeneratorsdecode,
      title={RISE-Video: Can Video Generators Decode Implicit World Rules?}, 
      author={Mingxin Liu and Shuran Ma and Shibei Meng and Xiangyu Zhao and Zicheng Zhang and Shaofeng Zhang and Zhihang Zhong and Peixian Chen and Haoyu Cao and Xing Sun and Haodong Duan and Xue Yang},
      year={2026},
      eprint={2602.05986},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.05986}, 
}