Abstract
A new Chain of Events paradigm is introduced for video event prediction that improves temporal modeling and logical reasoning in multimodal language models through structured event chains and enhanced training protocols.
Despite advances in applying MLLMs to various video tasks, video event prediction (VEP) remains relatively underexplored. VEP requires a model to perform fine-grained temporal modeling of videos and to establish logical relationships between videos and future events, both of which current MLLMs still struggle with. In this work, we first present a comprehensive evaluation of leading MLLMs on the VEP task, revealing the reasons behind their inaccurate predictions, including a lack of logical reasoning ability for future-event prediction and insufficient utilization of visual information. To address these challenges, we propose the Chain of Events (CoE) paradigm, which constructs temporal event chains to implicitly focus the MLLM on the visual content and on the logical connections between videos and future events, incentivizing the model's reasoning capability through multiple training protocols. Experimental results on public benchmarks demonstrate that our method outperforms both leading open-source and commercial MLLMs, establishing a new state of the art on the VEP task. Code and models will be released soon.
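The abstract does not give the exact format of the temporal event chains, so the following is only a minimal sketch of the general idea: formatting observed video events as an explicit ordered chain and asking the model to extend it to a plausible future event. The function name, template wording, and option layout are all assumptions, not the paper's actual prompt.

```python
# Hypothetical sketch of a chain-of-events style prompt. The template
# wording and structure are assumptions; the paper's actual chain
# construction is not specified in the abstract.

def build_coe_prompt(observed_events, candidate_futures):
    """Format observed video events as an explicit temporal chain and
    ask the model to pick the future event that logically extends it."""
    # Render the observed events as an ordered chain: [1] ev -> [2] ev -> ...
    chain = " -> ".join(f"[{i}] {ev}" for i, ev in enumerate(observed_events, 1))
    # Render candidate futures as lettered options: (A), (B), ...
    options = "\n".join(
        f"({chr(65 + i)}) {opt}" for i, opt in enumerate(candidate_futures)
    )
    return (
        "Observed event chain:\n"
        f"{chain}\n\n"
        "Reason step by step about which future event logically extends "
        "the chain, then answer with one option.\n"
        f"{options}"
    )
```

The point of the explicit chain rendering is that the model must commit to a temporal ordering of what it saw before predicting, rather than option-matching directly against the question.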
Community
An interesting work: Video-CoE: Reinforcing Video Event Prediction via Chain of Events (CVPR26)
The big move here is replacing end-to-end option guessing with a constructed chain of events that ties observed video content to plausible futures. The two-stage setup (CoE SFT for reasoning, then CoE GRPO for grounding) is neat in spirit, but I worry about how much the SFT-generated chain biases the final prediction. An ablation that drops CoE SFT, or swaps in a smaller, less biased chain generator, would help quantify how much chain quality actually drives the VEP gains. The walkthrough on arxivlens at https://arxivlens.com/PaperView/Details/video-coe-reinforcing-video-event-prediction-via-chain-of-events-843-ddd5f9c8 covers the method details nicely. Overall it's a clean move toward grounded temporal reasoning, and I'm curious how it holds up when futures are truly open-ended and the chains must generalize to unseen event types.
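To make the grounding concern concrete: a GRPO-style stage needs a scalar reward per sampled chain, and one natural (entirely hypothetical) choice is to reward chains that cover the observed events in order and terminate at the correct future. None of the names or weights below come from the paper; this is a toy sketch of what "rewarding grounded chains" could mean.

```python
# Hypothetical chain-grounding reward for a GRPO-style trainer.
# The 0.5/0.5 weighting and both reward terms are assumptions;
# the paper's actual reward design is not described in the abstract.

def chain_reward(predicted_chain, observed_events, future_event):
    """Score a predicted event chain by (a) how many observed events it
    covers in order, and (b) whether it ends at the ground-truth future."""
    # (a) Ordered coverage: walk the chain with a single iterator so each
    # observed event must appear *after* the previous match (subsequence test).
    it = iter(predicted_chain)
    covered = sum(1 for ev in observed_events if ev in it)
    coverage = covered / max(len(observed_events), 1)
    # (b) The chain should terminate at the correct future event.
    ends_correctly = 1.0 if predicted_chain and predicted_chain[-1] == future_event else 0.0
    return 0.5 * coverage + 0.5 * ends_correctly
```

A fully grounded chain that ends at the right future scores 1.0, while an unrelated chain scores 0.0, so group-relative advantages in GRPO would push samples toward chains anchored in the observed video, which is exactly the bias-vs-grounding trade-off the ablation above would probe.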
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- GraphThinker: Reinforcing Video Reasoning with Event Graph Thinking (2026)
- VideoTemp-o3: Harmonizing Temporal Grounding and Video Understanding in Agentic Thinking-with-Videos (2026)
- TwiFF (Think With Future Frames): A Large-Scale Dataset for Dynamic Visual Reasoning (2026)
- Clue Matters: Leveraging Latent Visual Clues to Empower Video Reasoning (2026)
- Think with Grounding: Curriculum Reinforced Reasoning with Video Grounding for Long Video Understanding (2026)
- APPO: Attention-guided Perception Policy Optimization for Video Reasoning (2026)
- Learning Transferable Temporal Primitives for Video Reasoning via Synthetic Videos (2026)