Demo-ICL: In-Context Learning for Procedural Video Knowledge Acquisition
Abstract
Researchers introduce a new video understanding task and benchmark that evaluate models' ability to learn from few-shot demonstrations, along with a specialized MLLM trained with a two-stage approach combining video-supervised fine-tuning and preference optimization.
Despite the growing video understanding capabilities of recent Multimodal Large Language Models (MLLMs), existing video benchmarks primarily assess understanding based on models' static, internal knowledge rather than their ability to learn and adapt to dynamic, novel contexts from a few examples. To bridge this gap, we present Demo-driven Video In-Context Learning, a novel task focused on learning from in-context demonstrations to answer questions about target videos. Alongside this, we propose Demo-ICL-Bench, a challenging benchmark designed to evaluate demo-driven video in-context learning capabilities. Demo-ICL-Bench is constructed from 1,200 instructional YouTube videos with associated questions, from which two types of demonstrations are derived: (i) summarized video subtitles as text demonstrations and (ii) the corresponding instructional videos as video demonstrations. To effectively tackle this new challenge, we develop Demo-ICL, an MLLM with a two-stage training strategy, video-supervised fine-tuning and information-assisted direct preference optimization, which jointly enhance the model's ability to learn from in-context examples. Extensive experiments with state-of-the-art MLLMs confirm the difficulty of Demo-ICL-Bench, demonstrate the effectiveness of Demo-ICL, and point to future research directions.
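The abstract names direct preference optimization (DPO) as the second training stage but does not spell out the "information-assisted" variant. As a point of reference, below is a minimal PyTorch sketch of the standard DPO objective that such a stage would build on; the function name, tensor names, and the toy usage at the bottom are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the standard DPO loss (Rafailov et al.), assuming
# per-sequence log-probabilities have already been computed for each
# preference pair under the trainable policy and a frozen reference model.
# The paper's "information-assisted" modification is not specified in the
# abstract, so only the vanilla preference loss is shown here.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument is a 1-D tensor of summed token log-probs for the
    preferred (chosen) or dispreferred (rejected) response."""
    # Log-ratios of policy vs. reference for each response.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Scaled reward margin; training pushes the chosen response to be
    # ranked above the rejected one relative to the reference model.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()

if __name__ == "__main__":
    # Toy check with random log-probabilities for a batch of 4 pairs.
    torch.manual_seed(0)
    lp = lambda: torch.randn(4)
    print(dpo_loss(lp(), lp(), lp(), lp()).item())
```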
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- LiViBench: An Omnimodal Benchmark for Interactive Livestream Video Understanding (2026)
- LongVPO: From Anchored Cues to Self-Reasoning for Long-Form Video Preference Optimization (2026)
- VideoThinker: Building Agentic VideoLLMs with LLM-Guided Tool Reasoning (2026)
- Structured Over Scale: Learning Spatial Reasoning from Educational Video (2026)
- JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation (2025)
- Streaming Video Instruction Tuning (2025)
- A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos (2025)