| validation_prompt | validation_images | validation_trajectory_maps | output_path | controlnet_weights | seed |
|---|---|---|---|---|---|
| Two cartoon wizards walking towards each other | assets/images/condition/cartoon_wizard.jpg | assets/box_trajectory/cartoon_wizard.mp4 | samples/demo/stage2/cartoon_wizard.mp4 | 1 | 42 |
| A unicorn floatie floating in a pool | assets/images/condition/floatie.jpg | assets/box_trajectory/floatie.mp4 | samples/demo/stage2/floatie.mp4 | 1 | 18 |
| horse moving in the sky with a child on its back | assets/images/condition/child_horse.jpg | assets/box_trajectory/child_horse.mp4 | samples/demo/stage2/child_horse.mp4 | 1 | 8 |
| A priestess lifting a ball to her head | assets/images/condition/priestess.jpg | assets/box_trajectory/priestess.mp4 | samples/demo/stage2/priestess.mp4 | 1 | 42 |
| A red crystal elephant and blue crystal rhino walking on ice | assets/images/condition/mammoth_rhino3.png | assets/mask_trajectory/mammoth_rhino.mp4 | samples/demo/stage1/mammoth_rhino.mp4 | 1 | 42 |
| A royal camel walking inside a palace | assets/images/condition/camel_royal.png | assets/mask_trajectory/camel.mp4 | samples/demo/stage1/camel_royal.mp4 | 1 | 42 |
| A full moon moves across the night sky with a castle and a bridge below. | assets/images/condition/moon_castle.jpg | assets/sparse_box_trajectory/moon_castle.mp4 | samples/demo/stage3/moon_castle.mp4 | 1 | 42 |
| A man slowly sinks his head into the water | assets/images/condition/man_head.jpg | assets/sparse_box_trajectory/man_head.mp4 | samples/demo/stage3/man_head.mp4 | 1 | 42 |
# MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance

Quanhao Li\*, Zhen Xing\*, Rui Wang, Hui Zhang, Qi Dai, and Zuxuan Wu

\* Equal contribution

## 💡 Abstract
Recent advances in video generation have led to remarkable improvements in visual quality and temporal coherence. Building on these advances, trajectory-controllable video generation has emerged to enable precise object motion control through explicitly defined spatial paths. However, existing methods struggle with complex object movements and multi-object motion control, resulting in imprecise trajectory adherence, poor object consistency, and compromised visual quality. Furthermore, these methods support trajectory control in only a single format, limiting their applicability in diverse scenarios. Additionally, there is no publicly available dataset or benchmark specifically tailored for trajectory-controllable video generation, hindering robust training and systematic evaluation. To address these challenges, we introduce MagicMotion, a novel image-to-video generation framework that enables trajectory control through three levels of conditions from dense to sparse: masks, bounding boxes, and sparse boxes. Given an input image and trajectories, MagicMotion seamlessly animates objects along the defined trajectories while maintaining object consistency and visual quality. Furthermore, we present MagicData, a large-scale trajectory-controlled video dataset, along with an automated pipeline for annotation and filtering. We also introduce MagicBench, a comprehensive benchmark that assesses both video quality and trajectory control accuracy across different numbers of objects. Extensive experiments demonstrate that MagicMotion outperforms previous methods across various metrics.
## 📣 Updates

- 2025/07/28 🔥🔥 MagicData has been released here. Welcome to use our dataset!
- 2025/06/26 🔥🔥 MagicMotion has been accepted by ICCV 2025! 🎉🎉🎉
- 2025/03/28 🔥🔥 We released an interactive Gradio demo for MagicMotion.
- 2025/03/27 MagicMotion can now perform inference on a single 4090 GPU (with less than 24GB of GPU memory).
- 2025/03/21 🔥🔥 We released MagicMotion, including inference code and model weights.
## 📋 Table of Contents

- 💡 Abstract
- 📣 Updates
- 📋 Table of Contents
- ✅ TODO List
- 🛠️ Installation
- 📦 Model Weights
- 🚀 Inference
- 🖥️ Gradio Demo
- 🤗 Acknowledgements
- 📞 Contact
## ✅ TODO List

- [x] Release our inference code and model weights
- [x] Release Gradio demo
- [x] Release MagicData
- [ ] Release MagicBench and evaluation code
- [ ] Release our training code
## 🛠️ Installation

```bash
# Clone this repository.
git clone https://github.com/quanhaol/MagicMotion
cd MagicMotion

# Install requirements
conda env create -n magicmotion --file environment.yml
conda activate magicmotion
pip install git+https://github.com/huggingface/diffusers

# Install Grounded_SAM2
cd trajectory_construction/Grounded_SAM2
pip install -e .
pip install --no-build-isolation -e grounding_dino

# Optional: For image editing
pip install git+https://github.com/huggingface/image_gen_aux
```
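After installation, a quick sanity check can confirm that PyTorch sees the GPU and that the diffusers build is importable. This is a minimal sketch, not part of the repository:

```python
# Minimal environment sanity check (not part of the MagicMotion repo).
import torch
import diffusers

print(f"diffusers {diffusers.__version__}, torch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
```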
## 📦 Model Weights

### Folder Structure

```
MagicMotion
└── ckpts
    ├── stage1
    │   └── mask.pt
    ├── stage2
    │   ├── box.pt
    │   └── box_perception_head.pt
    └── stage3
        ├── sparse_box.pt
        └── sparse_box_perception_head.pt
```
### Download Links

```bash
pip install "huggingface_hub[hf_transfer]"
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download quanhaol/MagicMotion --local-dir ckpts
```
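If you prefer to stay in Python, the same download can be done with the standard `snapshot_download` API from `huggingface_hub`; this is an equivalent sketch, not the repo's documented path:

```python
# Equivalent Python download using the standard huggingface_hub API.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="quanhaol/MagicMotion",  # weights repo from the CLI command above
    local_dir="ckpts",               # matches the folder structure expected by inference
)
```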
## 🚀 Inference

Inference requires only 23GB of GPU memory (tested on a single 24GB NVIDIA GeForce RTX 4090 GPU).

If you have sufficient GPU memory, you can modify `magicmotion/inference.py` to improve runtime performance:

```python
# Optimized setting (for GPUs with sufficient memory)
pipe.to("cuda")
# pipe.enable_sequential_cpu_offload()
```

Note: Using the optimized setting can reduce runtime by up to 2x.
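A hedged sketch of how that trade-off could be automated; `pipe` is assumed to be the pipeline object already loaded in `magicmotion/inference.py`, and the 24 GB threshold is an assumption based on the note above:

```python
import torch

# Pick the placement strategy from available VRAM (threshold is an assumption).
# `pipe` is assumed to be the already-loaded MagicMotion diffusion pipeline.
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
if total_gb > 24:
    pipe.to("cuda")                       # fastest: whole pipeline resident on the GPU
else:
    pipe.enable_sequential_cpu_offload()  # fits in <24 GB, at up to ~2x slower runtime
```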
### Scripts

```bash
# Demo inference scripts for each stage (input image & trajectory already provided)
bash magicmotion/scripts/inference/inference_mask.sh
bash magicmotion/scripts/inference/inference_box.sh
bash magicmotion/scripts/inference/inference_sparse_box.sh

# You can also construct trajectories for each stage yourself -- see MagicMotion/trajectory_construction for more details
python trajectory_construction/plan_mask.py
python trajectory_construction/plan_box.py
python trajectory_construction/plan_sparse_box.py

# Optional: use FLUX to generate the input image via text-to-image generation or image editing -- see MagicMotion/first_frame_generation for more details
python first_frame_generation/t2i_flux.py
python first_frame_generation/edit_image_flux.py
```
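For reference, here is a minimal text-to-image sketch in the spirit of `first_frame_generation/t2i_flux.py`, using the public FLUX.1-dev checkpoint via diffusers; the model id, resolution, and sampler settings are illustrative assumptions, not the script's actual values:

```python
import torch
from diffusers import FluxPipeline

# Generate a candidate first frame with FLUX (settings are illustrative assumptions).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable

image = pipe(
    "A royal camel walking inside a palace",  # prompt from the validation table
    height=480,
    width=720,
    num_inference_steps=28,
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
image.save("camel_royal.png")
```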
## 🖥️ Gradio Demo

Usage:

```bash
bash magicmotion/scripts/app/app.sh
```
## 🤗 Acknowledgements

We would like to express our gratitude to the following open-source projects that have been instrumental in the development of our project:

- CogVideo: an open-source video generation framework by THUKEG.
- Open-Sora: an open-source video generation framework by HPC-AI Tech.
- finetrainers: a memory-optimized training library for diffusion models.

Special thanks to the contributors of these libraries for their hard work and dedication!
## 📞 Contact

If you have any suggestions or find our work helpful, feel free to contact us:

Email: liqh24@m.fudan.edu.cn or zhenxingfd@gmail.com

If you find our work useful, please consider giving a star to this GitHub repository and citing it:

```bibtex
@article{li2025magicmotion,
  title={MagicMotion: Controllable Video Generation with Dense-to-Sparse Trajectory Guidance},
  author={Li, Quanhao and Xing, Zhen and Wang, Rui and Zhang, Hui and Dai, Qi and Wu, Zuxuan},
  journal={arXiv preprint arXiv:2503.16421},
  year={2025}
}
```