"A robot arm folding a cloth"
"Two hands grapsing a chair"
"A robot arm opening a drawer"
"A hand opening a trashcan"
"A robot arm moving the corn to the right side of table"
"A hand picking up a kettle"
"A robot arm unfolding a cloth"
"A hand putting down a laptop"
Generating interaction-centric videos, such as those depicting humans or robots interacting with objects, is crucial for embodied intelligence, as they provide rich and diverse visual priors for robot learning, manipulation policy training, and affordance reasoning. However, existing methods often struggle to model such complex and dynamic interactions. While recent studies show that masks can serve as effective control signals and enhance generation quality, obtaining dense and precise mask annotations remains a major challenge for real-world use.
To overcome this limitation, we introduce Mask2IV, a novel framework specifically designed for interaction-centric video generation. It adopts a decoupled two-stage pipeline that first predicts plausible motion trajectories for both actor and object, then generates a video conditioned on these trajectories. This design eliminates the need for dense mask inputs from users while preserving the flexibility to manipulate the interaction process. Furthermore, Mask2IV supports versatile and intuitive control, allowing users to specify the target object of interaction and guide the motion trajectory through action descriptions or spatial position cues.
To support systematic training and evaluation, we curate two benchmarks covering diverse action and object categories across both human-object interaction and robotic manipulation scenarios. Extensive experiments demonstrate that our method achieves superior visual realism and controllability compared to existing baselines.
Mask2IV consists of two stages: Interaction Trajectory Generation and Trajectory-conditioned Video Generation. The first stage produces a mask-based interaction trajectory, while the second stage synthesizes a video conditioned on the predicted trajectory.
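To make the two-stage structure concrete, below is a minimal sketch of how inference could be wired up. All names here (Mask2IVPipeline, trajectory_generator, video_generator, and their arguments) are illustrative assumptions, not the released API; they simply mirror the stage split described above.

```python
import torch

class Mask2IVPipeline:
    """Hypothetical wrapper around the two Mask2IV stages (names are assumptions)."""

    def __init__(self, trajectory_generator, video_generator):
        self.trajectory_generator = trajectory_generator  # Stage 1: Interaction Trajectory Generation
        self.video_generator = video_generator            # Stage 2: Trajectory-conditioned Video Generation

    @torch.no_grad()
    def __call__(self, image, object_mask, prompt,
                 position_mask=None, num_frames=16):
        # Stage 1: predict per-frame masks for the actor (hand / robot arm)
        # and the object from the input image, the target-object mask, the
        # text prompt, and (optionally) a spatial position cue.
        actor_masks, object_masks = self.trajectory_generator(
            image=image,
            object_mask=object_mask,
            prompt=prompt,
            position_mask=position_mask,
            num_frames=num_frames,
        )

        # Stage 2: synthesize the video conditioned on the predicted mask
        # trajectories, so the user never supplies dense mask sequences.
        video = self.video_generator(
            image=image,
            actor_masks=actor_masks,
            object_masks=object_masks,
            prompt=prompt,
        )
        return video  # (num_frames, C, H, W)
```

The key design point is the decoupling: only the first stage reasons about where the actor and object should move, while the second stage is a standard mask-conditioned video generator that consumes the predicted trajectories.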
Hand-mask-controlled video generation methods, such as CosHand and InterDyn, require users to provide dense hand mask sequences as input. In contrast, Mask2IV autonomously generates trajectories for both hands and objects without manual annotation, and can adaptively produce different trajectories based on the specified object.
Same input image with different text prompts
Same input image with different position masks
Same input image with different target objects
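The examples above vary one control signal at a time. The snippet below sketches how those three control modes (text prompt, position mask, target object) could map onto the hypothetical Mask2IVPipeline interface from the earlier sketch; the placeholder stage models are dummies included only so the example runs on its own, and all tensor shapes and names are assumptions.

```python
import torch

# Placeholder stage models so the example is self-contained; in practice these
# would be the trained stage-1 trajectory and stage-2 video networks.
def trajectory_generator(image, object_mask, prompt, position_mask=None, num_frames=16):
    masks = torch.zeros(num_frames, 1, *image.shape[-2:])
    return masks, masks.clone()

def video_generator(image, actor_masks, object_masks, prompt):
    return image.unsqueeze(0).repeat(len(actor_masks), 1, 1, 1)

pipeline = Mask2IVPipeline(trajectory_generator, video_generator)

image = torch.rand(3, 256, 256)           # the same input image for every example
cup_mask = torch.zeros(1, 256, 256)       # mask selecting one candidate object
kettle_mask = torch.zeros(1, 256, 256)    # mask selecting a different object
position_mask = torch.zeros(1, 256, 256)  # coarse cue for the desired end position

# Different text prompts -> different interactions with the same object.
video_a = pipeline(image, cup_mask, prompt="A hand picking up a cup")
video_b = pipeline(image, cup_mask, prompt="A hand putting down a cup")

# Different position masks -> the object is guided to different locations.
video_c = pipeline(image, cup_mask, prompt="A robot arm moving the cup",
                   position_mask=position_mask)

# Different target objects -> the trajectory adapts to the specified object.
video_d = pipeline(image, kettle_mask, prompt="A hand picking up a kettle")
```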
@article{li2025mask2iv,
title = {Mask2IV: Interaction-Centric Video Generation via Mask Trajectories},
author = {Li, Gen and Zhao, Bo and Yang, Jianfei and Sevilla-Lara, Laura},
journal = {arXiv preprint arXiv:2510.03135},
year = {2025},
}