Mask2IV: Interaction-Centric Video Generation via Mask Trajectories

Nanyang Technological University · Shanghai Jiao Tong University · University of Edinburgh

Mask2IV synthesizes videos of human hands or robot arms interacting with a specified object, indicated by an input mask. It first predicts a
mask-based interaction trajectory (visualized in the top-left corner of each frame), and then generates the video guided by this trajectory.
The generation process is conditioned on either a text prompt or a target position mask.


We focus on the task of interaction-centric video generation, targeting realistic
and controllable synthesis of human-object and robot-object interactions.

Abstract

Generating interaction-centric videos, such as those depicting humans or robots interacting with objects, is crucial for embodied intelligence, as such videos provide rich and diverse visual priors for robot learning, manipulation policy training, and affordance reasoning. However, existing methods often struggle to model such complex and dynamic interactions. While recent studies show that masks can serve as effective control signals and enhance generation quality, obtaining dense and precise mask annotations remains a major challenge for real-world use.

To overcome this limitation, we introduce Mask2IV, a novel framework specifically designed for interaction-centric video generation. It adopts a decoupled two-stage pipeline that first predicts plausible motion trajectories for both actor and object, then generates a video conditioned on these trajectories. This design eliminates the need for dense mask inputs from users while preserving the flexibility to manipulate the interaction process. Furthermore, Mask2IV supports versatile and intuitive control, allowing users to specify the target object of interaction and guide the motion trajectory through action descriptions or spatial position cues.

To support systematic training and evaluation, we curate two benchmarks covering diverse action and object categories across both human-object interaction and robotic manipulation scenarios. Extensive experiments demonstrate that our method achieves superior visual realism and controllability compared to existing baselines.

Mask2IV Framework



Mask2IV consists of two stages: Interaction Trajectory Generation and Trajectory-conditioned Video Generation. The first stage produces a mask-based interaction trajectory, while the second stage synthesizes a video conditioned on the predicted trajectory.
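
To make the two-stage design concrete, the sketch below outlines the inference flow. The class names, function signatures, tensor shapes, and placeholder predictions are illustrative assumptions for exposition only, not the released Mask2IV implementation.

import numpy as np

class TrajectoryGenerator:
    # Stage 1: predicts a mask-based interaction trajectory for actor and object.
    def predict(self, image, object_mask, prompt=None, position_mask=None, num_frames=16):
        h, w = object_mask.shape
        # Placeholder prediction: the real model would output per-frame binary
        # masks describing how the hand or robot arm approaches and moves the object.
        actor_traj = np.zeros((num_frames, h, w), dtype=bool)
        object_traj = np.repeat(object_mask[None], num_frames, axis=0)
        return actor_traj, object_traj

class VideoGenerator:
    # Stage 2: synthesizes RGB frames conditioned on the predicted trajectory.
    def generate(self, image, actor_traj, object_traj):
        num_frames = actor_traj.shape[0]
        # Placeholder synthesis: the real model would render frames that follow
        # the actor and object trajectories.
        return np.repeat(image[None], num_frames, axis=0)

def mask2iv_inference(image, object_mask, prompt=None, position_mask=None):
    stage1 = TrajectoryGenerator()
    stage2 = VideoGenerator()
    # Stage 1: trajectory generation; no dense mask sequence is required from the user.
    actor_traj, object_traj = stage1.predict(image, object_mask, prompt, position_mask)
    # Stage 2: trajectory-conditioned video generation.
    return stage2.generate(image, actor_traj, object_traj)

image = np.zeros((256, 256, 3), dtype=np.uint8)    # input frame
object_mask = np.zeros((256, 256), dtype=bool)     # mask of the target object
object_mask[100:150, 100:150] = True
video = mask2iv_inference(image, object_mask, prompt="pick up the cup")
print(video.shape)  # (16, 256, 256, 3)

In this sketch, the optional prompt and position_mask arguments mirror the text- and position-conditioned control described above; either can be supplied to steer the predicted trajectory.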



Control Signal Acquisition



Hand-mask-controlled video generation methods, such as CosHand and InterDyn, require users to provide dense hand mask sequences as input. In contrast, Mask2IV autonomously generates trajectories for both hands and objects without manual annotation, and can adaptively produce different trajectories based on the specified object.
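
For concreteness, the snippet below contrasts the control signals each setup expects from the user. The field names and shapes are illustrative assumptions rather than the actual interfaces of CosHand, InterDyn, or Mask2IV.

import numpy as np

NUM_FRAMES, H, W = 16, 256, 256

# Dense-mask baselines (e.g., CosHand, InterDyn): the user must supply a hand
# mask for every output frame.
dense_mask_input = {
    "image": np.zeros((H, W, 3), dtype=np.uint8),
    "hand_masks": np.zeros((NUM_FRAMES, H, W), dtype=bool),  # one mask per frame
}

# Mask2IV: the user specifies only the target object with a single mask, plus
# an optional text prompt or target position mask; the full trajectory is
# produced automatically in the first stage.
mask2iv_input = {
    "image": np.zeros((H, W, 3), dtype=np.uint8),
    "object_mask": np.zeros((H, W), dtype=bool),  # single mask of the target object
    "prompt": "pick up the cup",                  # optional action description
    "target_position_mask": None,                 # optional spatial cue
}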

Results



Text-Conditioned Generation

Same input image with different text prompts



Position-Conditioned Generation

Same input image with different position masks



Target-Specific Generation

Same input image with different target objects



Comparison with Baselines

BibTeX

@article{li2025mask2iv,
  title   = {Mask2IV: Interaction-Centric Video Generation via Mask Trajectories},
  author  = {Li, Gen and Zhao, Bo and Yang, Jianfei and Sevilla-Lara, Laura},
  journal = {arXiv preprint arXiv:2510.03135},
  year    = {2025},
}