DynImg: Key Frames with Visual Prompts are Good Representation for Multi-Modal Video Understanding

CASIA¹,  Alibaba Group²,  Peking University³

News: Accepted by ICCV 2025!


Structure comparison between previous video understanding methods and DynImg. Previous models feed processed visual features into subsequent spatial and temporal merging modules, either in parallel (a) or in sequence (b). However, in rapidly moving scenarios like the example on the right, where the little girl quickly turns around and moves in the last frame, these models fail to capture the crucial details of her motion. Factors such as motion blur cause these important temporal details to be overlooked during visual feature extraction, so these regions never receive the attention they need. Spatio-temporal interaction built on such inaccurate features or tokens is ineffective. In contrast, our proposed DynImg (c) moves the spatio-temporal interaction forward into the feature extraction stage via temporal prompts. This enables the model to focus on those rapidly moving regions that are otherwise difficult to capture during feature extraction.

Abstract

In recent years, the introduction of Multi-modal Large Language Models (MLLMs) into video understanding tasks has become increasingly prevalent. However, how to effectively integrate temporal information remains a critical research focus. Traditional approaches treat spatial and temporal information separately. Due to issues like motion blur, it is challenging to accurately represent the spatial information of rapidly moving objects. This can lead to temporally important regions being underemphasized during spatial feature extraction, which in turn hinders accurate spatio-temporal interaction and video understanding. To address this limitation, we propose an innovative video representation method called Dynamic-Image (DynImg). Specifically, we introduce a set of non-key frames as temporal prompts to highlight the spatial areas containing fast-moving objects. During visual feature extraction, these prompts guide the model to pay additional attention to the fine-grained spatial features of these regions. Moreover, to maintain the correct order within DynImg, we employ a corresponding 4D video Rotary Position Embedding. This preserves both the temporal and spatial adjacency of DynImg, helping the MLLM understand the spatio-temporal order within this combined format. Experimental evaluations show that DynImg surpasses state-of-the-art methods by approximately 2% across multiple video understanding benchmarks, demonstrating the effectiveness of our temporal prompts in enhancing video comprehension.
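
To make the idea concrete, here is a minimal sketch (in PyTorch) of how key frames and downsampled non-key frames could be packed into a single DynImg-style composite. The layout, thumbnail size, and the helper name build_dynimg are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn.functional as F

def build_dynimg(keyframe, nonkey_frames, prompt_size=112):
    """keyframe: (3, H, W); nonkey_frames: (N, 3, H, W).
    Returns a single composite image with the non-key frames packed as a
    thumbnail strip beside the key frame (illustrative layout only)."""
    _, H, W = keyframe.shape
    n = nonkey_frames.shape[0]
    # Downsample the non-key frames into small thumbnails (temporal prompts).
    thumbs = F.interpolate(nonkey_frames, size=(H // n, prompt_size),
                           mode="bilinear", align_corners=False)
    # Stack the thumbnails vertically, then resize the strip to height H.
    strip = torch.cat(list(thumbs), dim=1)
    strip = F.interpolate(strip.unsqueeze(0), size=(H, prompt_size),
                          mode="bilinear", align_corners=False).squeeze(0)
    # Concatenate along the width so the prompts sit next to the key frame.
    return torch.cat([keyframe, strip], dim=2)

# e.g. one 336x336 key frame plus four non-key frames from the same clip
dynimg = build_dynimg(torch.rand(3, 336, 336), torch.rand(4, 3, 336, 336))

In this sketch the non-key frames act purely as small visual prompts appended along the width, so the image encoder attends to them alongside the key-frame patches.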

Method

Overall architecture of DynImg. Videos are decomposed into keyframes and non-keyframes. Several non-keyframes serve as temporal prompts and are combined with the keyframes to form the DynImg representation, along with its corresponding 4D positional embeddings. Within the image encoder, these temporal prompts adjust the spatial attention. The attention map on the left corresponds to the left-hand patch in the keyframe: red arrows indicate the usual influence of local visual features, while yellow arrows show the additional emphasis placed on dynamic regions by the non-keyframes. The output features of the image encoder pass through a projection layer and are fed into the LLM together with the positional embeddings to produce the final output.
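
One plausible reading of the 4D video Rotary Position Embedding is to assign every visual token a (sequence, time, height, width) coordinate and let each coordinate rotate its own slice of channels, in the spirit of multi-axis rotary embeddings. The sketch below follows that assumption; the axis split, channel grouping, and function names are illustrative, not the paper's exact implementation.

import torch

def rope_angles(pos, dim, base=10000.0):
    """pos: (num_tokens,) integer positions -> (num_tokens, dim // 2) angles."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return pos.float()[:, None] * inv_freq[None, :]

def apply_4d_rope(x, coords):
    """x: (num_tokens, d) query or key features; coords: (num_tokens, 4)
    integer (sequence, time, height, width) positions. Each of the four
    coordinate axes rotates its own quarter of the channels."""
    num_tokens, d = x.shape
    d_axis = d // 4                                   # assumes d divisible by 8
    rotated = []
    for axis in range(4):
        angles = rope_angles(coords[:, axis], d_axis)  # (N, d_axis // 2)
        cos, sin = angles.cos(), angles.sin()
        xa = x[:, axis * d_axis:(axis + 1) * d_axis]
        x1, x2 = xa[:, 0::2], xa[:, 1::2]              # channel pairs to rotate
        rotated.append(torch.stack([x1 * cos - x2 * sin,
                                    x1 * sin + x2 * cos], dim=-1).flatten(1))
    return torch.cat(rotated, dim=-1)

Grouping channels per axis keeps tokens that are adjacent in time or in space close in embedding space, which is the adjacency property the 4D positional embedding is meant to preserve.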

Results

Performance comparison between DynImg and other methods on five open-ended video understanding benchmarks. "Encoder" refers to the type of visual encoder used during training.


Effectiveness and efficiency comparison. "Accuracy" is the average accuracy on MSVD, MSRVTT, ActivityNet, and TGIF. "Token Efficiency" is negatively correlated with the number of visual tokens used to represent the video.


BibTeX

If you use our work in your research, please cite:

@article{bao2025DynImg,
  title={DynImg: Key Frames with Visual Prompts are Good Representation for Multi-Modal Video Understanding},
  author={Bao, Xiaoyi and Xie, Chenwei and Tang, Hao and Weng, Tingyu and Wang, Xiaofeng and Zheng, Yun and Wang, Xingang},
  journal={arXiv preprint arXiv:2507.15569},
  year={2025}
}