In recent years, introducing Multi-modal Large Language Models (MLLMs) into video understanding tasks has become increasingly prevalent, yet how to effectively integrate temporal information remains a critical research focus. Traditional approaches treat spatial and temporal information separately. Due to issues such as motion blur, it is challenging to accurately represent the spatial information of rapidly moving objects. As a result, temporally important regions may be underemphasized during spatial feature extraction, which in turn hinders accurate spatio-temporal interaction and video understanding. To address this limitation, we propose an innovative video representation method called Dynamic-Image (DynImg). Specifically, we introduce a set of non-key frames as temporal prompts to highlight the spatial areas containing fast-moving objects. During visual feature extraction, these prompts guide the model to pay additional attention to the fine-grained spatial features corresponding to these regions. Moreover, to maintain the correct sequence for DynImg, we employ a corresponding 4D video Rotary Position Embedding. This preserves both the temporal and spatial adjacency of DynImg, helping the MLLM understand the spatio-temporal order within this combined format. Experimental evaluations show that DynImg surpasses state-of-the-art methods by approximately 2% across multiple video understanding benchmarks, demonstrating the effectiveness of our temporal prompts in enhancing video comprehension.
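To make the idea concrete, here is a minimal sketch of how keyframes and non-keyframe temporal prompts might be composed into a single image. The keyframe stride, number of prompts, thumbnail layout, and the `build_dynimg` helper are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def build_dynimg(frames, key_stride=8, prompts_per_key=3, prompt_scale=4):
    """Assemble DynImg-style composites (layout is an assumption for illustration).

    frames: list of HxWx3 uint8 arrays. Each keyframe keeps full resolution;
    a few non-keyframes from the same interval are downsampled and attached
    beneath it as temporal prompts. Assumes prompts_per_key <= prompt_scale
    so the thumbnail strip fits within the keyframe width.
    """
    H, W, _ = frames[0].shape
    ph, pw = H // prompt_scale, W // prompt_scale
    dynimgs = []
    for k in range(0, len(frames), key_stride):
        key = frames[k]
        # non-keyframes between this keyframe and the next serve as prompts
        candidates = frames[k + 1 : k + key_stride]
        step = max(1, len(candidates) // prompts_per_key) if candidates else 1
        prompts = candidates[::step][:prompts_per_key]
        # naive nearest-neighbour downsampling, cropped to a fixed thumb size
        thumbs = [p[::prompt_scale, ::prompt_scale][:ph, :pw] for p in prompts]
        strip = np.zeros((ph, W, 3), dtype=key.dtype)
        for i, t in enumerate(thumbs):
            strip[:, i * pw : i * pw + t.shape[1]] = t
        dynimgs.append(np.concatenate([key, strip], axis=0))
    return dynimgs
```

Each composite thus pairs a full-resolution keyframe with a strip of low-resolution motion thumbnails, so the image encoder can attend to regions that change between frames without processing every frame at full cost.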
Overall architecture of DynImg. Videos are decomposed into keyframes and non-keyframes. Several non-keyframes serve as temporal prompts and are combined with the keyframes to form the DynImg representation, along with its corresponding 4D positional embeddings. Within the image encoder, these temporal prompts adjust the spatial attention. The attention map on the left corresponds to the left-hand patch in the keyframe. Red arrows indicate the usual influence of local visual features, while yellow arrows show the emphasis placed on dynamic regions by the non-keyframes. The output features of the image encoder, after passing through a projection layer, are fed into the LLM along with the positional embeddings to produce the final output.
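Below is a minimal sketch of one plausible multi-axis rotary scheme for the 4D positional embeddings. The choice of axes (frame, row, column, prompt slot), the even split of the head dimension, and the `rope_4d` helper are assumptions for illustration, not the paper's implementation:

```python
import torch

def rope_4d(x, coords, base=10000.0):
    """Multi-axis rotary embedding sketch (one guess at a '4D video RoPE').

    x:      (num_tokens, head_dim) query or key vectors, head_dim % 8 == 0.
    coords: (num_tokens, 4) integer positions, e.g. (frame, row, col, prompt_slot),
            so tokens adjacent in time or space receive adjacent phases.
    """
    n, d = x.shape
    d_axis = d // 4                      # dims devoted to each of the 4 axes
    half = d_axis // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)  # (half,)
    out = []
    for axis in range(4):
        chunk = x[:, axis * d_axis : (axis + 1) * d_axis]
        ang = coords[:, axis : axis + 1].float() * freqs               # (n, half)
        cos, sin = ang.cos(), ang.sin()
        # standard rotate-half RoPE applied within this axis's sub-block
        x1, x2 = chunk[:, :half], chunk[:, half:]
        out.append(torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1))
    return torch.cat(out, dim=-1)
```

In this sketch, keyframe patches would carry coordinates like (t, r, c, 0) while prompt thumbnails carry their own frame index and slot, which is one way the temporal and spatial adjacency of the composite could both be retained.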
If you use our work in your research, please cite:
@article{bao2025DynImg,
  title={DynImg: Key Frames with Visual Prompts are Good Representation for Multi-Modal Video Understanding},
  author={Bao, Xiaoyi and Xie, Chenwei and Tang, Hao and Weng, Tingyu and Wang, Xiaofeng and Zheng, Yun and Wang, Xingang},
  journal={arXiv preprint arXiv:2507.15569},
  year={2025}
}