Visible and Infrared Object Tracking Based on Multimodal Hierarchical Relationship Modeling


  • Rui Yao China University of Mining and Technology
  • Jiazhu Qiu
  • Yong Zhou
  • Zhiwen Shao
  • Bing Liu
  • Jiaqi Zhao
  • Hancheng Zhu



Keywords: RGBT tracking, Transformer, Multimodal, Feature fusion


Visible (RGB) and thermal infrared (RGBT) object tracking has become a prominent research topic in computer vision. However, most existing Transformer-based RGBT tracking methods use Transformers mainly to enhance features extracted by convolutional neural networks, leaving the potential of Transformers for representation learning underexplored. Moreover, most studies overlook the need to distinguish the importance of each modality in multimodal tasks. In this paper, we address these two issues by introducing a novel RGBT tracking framework centered on multimodal hierarchical relationship modeling. By incorporating multiple Transformer encoders with self-attention mechanisms, we progressively aggregate and fuse multimodal image features at different stages of feature learning. During multimodal interaction within the network, a patch-level dynamic component feature fusion module dynamically assesses the relevance of visible information in each region of the tracking scene. Extensive experiments on the RGBT234, GTOT, and LasHeR benchmark datasets demonstrate that the proposed approach performs well in terms of accuracy, success rate, and tracking speed.




How to Cite

Yao, R., Qiu, J., Zhou, Y., Shao, Z., Liu, B., Zhao, J., & Zhu, H. (2024). Visible and Infrared Object Tracking Based on Multimodal Hierarchical Relationship Modeling. Image Analysis and Stereology, 43(1), 41–51.



Original Research Paper