My research interests include computer vision and deep learning, with a particular focus on video understanding and video-language multimodal learning. The main objective of my research is to enable machines to understand high-level concepts as humans do, including cognitive emotions and geometric primitives. You can find my CV here: Zhicheng Zhang’s Curriculum Vitae and Webpage.
For our work, we provide interesting demos for fun. Some examples are listed below, and you can also upload your own. Please feel free to make any suggestions. You can contact me in the following ways:
|Three papers, on video generation, video emotion analysis, and camouflaged image generation, are accepted by CVPR 2024
|I am going to Paris for ICCV 2023.
|One paper on plane tracking is accepted by ICCV 2023
|I received the SK AI Innovation Scholarship from SK
|One paper on plane segmentation is accepted by TNNLS 2023
|One paper on video emotion analysis is accepted by CVPR 2023
|One paper on temporal sentiment localization is accepted by ACM MM 2022
|I started my PhD studies at Nankai University (NKU) under the supervision of Prof. Jufeng Yang
- ExtDM: Distribution Extrapolation Diffusion Model for Video Prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
- MART: Masked Affective RepresenTation Learning via Masked Temporal Distribution Distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024
- Multiple Planar Object Tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023
- Temporal Sentiment Localization: Listen and Look in Untrimmed Videos. In Proceedings of the 30th ACM International Conference on Multimedia (ACM MM), 2022