Personal Profile
Received his Ph.D. from the University of Technology Sydney in 2016 and joined South China University of Technology the same year as an Associate Research Fellow; exceptionally promoted to Research Fellow (full professorial rank) in 2021. His research interests lie in computer vision and embodied intelligence, focusing on understanding and generating human interaction motions for applications in intelligent robotics and virtual digital humans. He has published over 60 papers in top-tier journals and conferences such as TPAMI, IJCV, and CVPR, including 5 ESI Highly Cited Papers as first author. In the past five years, he has supervised graduate students who published over 30 first-author papers in the top journals IEEE T-PAMI, T-IP, and IJCV and the top conferences CVPR, ICCV, ECCV, and ACM MM.
Selected for national-level youth talent programs and the Guangdong Provincial Outstanding Young Scientists program. His research has received the First Prize of the Guangdong Provincial Natural Science Award (2023), the IEEE Signal Processing Society (SPS) Best Young Author Paper Award (2023; only 3 papers selected from the thousands published in flagship journals such as TIP, TSP, TIFS, and TASLP), inclusion in Stanford University's World's Top 2% Scientists list, and the 2021 ACM Guangzhou Chapter Rising Star Award. He won the Video-based Face Recognition Challenge hosted by the U.S. National Institute of Standards and Technology (NIST), the Brain Tumor Segmentation (BraTS) Challenge at MICCAI, and both the Action Recognition and Action Anticipation Challenges at CVPR.
Contact: chxding@scut.edu.cn
Education and Work Experience
Ph.D., University of Technology Sydney, 2016
2016.11-2021.08 South China University of Technology, Associate Research Fellow
2021.09-present South China University of Technology, Research Fellow (exceptionally promoted)
Awards and Honors
[1] National-level Youth Talent
[2] TCL Young Scholar, 2024
[3] First Prize of Guangdong Provincial Natural Science Award (the only one in the information field; ranked #3), 2023
[4] IEEE Signal Processing Society (SPS) Best Young Author Paper Award (sole corresponding author), 2023
[5] Stanford University's World's Top 2% Scientists, 2022-present
[6] Champion, CVPR EPIC-Kitchens Action Anticipation Challenge, 2022
[7] ACM China Rising Star Award (Guangzhou Chapter), 2021
[8] Champion, CVPR EPIC-Kitchens Action Recognition Challenge, 2021
[9] Champion, MICCAI BraTS Brain Tumor Segmentation Challenge, 2019
[10] Champion, NIST PaSC Video-based Face Recognition Challenge, 2016
Academic Service
[1] Editorial Board Member, Neural Networks (CAS Zone 1 journal)
[2] Deputy Chair, Youth Committee, Guangdong Provincial Society of Image and Graphics
[3] Peer reviewer for top-tier journals and conferences including IEEE TPAMI, IJCV, TIP, TVCG, TNNLS, TIFS, TMM, TCSVT, TITS, TII, TAI, TMECH, ACM Computing Surveys, CVPR, ICCV, ECCV, NeurIPS, AAAI, and ACM MM
Research Area
Computer Vision, Embodied Intelligence, Multimodal Learning
Research Projects
[1] National Natural Science Foundation of China (NSFC) General Program, 510,000 RMB, 2025-2028, Principal Investigator
[2] National Natural Science Foundation of China (NSFC) General Program, 590,000 RMB, 2021-2024, Principal Investigator
[3] National Natural Science Foundation of China (NSFC) Young Scientists Fund, 240,000 RMB, 2018-2020, Principal Investigator
[4] Guangdong Provincial "Pearl River Talent Team" Project, 20,000,000 RMB, 2018-2023, Core Member (ranked second) and Project Task Leader
[5] Guangdong Provincial "Outstanding Youth Fund", 1,000,000 RMB, 2024-2027, Principal Investigator
[6] Guangdong Provincial "Youth Top-Notch Talent" Program, 1,500,000 RMB, 2018-2023, Principal Investigator
[7] Guangzhou Science and Technology Program, 200,000 RMB, 2018-2021, Principal Investigator
[8] South China University of Technology "Outstanding Youth Fund", 500,000 RMB, 2019-2021, Principal Investigator (the only award in the AI field that year)
[9] Pazhou Laboratory Young Scholar Project, 800,000 RMB, 2021-2022, Principal Investigator
[10] CCF-Baidu "Songguo" Research Fund, 2021-2022, Principal Investigator
[11] CAAI-Huawei MindSpore Academic Incentive Fund, 2022-2023, Principal Investigator
[12] Tencent Rhino-Bird Project, 2024-2025, Principal Investigator
Selected Recent Publications
Representative papers on which Prof. Ding is the first author or supervised students in his research group are first authors. * denotes corresponding author. CCF A refers to the most prestigious journals and conferences in computer science. PDFs of the following papers (including contact emails of all authors) are mostly available on arXiv (https://arxiv.org/) or can be found via search engines (e.g., Google or Baidu) using the paper titles.
[1] Jiayu Jiang, Changxing Ding*, Wentao Tan, Junhong Wang, Jin Tao, Xiangmin Xu, “Modeling Thousands of Human Annotators for Generalizable Text-to-Image Person Re-identification”, CVPR, 2025 (CCF A).
[2] Guowei Wang, Changxing Ding*, “Effortless Active Labeling for Long-Term Test-Time Adaptation”, CVPR, 2025 (CCF A).
[3] Mengqing Xue, Yifei Liu, Ling Guo, Shaoli Huang, Changxing Ding*, “Guiding Human-Object Interactions with Rich Geometry and Relations”, CVPR, 2025 (CCF A).
[4] Yupeng Hu, Changxing Ding*, Chang Sun, et al., Bilateral Collaboration with Large Vision-Language Models for Open Vocabulary Human-Object Interaction Detection, ICCV, 2025 (CCF A).
[5] Xu Yang, Shaoli Huang, Shenbo Xie, Xuelin Chen, Yifei Liu, Changxing Ding*, Democratizing High-Fidelity Co-Speech Gesture Video Generation, ICCV, 2025 (CCF A).
[6] Yiyang Chen, Shanshan Zhao, Lunhao Duan, Changxing Ding*, Dacheng Tao, Harnessing Text-to-Image Diffusion Models for Point Cloud Self-Supervised Learning, ICCV, 2025 (CCF A).
[7] Guowei Wang, Changxing Ding*, Wentao Tan, Mingkui Tan, “Decoupled Prototype Learning for Reliable Test-Time Adaptation”, IEEE Trans. Multimedia (T-MM), 2025.
[8] Xu Yang, Changxing Ding*, Zhibin Hong, Junhao Huang, Jin Tao, Xiangmin Xu, Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On, CVPR, 2024 (CCF A).
[9] Wentao Tan, Changxing Ding*, Jiayu Jiang, Fei Wang, Yibing Zhan, Dapeng Tao, Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID, CVPR, 2024 (CCF A).
[10] Zhuolong Li, Xingao Li, Changxing Ding*, Xiangmin Xu, Disentangled Pre-training for Human-Object Interaction Detection, CVPR, 2024 (CCF A).
[11] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, Changxing Ding*, Towards Variable and Coordinated Holistic Co-Speech Motion Generation, CVPR, 2024 (CCF A).
[12] Yiyang Chen, Lunhao Duan, Shanshan Zhao, Changxing Ding*, Dacheng Tao, Local-Consistent Transformation Learning for Rotation-invariant Point Cloud Analysis, CVPR, 2024 (CCF A).
[13] Zengsheng Kuang, Changxing Ding*, Huanyao Yao, “Learning Context with Priors for 3D Interacting Hand-Object Pose Estimation”, ACM MM, 2024 (CCF A).
[14] Huan Yao, Changxing Ding*, Xuanda Xu, Zhifeng Lin, “Decoupling Heterogeneous Features for Robust 3D Interacting Hand Poses Estimation”, ACM MM, 2024 (CCF A).
[15] Yiyang Chen, Shanshan Zhao, Changxing Ding*, et al., Cross-Modal & Cross-Domain Learning for Unsupervised LiDAR Semantic Segmentation, ACM MM, 2023 (CCF A).
[16] Zhiyin Shao, Xinyu Zhang, Changxing Ding*, Jian Wang, Jingdong Wang, Unified Pre-training with Pseudo Texts for Text-To-Image Person Re-identification, ICCV, 2023 (CCF A).
[17] Wentao Tan, Changxing Ding*, Pengfei Wang, Mingming Gong, Kui Jia, Style Interleaved Learning for Generalizable Person Re-identification, IEEE Trans. Multimedia (T-MM), 2023.
[18] Zhifeng Lin, Changxing Ding*, Huan Yao, Zengsheng Kuang, Shaoli Huang, “Harmonious Feature Learning for Interactive Hand-Object Pose Estimation”, CVPR, 2023 (CCF A).
[19] Shengcong Chen, Changxing Ding*, Dacheng Tao, Hao Chen, DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation, MICCAI, 2023 (top conference in medical image processing).
[20] Kan Wang, Changxing Ding*, Xinjian Pang, Xiangmin Xu, “Context Sensing Attention Network for Video-based Person Re-identification”, ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2023.
[21] Shengcong Chen, Changxing Ding*, Minfeng Liu, Jun Cheng, Dacheng Tao, “CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation”, IEEE Trans. Image Processing (T-IP), 2023 (CCF A).
[22] Xubin Zhong, Changxing Ding*, Zijian Li, Shaoli Huang, Towards Hard-Positive Query Mining for DETR-based Human-Object Interaction Detection, ECCV, 2022 (one of the three top conferences in computer vision).
[23] Zhiyin Shao, Xinyu Zhang, Meng Fang, Zhifeng Lin, Jian Wang, Changxing Ding*, Learning Granularity-Unified Representations for Text-to-Image Person Re-identification, ACM MM, 2022 (CCF A).
[24] Xian Qu, Changxing Ding*, Xingao Li, et al., “Distillation Using Oracle Queries for Transformer-based Human-Object Interaction Detection”, CVPR, 2022 (CCF A).
[25] Xin Lin, Changxing Ding*, Jing Zhang, et al., “RU-Net: Regularized Unrolling Network for Scene Graph Generation”, CVPR, 2022 (CCF A).
[26] Xin Lin, Changxing Ding*, Yibing Zhan, et al., “HL-Net: Heterophily Learning Network for Scene Graph Generation”, CVPR, 2022 (CCF A).
[27] Pengfei Wang, Changxing Ding*, Wentao Tan, et al., “Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification”, IEEE Trans. Multimedia (T-MM), 2022.
[28] Pengfei Wang, Changxing Ding*, Zhiyin Shao, et al., “Quality-aware Part Models for Occluded Person Re-identification”, IEEE Trans. Multimedia (T-MM), 2022.
[29] Xubin Zhong, Changxing Ding*, et al., “Polysemy Deciphering Network for Robust Human-Object Interaction Detection”, International Journal of Computer Vision (IJCV), 2021 (CCF A).
[30] Kan Wang, Pengfei Wang, Changxing Ding*, et al., “Batch Coherence-Driven Network for Part-Aware Person Re-Identification”, IEEE Trans. Image Processing (T-IP), 2021 (CCF A).
[31] Xubin Zhong, Xian Qu, Changxing Ding*, et al., “Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection”, CVPR, 2021 (CCF A).
[32] Tong Zhou, Changxing Ding*, et al., “Learning Oracle Attention for High-fidelity Face Completion”, CVPR, 2020 (CCF A).
[33] Xubin Zhong, Changxing Ding*, et al., “Polysemy Deciphering Network for Human-Object Interaction Detection”, ECCV, 2020 (one of the three top conferences in computer vision).
[34] Xin Lin, Changxing Ding*, et al., “GPS-Net: Graph Property Sensing Network for Scene Graph Generation”, CVPR Oral, 2020 (CCF A).
[35] Shengcong Chen, Changxing Ding*, et al., Boundary-assisted Region Proposal Networks for Nucleus Segmentation, MICCAI, 2020 (top conference in medical image processing).
[36] Chenhong Zhou, Changxing Ding*, et al., “One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation”, IEEE Trans. Image Processing (T-IP), 2020 (CCF A).
[37] Kan Wang, Changxing Ding*, et al., CDPM: Convolutional Deformable Part Models for Semantically Aligned Person Re-identification, IEEE Trans. Image Processing (T-IP), 2020 (CCF A).
[38] Changxing Ding*, Kan Wang, et al., “Multi-task Learning with Coarse Priors for Robust Part-aware Person Re-identification”, IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI), 2022 (CCF A).
[39] Changxing Ding*, Dacheng Tao, Trunk-Branch Ensemble Convolutional Neural Networks for Video-based Face Recognition, IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI), 2018 (CCF A).
[40] Changxing Ding, Jonghyun Choi, et al., Multi-Directional Multi-Level Dual-Cross Patterns for Robust Face Recognition, IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI), 2016 (CCF A).
Courses Taught
"Machine Learning", Undergraduate Course
"Deep Learning and Computer Vision", Undergraduate Course
"Machine Learning", Graduate Course (taught entirely in English)
Admissions Information
I lead the Multimodal Perception and Interaction Laboratory at South China University of Technology. The laboratory's research focuses on computer vision and embodied intelligence, with an emphasis on understanding and generating human interaction motions—including detection, reconstruction, and generation of interactive movements—for applications in intelligent robotics and virtual digital humans.
I provide intensive one-on-one academic supervision for my master's and Ph.D. students, actively encourage undergraduate, master's, and Ph.D. students to attend domestic and international academic conferences, and support them in securing research internships at leading technology companies. The lab has ample funding and GPU computing resources to support students' research and living expenses, and provides additional bonuses for outstanding performance. Research assistants are available in the lab, ensuring that graduate students are not required to perform any non-research-related tasks.
The lab welcomes highly motivated students who are genuinely interested in computer vision and artificial intelligence, possess a strong work ethic, and demonstrate self-driven initiative. Prospective students interested in our research directions are encouraged to contact me by email, attaching their CV, academic transcript, and key achievements. Additionally, the lab is continuously seeking postdoctoral researchers. Interested candidates (students or faculty) may refer to the following recruitment announcement: https://mp.weixin.qq.com/s/i7W47DgBBZ8qFVvcamadYA

