Changxing Ding
Title: Professor

Biography

Changxing Ding received his Ph.D. from the University of Technology Sydney in 2016 and joined South China University of Technology the same year as an Associate Professor; in 2021 he was promoted to full Professor through an exceptional early promotion. His research lies in computer vision and embodied intelligence, focusing on the understanding and generation of human interactive motion for applications such as intelligent robots and virtual digital humans. He has published over 60 papers in top journals and conferences including TPAMI, IJCV, and CVPR, five of which are first-author ESI Highly Cited Papers. Over the past five years, graduate students in his group have published, under his supervision and as first authors, over 30 papers in the top journals IEEE T-PAMI, T-IP, and IJCV and the top conferences CVPR, ICCV, ECCV, and ACM MM.

He has been selected for a national-level young talent program and as a Guangdong Distinguished Young Scholar. His research has received the 2023 Guangdong Natural Science Award (First Prize), the 2023 IEEE Signal Processing Society (SPS) Young Author Best Paper Award (only three papers were selected that year from the thousands published in top journals such as TIP, TSP, TIFS, and TASLP), inclusion in Stanford University's list of the World's Top 2% Scientists, and the 2021 ACM China Rising Star Award (Guangzhou Chapter). His team won first place in the video-based face recognition challenge organized by the U.S. National Institute of Standards and Technology (NIST), first place in the brain tumor segmentation challenge at the top conference MICCAI, and first place in both the action recognition and the action anticipation challenges at the top conference CVPR.

Contact: chxding@scut.edu.cn

Work Experience

2016.11–2021.08   South China University of Technology, Associate Professor
2021.09–present   South China University of Technology, Professor (exceptional promotion)

Awards and Honors

[1] National-level young talent program
[2] TCL Young Scholar, 2024
[3] Guangdong Natural Science Award, First Prize (the only first prize in the information field that year; ranked 3rd), 2023
[4] IEEE Signal Processing Society (SPS) Young Author Best Paper Award (sole corresponding author), 2023
[5] World's Top 2% Scientists (Stanford University), 2022–present
[6] First place, CVPR EPIC-Kitchens Action Anticipation Challenge, 2022
[7] ACM China Rising Star Award (Guangzhou Chapter), 2021
[8] First place, CVPR EPIC-Kitchens Action Recognition Challenge, 2021
[9] First place, MICCAI BraTS Brain Tumor Segmentation Challenge, 2019
[10] First place, NIST PaSC video-based face recognition challenge, 2016

Academic Service

[1] Editorial Board Member, Neural Networks (CAS Tier 1 journal)
[2] Vice Chair, Young Scholars Committee, Guangdong Society of Image and Graphics
[3] Reviewer for top journals and conferences including IEEE TPAMI, IJCV, TIP, TVCG, TNNLS, TIFS, TMM, TCSVT, TITS, TII, TAI, TMECH, ACM Computing Surveys, CVPR, ICCV, ECCV, NeurIPS, AAAI, and ACM MM

Research Areas

Computer vision, embodied intelligence, multimodal learning

Research Grants

[1] National Natural Science Foundation of China (NSFC) General Program, CNY 510K, 2025–2028, PI
[2] NSFC General Program, CNY 590K, 2021–2024, PI
[3] NSFC Young Scientists Fund, CNY 240K, 2018–2020, PI
[4] Guangdong "Pearl River Talent" Team Project, CNY 20M, 2018–2023, second core member and sub-project lead
[5] Guangdong Distinguished Young Scholars Fund, CNY 1M, 2024–2027, PI
[6] Guangdong "Top Young Talents" Program, CNY 1.5M, 2018–2023, PI
[7] Guangzhou Science and Technology Program, CNY 200K, 2018–2021, PI
[8] SCUT Outstanding Young Scholars Fund, CNY 500K, 2019–2021, PI (the only award in the AI field that year)
[9] Pazhou Lab Young Scholar Project, CNY 800K, 2021–2022, PI
[10] CCF-Baidu Songguo Fund, 2021–2022, PI
[11] CAAI-Huawei MindSpore Academic Award Fund, 2022–2023, PI
[12] Tencent Rhino-Bird Research Project, 2024–2025, PI

Recent Representative Publications

The following are representative papers first-authored by Changxing Ding or by students in his group under his supervision; * denotes the corresponding author. "CCF A" refers to journals and conferences ranked Class A by the China Computer Federation, widely regarded as the most authoritative venues in computer science.
PDFs of most of the papers below (including the authors' contact emails) can be found on arXiv or by searching the paper titles on Google/Baidu: https://arxiv.org/

[1] Jiayu Jiang, Changxing Ding*, Wentao Tan, Junhong Wang, Jin Tao, Xiangmin Xu, "Modeling Thousands of Human Annotators for Generalizable Text-to-Image Person Re-identification", CVPR, 2025 (CCF A).

[2] Guowei Wang, Changxing Ding*, "Effortless Active Labeling for Long-Term Test-Time Adaptation", CVPR, 2025 (CCF A).

[3] Mengqing Xue, Yifei Liu, Ling Guo, Shaoli Huang, Changxing Ding*, "Guiding Human-Object Interactions with Rich Geometry and Relations", CVPR, 2025 (CCF A).

[4] Guowei Wang, Changxing Ding*, Wentao Tan, Mingkui Tan, "Decoupled Prototype Learning for Reliable Test-Time Adaptation", IEEE Trans. Multimedia (T-MM), 2025.

[5] Xu Yang, Changxing Ding*, Zhibin Hong, Junhao Huang, Jin Tao, Xiangmin Xu, "Texture-Preserving Diffusion Models for High-Fidelity Virtual Try-On", CVPR, 2024 (CCF A).

[6] Wentao Tan, Changxing Ding*, Jiayu Jiang, Fei Wang, Yibing Zhan, Dapeng Tao, "Harnessing the Power of MLLMs for Transferable Text-to-Image Person ReID", CVPR, 2024 (CCF A).

[7] Zhuolong Li, Xingao Li, Changxing Ding*, Xiangmin Xu, "Disentangled Pre-training for Human-Object Interaction Detection", CVPR, 2024 (CCF A).

[8] Yifei Liu, Qiong Cao, Yandong Wen, Huaiguang Jiang, Changxing Ding*, "Towards Variable and Coordinated Holistic Co-Speech Motion Generation", CVPR, 2024 (CCF A).

[9] Yiyang Chen, Lunhao Duan, Shanshan Zhao, Changxing Ding*, Dacheng Tao, "Local-Consistent Transformation Learning for Rotation-invariant Point Cloud Analysis", CVPR, 2024 (CCF A).

[10] Zengsheng Kuang, Changxing Ding*, Huanyao Yao, "Learning Context with Priors for 3D Interacting Hand-Object Pose Estimation", ACM MM, 2024 (CCF A).

[11] Huan Yao, Changxing Ding*, Xuanda Xu, Zhifeng Lin, "Decoupling Heterogeneous Features for Robust 3D Interacting Hand Poses Estimation", ACM MM, 2024 (CCF A).

[12] Yiyang Chen, Shanshan Zhao, Changxing Ding*, et al., "Cross-Modal & Cross-Domain Learning for Unsupervised LiDAR Semantic Segmentation", ACM MM, 2023 (CCF A).

[13] Zhiyin Shao, Xinyu Zhang, Changxing Ding*, Jian Wang, Jingdong Wang, "Unified Pre-training with Pseudo Texts for Text-To-Image Person Re-identification", ICCV, 2023 (CCF A).

[14] Wentao Tan, Changxing Ding*, Pengfei Wang, Mingming Gong, Kui Jia, "Style Interleaved Learning for Generalizable Person Re-identification", IEEE Trans. Multimedia (T-MM), 2023.

[15] Zhifeng Lin, Changxing Ding*, Huan Yao, Zengsheng Kuang, Shaoli Huang, "Harmonious Feature Learning for Interactive Hand-Object Pose Estimation", CVPR, 2023 (CCF A).

[16] Shengcong Chen, Changxing Ding*, Dacheng Tao, Hao Chen, "DARC: Distribution-Aware Re-Coloring Model for Generalizable Nucleus Segmentation", MICCAI, 2023 (top venue in medical image computing).

[17] Kan Wang, Changxing Ding*, Xinjian Pang, Xiangmin Xu, "Context Sensing Attention Network for Video-based Person Re-identification", ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 2023.

[18] Shengcong Chen, Changxing Ding*, Minfeng Liu, Jun Cheng, Dacheng Tao, "CPP-Net: Context-aware Polygon Proposal Network for Nucleus Segmentation", IEEE Trans. Image Processing (T-IP), 2023 (CCF A).

[19] Xubin Zhong, Changxing Ding*, Zijian Li, Shaoli Huang, "Towards Hard-Positive Query Mining for DETR-based Human-Object Interaction Detection", ECCV, 2022 (one of the top three computer vision conferences).

[20] Zhiyin Shao, Xinyu Zhang, Meng Fang, Zhifeng Lin, Jian Wang, Changxing Ding*, "Learning Granularity-Unified Representations for Text-to-Image Person Re-identification", ACM MM, 2022 (CCF A).

[21] Xian Qu, Changxing Ding*, Xingao Li, et al., "Distillation Using Oracle Queries for Transformer-based Human-Object Interaction Detection", CVPR, 2022 (CCF A).

[22] Xin Lin, Changxing Ding*, Jing Zhang, et al., "RU-Net: Regularized Unrolling Network for Scene Graph Generation", CVPR, 2022 (CCF A).

[23] Xin Lin, Changxing Ding*, Yibing Zhan, et al., "HL-Net: Heterophily Learning Network for Scene Graph Generation", CVPR, 2022 (CCF A).

[24] Pengfei Wang, Changxing Ding*, Wentao Tan, et al., "Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification", IEEE Trans. Multimedia (T-MM), 2022.

[25] Pengfei Wang, Changxing Ding*, Zhiyin Shao, et al., "Quality-aware Part Models for Occluded Person Re-identification", IEEE Trans. Multimedia (T-MM), 2022.

[26] Xubin Zhong, Changxing Ding*, et al., "Polysemy Deciphering Network for Robust Human-Object Interaction Detection", International Journal of Computer Vision (IJCV), 2021 (CCF A).

[27] Kan Wang, Pengfei Wang, Changxing Ding*, et al., "Batch Coherence-Driven Network for Part-Aware Person Re-Identification", IEEE Trans. Image Processing (T-IP), 2021 (CCF A).

[28] Xubin Zhong, Xian Qu, Changxing Ding*, et al., "Glance and Gaze: Inferring Action-aware Points for One-Stage Human-Object Interaction Detection", CVPR, 2021 (CCF A).

[29] Tong Zhou, Changxing Ding*, et al., "Learning Oracle Attention for High-fidelity Face Completion", CVPR, 2020 (CCF A).

[30] Xubin Zhong, Changxing Ding*, et al., "Polysemy Deciphering Network for Human-Object Interaction Detection", ECCV, 2020 (one of the top three computer vision conferences).

[31] Xin Lin, Changxing Ding*, et al., "GPS-Net: Graph Property Sensing Network for Scene Graph Generation", CVPR Oral, 2020 (CCF A).

[32] Shengcong Chen, Changxing Ding*, et al., "Boundary-assisted Region Proposal Networks for Nucleus Segmentation", MICCAI, 2020 (top venue in medical image computing).

[33] Chenhong Zhou, Changxing Ding*, et al., "One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation", IEEE Trans. Image Processing (T-IP), 2020 (CCF A).

[34] Kan Wang, Changxing Ding*, et al., "CDPM: Convolutional Deformable Part Models for Semantically Aligned Person Re-identification", IEEE Trans. Image Processing (T-IP), 2020 (CCF A).

[35] Chenhong Zhou, Changxing Ding*, et al., "One-Pass Multi-task Convolutional Neural Networks for Efficient Brain Tumor Segmentation", MICCAI, 2018 (top venue in medical image computing).

[36] Changxing Ding*, Kan Wang, et al., "Multi-task Learning with Coarse Priors for Robust Part-aware Person Re-identification", IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI), 2022 (CCF A).

[37] Changxing Ding*, Dacheng Tao, "Trunk-Branch Ensemble Convolutional Neural Networks for Video-based Face Recognition", IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI), 2018 (CCF A).

[38] Changxing Ding, Jonghyun Choi, et al., "Multi-directional Multi-level Dual-cross Patterns for Robust Face Recognition", IEEE Trans. Pattern Anal. Mach. Intell. (T-PAMI), 2016 (CCF A).

Courses Taught

Machine Learning, undergraduate and graduate course

Deep Learning and Computer Vision, undergraduate course

Machine Learning, graduate course taught entirely in English

Prospective Students

I lead the Multimodal Perception and Interaction Laboratory at South China University of Technology. The lab currently has two young faculty members: one professor and one associate professor. Our research lies in computer vision and embodied intelligence, focusing on the understanding and generation of human interactive motion, including the detection, reconstruction, and generation of interactive motion, for applications such as intelligent robots and virtual digital humans.

The lab continually recruits postdocs, Ph.D. students, and master's students, and also welcomes undergraduates to join for research training. The lab has ample funding and GPU hardware to support students' study and living: the monthly research stipend fully covers a graduate student's normal living expenses, and outstanding students receive additional rewards. The lab encourages master's and Ph.D. students to take part in domestic and international academic exchanges and provides substantial support for doing so. The lab also employs research assistants, so graduate students are never asked to handle chores unrelated to research.

The lab only admits students who are genuinely interested in computer vision and artificial intelligence, diligent, and strongly self-motivated. Students interested in the lab's research directions are welcome to contact us by email, attaching a CV, transcript, and a summary of main achievements. The lab is also continually recruiting young faculty members and postdocs; interested candidates may refer to the following postings:

Faculty recruitment: https://mp.weixin.qq.com/s/iS9KSfhXCjM9CDVeH8oG2g

Postdoc recruitment: https://mp.weixin.qq.com/s/i7W47DgBBZ8qFVvcamadYA