Dapeng Hu

I am currently a research scientist at A*STAR's Centre for Frontier AI Research (CFAR).

I completed my Ph.D. in June 2023 at the National University of Singapore (NUS), where my thesis supervisor was Xinchao Wang. During my Ph.D., my research focused on transfer learning and representation learning for computer vision tasks.

Before my Ph.D. studies, I earned my bachelor's degree in electronic engineering from Nanjing University, graduating with distinction.

Email  /  Google Scholar  /  LinkedIn


My research interests lie in computer vision and deep learning. Specifically, I am interested in generalizable and label-efficient deep learning, including but not limited to the following topics:

  • Domain adaptation and semi-supervised learning: How to leverage unlabeled data for training a task-specific model?
  • Representation learning and self-supervised learning: How to pre-train a generalized foundation model?
  • Model-based transfer learning and fine-tuning: How to transfer the knowledge in a pre-trained source model or foundation model to a different domain or task?
  • Large-scale empirical studies on deep learning models: How to observe, understand, and explain the behavior of foundation models?
Publications

Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation
Dapeng Hu, Jian Liang, Jun Hao Liew, Chuhui Xue, Song Bai, Xinchao Wang
Advances in Neural Information Processing Systems (NeurIPS), 2023.

We proposed MixVal, a novel target-only validation method for unsupervised domain adaptation that achieves state-of-the-art performance with improved stability.

PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation
Dapeng Hu, Jian Liang, Xinchao Wang, Chuan-Sheng Foo
Under review

We proposed PseudoCal, a source-free calibration method for unsupervised domain adaptation, which outperforms existing methods in reducing calibration error across 10 UDA methods.

UMAD: Universal Model Adaptation under Domain and Category Shift
Jian Liang*, Dapeng Hu*, Jiashi Feng, Ran He
Under review

We proposed UMAD, a novel and effective method for realistic open-set domain adaptation tasks where neither source data nor prior knowledge of the label-set overlap across domains is available during target-domain adaptation.

DINE: Domain Adaptation from Single and Multiple Black-box Predictors
Jian Liang, Dapeng Hu, Jiashi Feng, Ran He
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Oral
arXiv  /  code

We studied a realistic and challenging domain adaptation problem and proposed DINE, a safe and efficient adaptation framework that requires only black-box predictors from the source domains.

How Well Does Self-Supervised Pre-Training Perform with Streaming Data?
Dapeng Hu*, Shipeng Yan*, Qizhengqiu Lu, Lanqing Hong, Hailin Hu, Yifan Zhang, Zhenguo Li, Xinchao Wang, Jiashi Feng
International Conference on Learning Representations (ICLR), 2022.

We conducted the first thorough empirical evaluation to investigate how well self-supervised learning (SSL) performs with various streaming data types and diverse downstream tasks.

Adversarial Domain Adaptation With Prototype-Based Normalized Output Conditioner
Dapeng Hu, Jian Liang, Hanshu Yan, Qibin Hou, Yunpeng Chen
IEEE Transactions on Image Processing (TIP), Volume 30, 2021.
arXiv  /  code

We proposed NOUN, a novel, efficient, and generic conditional domain adversarial training method for domain adaptation in both image classification and segmentation.

Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer
Jian Liang, Dapeng Hu, Yunbo Wang, Ran He, Jiashi Feng
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021.
arXiv  /  code

We enhanced SHOT into SHOT++ with an intra-domain labeling transfer strategy, achieving source data-free adaptation results that surpass even state-of-the-art data-dependent methods.

No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
Mi Luo, Fei Chen, Dapeng Hu, Yifan Zhang, Jian Liang, Jiashi Feng
Advances in Neural Information Processing Systems (NeurIPS), 2021.

We proposed CCVR, a simple and universal classifier calibration algorithm for federated learning.

Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng
Advances in Neural Information Processing Systems (NeurIPS), 2021.
arXiv  /  code

We proposed a theoretically and empirically promising Core-tuning method for fine-tuning contrastive self-supervised models.

Domain Adaptation with Auxiliary Target Domain-Oriented Classifier
Jian Liang, Dapeng Hu, Jiashi Feng
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
arXiv  /  code

We proposed ATDOC, a simple yet effective framework that combats classifier bias and offers a novel perspective on addressing domain shift.

Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
Jian Liang, Dapeng Hu, Jiashi Feng
International Conference on Machine Learning (ICML), 2020.
arXiv  /  code

We were among the first to look into a practical unsupervised domain adaptation setting called "source-free" DA and proposed a simple yet generic representation learning framework named SHOT.

A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation
Jian Liang, Yunbo Wang, Dapeng Hu, Ran He, Jiashi Feng
European Conference on Computer Vision (ECCV), 2020.
arXiv  /  code

We tackled partial domain adaptation by augmenting the target domain and transforming the task into a standard unsupervised domain adaptation problem.

Professional Service

Journal Reviewer: TMLR 2022, TPAMI 2022, TKDE 2022

Conference Reviewer: ICML 2021-2023, NeurIPS 2021-2023, CVPR 2022-2023, ICCV 2023, ICLR 2022-2024

Head TA, EE6934: Deep Learning (Advanced), 2020 Spring

Head TA, EE5934: Deep Learning, 2020 Spring

TA, EE4704: Image Processing and Analysis, 2019 Fall

TA, EE2028: Microcontroller Programming and Interfacing, 2019 Fall

Awards and Honors