Research
My research interests lie in generalizable and trustworthy deep learning with minimal human annotation. Specifically, I am interested in transfer learning and representation learning, including but not limited to the following topics:
- Alignment and safety: How to ensure the alignment and safety of foundation models such as LLMs?
- Test-time learning: How to ensure safe test-time model adaptation, selection, and calibration with access only to unlabeled test data?
- Pre-training and self-supervised learning: How to pre-train a generalizable foundation model with massive uncurated data?
- Model-based transfer learning and fine-tuning: How to transfer the knowledge of a pre-trained source model or foundation model to a different downstream domain or task?
- Large-scale empirical studies of deep learning models: How to uncover, understand, and explain the behavior of foundation models?
Publications
UMAD: Universal Model Adaptation under Domain and Category Shift
Jian Liang*, Dapeng Hu*, Jiashi Feng, Ran He
Technical report, arXiv
We propose UMAD, a novel and effective method for realistic open-set domain adaptation, where neither the source data nor prior knowledge about the label-set overlap across domains is available when adapting to the target domain.
Professional Service
Journal Reviewer: TPAMI, IJCV, TIP, TMLR, TKDE
Conference Reviewer: ICML 2021-2023, NeurIPS 2021-2024, ICLR 2022-2025, CVPR 2022-2025, ICCV 2023, ECCV 2024, AISTATS 2025, AAAI 2025
Teaching
Head TA, EE6934: Deep Learning (Advanced), Spring 2020
Head TA, EE5934: Deep Learning, Spring 2020
TA, EE4704: Image Processing and Analysis, Fall 2019
TA, EE2028: Microcontroller Programming and Interfacing, Fall 2019