no code implementations • 28 May 2024 • Ting Wang, Zipei Yan, Jizhou Li, XiLe Zhao, Chao Wang, Michael Ng
This approach enables us to harness both the low-rankness of the matrix factorization and the continuity of the neural representation in a self-supervised manner.
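A minimal sketch of the general idea, combining a low-rank factorization with coordinate-MLP (neural representation) factors and fitting it self-supervised to the observation; the factor architecture, rank, and training loop here are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class FactorMLP(nn.Module):
    """Maps a 1-D coordinate to an r-dimensional factor (continuous neural representation)."""
    def __init__(self, rank=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, rank),
        )

    def forward(self, t):            # t: (n, 1) coordinates in [0, 1]
        return self.net(t)           # -> (n, rank)

# Noisy observation Y of an (unknown) low-rank matrix; placeholder data.
Y = torch.randn(64, 64)
rows = torch.linspace(0, 1, 64).unsqueeze(1)
cols = torch.linspace(0, 1, 64).unsqueeze(1)

U, V = FactorMLP(), FactorMLP()
opt = torch.optim.Adam(list(U.parameters()) + list(V.parameters()), lr=1e-3)

for step in range(2000):
    X = U(rows) @ V(cols).T          # rank-8 reconstruction: (64, 64)
    loss = ((X - Y) ** 2).mean()     # self-supervised: fit the observation itself
    opt.zero_grad(); loss.backward(); opt.step()

# X inherits low-rankness from the factorization and smoothness from the MLPs.
```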
1 code implementation • 3 Jan 2024 • Zipei Yan, Zhengji Liu, Jizhou Li
Implicit Neural Representation (INR) has emerged as an effective method for unsupervised image denoising.
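A minimal sketch of INR-based unsupervised denoising, assuming a plain coordinate MLP fitted to the noisy image with early stopping as the implicit regularizer (the MLP's spectral bias fits the signal before the noise); the architecture and step count are illustrative:

```python
import torch
import torch.nn as nn

noisy = torch.rand(32, 32)                        # placeholder noisy image
ys, xs = torch.meshgrid(torch.linspace(0, 1, 32),
                        torch.linspace(0, 1, 32), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)   # (H*W, 2) pixel coordinates
target = noisy.reshape(-1, 1)

inr = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 1))
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)

for step in range(500):    # stop early: the network fits signal before noise
    loss = ((inr(coords) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

denoised = inr(coords).reshape(32, 32).detach()
```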
no code implementations • 13 May 2023 • Shuai Wang, Zipei Yan, Daoan Zhang, Zhongsen Li, Sirui Wu, Wenxuan Chen, Rui Li
In contrast, the IID assumption does not hold in many real-world applications, especially in medical image analysis.
no code implementations • 13 May 2023 • Shuai Wang, Daoan Zhang, Zipei Yan, Shitong Shao, Rui Li
In Stage I, we train the target model from scratch with soft pseudo-labels generated by the source model in a knowledge distillation manner.
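A minimal sketch of that distillation step, assuming PyTorch; `source_model` and `target_model` are hypothetical names, and the temperature is an illustrative choice rather than the paper's recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened class distributions."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T

# Inside the training loop:
# with torch.no_grad():
#     teacher_logits = source_model(x)    # soft pseudo-labels from the source model
# loss = distillation_loss(target_model(x), teacher_logits)
```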
1 code implementation • CVPR 2023 • Shuai Wang, Daoan Zhang, Zipei Yan, JianGuo Zhang, Rui Li
Test-time adaptation (TTA) aims to adapt deep neural networks when receiving out-of-distribution test-domain samples.
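To make the setting concrete, here is a sketch of one common TTA baseline, entropy minimization on unlabeled test batches (in the spirit of Tent); it illustrates the problem setup, not necessarily this paper's method, and assumes a PyTorch classifier `model`:

```python
import torch
import torch.nn.functional as F

def tta_step(model, x, optimizer):
    """Adapt on one unlabeled test batch by minimizing prediction entropy."""
    probs = F.softmax(model(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```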
1 code implementation • 17 Mar 2023 • Shuai Wang, Zipei Yan, Daoan Zhang, Haining Wei, Zhongsen Li, Rui Li
Specifically, our ProtoKD not only distills pixel-wise knowledge from multi-modality data to single-modality data but also transfers intra-class and inter-class feature variations, so that the student model can learn a more robust feature representation from the teacher model and perform inference with only a single modality.
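A rough sketch of the two loss terms described above, assuming PyTorch; the prototype construction and loss forms are illustrative guesses at the idea, not ProtoKD's exact formulation. Features are (B, C, H, W) maps and labels are (B, H, W) integer class maps:

```python
import torch
import torch.nn.functional as F

def pixelwise_kd(student_logits, teacher_logits, T=2.0):
    """Per-pixel KL between softened class distributions (teacher -> student)."""
    p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(p, q, reduction="batchmean") * T * T

def class_prototypes(feats, labels, num_classes):
    """Mean feature vector per class (masked average over all pixels)."""
    B, C, H, W = feats.shape
    f = feats.permute(0, 2, 3, 1).reshape(-1, C)        # (B*H*W, C)
    y = labels.reshape(-1)
    return torch.stack([f[y == k].mean(0) if (y == k).any()
                        else f.new_zeros(C) for k in range(num_classes)])

def relation_transfer(student_feats, teacher_feats, labels, num_classes):
    """Match the inter-class prototype similarity structure of the teacher."""
    ps = F.normalize(class_prototypes(student_feats, labels, num_classes), dim=1)
    pt = F.normalize(class_prototypes(teacher_feats, labels, num_classes), dim=1)
    return F.mse_loss(ps @ ps.T, pt @ pt.T)             # (K, K) similarity matrices
```

The total training loss would combine both terms, e.g. `pixelwise_kd(...) + lam * relation_transfer(...)`, with the weighting `lam` a hypothetical hyperparameter.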