Search Results for author: Jianhong Bai

Found 7 papers, 4 papers with code

InstructAvatar: Text-Guided Emotion and Motion Control for Avatar Generation

1 code implementation • 24 May 2024 • Yuchi Wang, Junliang Guo, Jianhong Bai, Runyi Yu, Tianyu He, Xu Tan, Xu Sun, Jiang Bian

Recent talking avatar generation models have made strides in achieving realistic and accurate lip synchronization with the audio, but often fall short in controlling and conveying detailed expressions and emotions of the avatar, making the generated video less vivid and controllable.

UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing

no code implementations • 20 Feb 2024 • Jianhong Bai, Tianyu He, Yuchi Wang, Junliang Guo, Haoji Hu, Zuozhu Liu, Jiang Bian

Recent advances in text-guided video editing have showcased promising results in appearance editing (e.g., stylization).

Video Editing

Towards Distribution-Agnostic Generalized Category Discovery

1 code implementation • NeurIPS 2023 • Jianhong Bai, Zuozhu Liu, Hualiang Wang, Ruizhe Chen, Lianrui Mu, Xiaomeng Li, Joey Tianyi Zhou, Yang Feng, Jian Wu, Haoji Hu

In this paper, we formally define a more realistic task as distribution-agnostic generalized category discovery (DA-GCD): generating fine-grained predictions for both close- and open-set classes in a long-tailed open-world setting.

Contrastive Learning • Transfer Learning

On the Effectiveness of Out-of-Distribution Data in Self-Supervised Long-Tail Learning

2 code implementations • 8 Jun 2023 • Jianhong Bai, Zuozhu Liu, Hualiang Wang, Jin Hao, Yang Feng, Huanpeng Chu, Haoji Hu

Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect.

Long-tail Learning • Representation Learning • +1
