Search Results for author: Zhiyang Dou

Found 12 papers, 4 papers with code

AutoCV: Empowering Reasoning with Automated Process Labeling via Confidence Variation

1 code implementation • 27 May 2024 • Jianqiao Lu, Zhiyang Dou, Hongru Wang, Zeyu Cao, Jianbo Dai, Yingjia Wan, Yinya Huang, Zhijiang Guo

We experimentally validate that the confidence variations learned by the verification model trained on the final answer correctness can effectively identify errors in the reasoning steps.

Part123: Part-aware 3D Reconstruction from a Single-view Image

no code implementations • 27 May 2024 • Anran Liu, Cheng Lin, Yuan Liu, Xiaoxiao Long, Zhiyang Dou, Hao-Xiang Guo, Ping Luo, Wenping Wang

However, all existing methods represent the target object as a closed mesh devoid of any structural information, thus neglecting the part-based structure of the reconstructed shape, which is crucial for many downstream applications.

LaserHuman: Language-guided Scene-aware Human Motion Generation in Free Environment

1 code implementation • 20 Mar 2024 • Peishan Cong, Ziyi Wang, Zhiyang Dou, Yiming Ren, Wei Yin, Kai Cheng, Yujing Sun, Xiaoxiao Long, Xinge Zhu, Yuexin Ma

Language-guided scene-aware human motion generation has great significance for entertainment and robotics.

Disentangled Clothed Avatar Generation from Text Descriptions

no code implementations • 8 Dec 2023 • Jionghao Wang, Yuan Liu, Zhiyang Dou, Zhengming Yu, Yongqing Liang, Xin Li, Wenping Wang, Rong Xie, Li Song

In this paper, we introduce a novel text-to-avatar generation method that separately generates the human body and the clothes and allows high-quality animation on the generated avatar.

Virtual Try-on

Boosting Segment Anything Model Towards Open-Vocabulary Learning

1 code implementation • 6 Dec 2023 • Xumeng Han, Longhui Wei, Xuehui Yu, Zhiyang Dou, Xin He, Kuiran Wang, Zhenjun Han, Qi Tian

The recent Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model, showcasing potent zero-shot generalization and flexible prompting.

Object Localization

TLControl: Trajectory and Language Control for Human Motion Synthesis

no code implementations • 28 Nov 2023 • Weilin Wan, Zhiyang Dou, Taku Komura, Wenping Wang, Dinesh Jayaraman, Lingjie Liu

Controllable human motion synthesis is essential for applications in AR/VR, gaming, movies, and embodied AI.

Motion Synthesis

Wonder3D: Single Image to 3D using Cross-Domain Diffusion

1 code implementation • 23 Oct 2023 • Xiaoxiao Long, Yuan-Chen Guo, Cheng Lin, Yuan Liu, Zhiyang Dou, Lingjie Liu, Yuexin Ma, Song-Hai Zhang, Marc Habermann, Christian Theobalt, Wenping Wang

In this work, we introduce Wonder3D, a novel method for efficiently generating high-fidelity textured meshes from single-view images. Recent methods based on Score Distillation Sampling (SDS) have shown the potential to recover 3D geometry from 2D diffusion priors, but they typically suffer from time-consuming per-shape optimization and inconsistent geometry.

Image to 3D

C$\cdot$ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters

no code implementations • 20 Sep 2023 • Zhiyang Dou, Xuelin Chen, Qingnan Fan, Taku Komura, Wenping Wang

We present C$\cdot$ASE, an efficient and effective framework that learns conditional Adversarial Skill Embeddings for physics-based characters.

Imitation Learning
