3D Generation

77 papers with code • 0 benchmarks • 0 datasets

3D Generation covers methods that synthesize 3D content such as meshes, point clouds, signed distance fields, and radiance fields, typically conditioned on text prompts or images.

Most implemented papers

Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting

WU-CVGL/MVControl-threestudio 15 Mar 2024

Building upon our MVControl architecture, we employ a unique hybrid diffusion guidance method to direct the optimization process.
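
Since the snippet above only names the method, here is a minimal, hypothetical sketch of what a hybrid diffusion guidance step could look like: score-distillation gradients from two diffusion priors (e.g., a 2D ControlNet and a multi-view model) blended into one update. All names and the weighting are illustrative assumptions, not the MVControl API.

```python
# Hypothetical sketch only: blend score-distillation gradients from two
# assumed diffusion priors. `unet_2d` and `unet_mv` are illustrative
# noise-prediction callables, not part of the MVControl codebase.
import torch

def sds_grad(unet, image, t, noise):
    """One score-distillation-style gradient: (eps_pred - eps)."""
    noisy = image + t * noise          # simplified forward noising
    eps_pred = unet(noisy, t)          # assumed noise-prediction network
    return eps_pred - noise

def hybrid_guidance(image, unet_2d, unet_mv, lam=0.5):
    t = torch.rand(())                 # random diffusion timestep in [0, 1)
    noise = torch.randn_like(image)
    g_2d = sds_grad(unet_2d, image, t, noise)   # 2D (e.g., ControlNet) prior
    g_mv = sds_grad(unet_mv, image, t, noise)   # multi-view diffusion prior
    return lam * g_2d + (1.0 - lam) * g_mv      # weighted hybrid gradient
```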

DreamView: Injecting View-specific Text Guidance into Text-to-3D Generation

isee-laboratory/dreamview 9 Apr 2024

Text-to-3D generation, which synthesizes 3D assets from an overall text description, has progressed significantly.

RealPoint3D: Point Cloud Generation from a Single Image with Complex Background

Yan-Xia/RealPoint3D 8 Sep 2018

The input image, together with a retrieved similar 3D shape, is fed into the proposed network to generate a fine-grained 3D point cloud.

Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification

fei960922/GPointNet CVPR 2021

We propose a generative model of unordered point sets, such as point clouds, in the form of an energy-based model, where the energy function is parameterized by an input-permutation-invariant bottom-up neural network.
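
The key property named above, input-permutation invariance, is easy to illustrate: apply the same MLP to every point and pool with a symmetric operator, so the scalar energy cannot depend on point order. This is a minimal PointNet-style sketch, not the GPointNet reference implementation.

```python
import torch
import torch.nn as nn

class PointSetEnergy(nn.Module):
    """Scalar energy over an unordered point set (illustrative sketch)."""
    def __init__(self, dim=3, hidden=128):
        super().__init__()
        self.point_mlp = nn.Sequential(        # shared weights across points
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)       # pooled features -> energy

    def forward(self, pts):                    # pts: (batch, n_points, dim)
        feats = self.point_mlp(pts)            # per-point features
        pooled = feats.max(dim=1).values       # symmetric pool: order-invariant
        return self.head(pooled).squeeze(-1)   # one energy per point cloud

# Permuting the points leaves the energy unchanged:
energy = PointSetEnergy()
x = torch.randn(2, 1024, 3)
assert torch.allclose(energy(x), energy(x[:, torch.randperm(1024)]))
```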

3D Pose Transfer with Correspondence Learning and Mesh Refinement

chaoyuesong/3d-corenet NeurIPS 2021

3D pose transfer aims to transfer the pose of a source mesh to a target mesh while keeping the identity (e.g., body shape) of the target mesh.

SDF-StyleGAN: Implicit SDF-Based StyleGAN for 3D Shape Generation

zhengxinyang/sdf-stylegan 24 Jun 2022

We further complement the evaluation metrics of 3D generative models with shading-image-based Fréchet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes.
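
For reference, the FID cited above is the standard Fréchet distance between Gaussian fits (mu, Sigma) of Inception features; SDF-StyleGAN's variant computes it over rendered shading images of the shapes rather than natural photos:

```latex
% Standard FID between feature statistics of real (r) and generated (g) sets:
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\bigl( \Sigma_r + \Sigma_g
             - 2 (\Sigma_r \Sigma_g)^{1/2} \bigr)
```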

Neural Wavelet-domain Diffusion for 3D Shape Generation

edward1997104/Wavelet-Generation 19 Sep 2022

This paper presents a new approach to 3D shape generation that enables direct generative modeling on a continuous implicit representation in the wavelet domain.
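
To make the "wavelet domain" concrete, here is a sketch of the round trip with PyWavelets: decompose a dense SDF grid into multi-scale coefficients, let a generative model operate on the compact coarse volume, and invert the transform to recover an implicit shape. The diffusion step is stubbed out, and the wavelet choice and level are assumptions.

```python
import numpy as np
import pywt

sdf = np.random.randn(64, 64, 64)        # stand-in for a signed distance grid

# Forward 3D wavelet transform: coarse approximation + detail coefficients.
coeffs = pywt.wavedecn(sdf, wavelet='bior1.1', level=2)
coarse = coeffs[0]                       # compact low-frequency volume

# A diffusion model would be trained to generate `coarse` (and optionally the
# detail coefficients); a small perturbation stands in for sampling here.
coeffs[0] = coarse + 0.01 * np.random.randn(*coarse.shape)

# Inverse transform maps generated coefficients back to an SDF grid.
sdf_rec = pywt.waverecn(coeffs, wavelet='bior1.1')
```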

RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation

anciukevicius/renderdiffusion CVPR 2023

In this paper, we present RenderDiffusion, the first diffusion model for 3D generation and inference, trained using only monocular 2D supervision.
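
The trick that makes monocular 2D supervision sufficient is that the denoiser itself contains a 3D representation and a renderer, so every denoising step is 3D-consistent. A hypothetical sketch of that loop, with `encoder` and `render` as assumed stand-ins:

```python
import torch

def denoise(encoder, render, x_noisy, t, cam):
    triplane = encoder(x_noisy, t)       # lift the noisy 2D image to 3D features
    return render(triplane, cam)         # re-render: the "denoised" prediction

def training_loss(encoder, render, x0, t, cam):
    noise = torch.randn_like(x0)
    x_noisy = x0 + t * noise             # simplified forward noising process
    x0_pred = denoise(encoder, render, x_noisy, t, cam)
    return ((x0_pred - x0) ** 2).mean()  # plain 2D loss; no 3D labels needed
```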

Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation

pals-ttic/sjc CVPR 2023

We propose to apply the chain rule to the learned gradients, back-propagating the score of a diffusion model through the Jacobian of a differentiable renderer, which we instantiate as a voxel radiance field.
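
In code, the chaining amounts to one reverse-mode pass: query the frozen 2D score on a differentiable render, then let autodiff multiply it by the renderer's Jacobian to update the 3D parameters. A minimal sketch with assumed `render` and `score_model` callables:

```python
import torch

def sjc_step(theta, render, score_model, cam, lr=1e-2):
    theta = theta.detach().requires_grad_(True)
    img = render(theta, cam)          # differentiable render of the voxel field
    with torch.no_grad():
        score = score_model(img)      # 2D score: grad of log p(img) w.r.t. img
    # Chain rule via autodiff: theta.grad = -J^T score, J = d(img)/d(theta).
    img.backward(gradient=-score)
    return theta.detach() - lr * theta.grad   # = theta + lr * J^T score (ascent)
```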

NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors

cvlab-columbia/zero123 CVPR 2023

Formulating single-view reconstruction as an image-conditioned 3D generation problem, we optimize the NeRF representation by minimizing a diffusion loss on its renderings from arbitrary views with a pretrained image diffusion model, under the input-view constraint.
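
Concretely, the objective couples a pixel-space constraint on the one known view with a diffusion loss on random novel views. A hypothetical sketch; `nerf_render` and `diffusion_loss` are illustrative stand-ins, not the repository's API:

```python
import torch

def single_view_loss(nerf_render, diffusion_loss, input_img, input_cam,
                     novel_cam, w_diff=1.0):
    # Input-view constraint: the render from the known camera must match.
    recon = ((nerf_render(input_cam) - input_img) ** 2).mean()
    # Pretrained diffusion prior scores a rendering from an arbitrary view.
    prior = diffusion_loss(nerf_render(novel_cam))
    return recon + w_diff * prior
```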