Text-based Image Editing
22 papers with code • 1 benchmark • 2 datasets
Libraries
Use these libraries to find Text-based Image Editing models and implementations.
Most implemented papers
LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images
We propose an automated algorithm to stress-test a trained visual model by generating language-guided counterfactual test images (LANCE).
ManiCLIP: Multi-Attribute Face Manipulation from Text
In this paper we present a novel multi-attribute face manipulation method based on textual descriptions.
HIVE: Harnessing Human Feedback for Instructional Visual Editing
Incorporating human feedback has been shown to be crucial to align text generated by large language models to human preferences.
SVDiff: Compact Parameter Space for Diffusion Fine-Tuning
Diffusion models have achieved remarkable success in text-to-image generation, enabling the creation of high-quality images from text prompts or other modalities.
StyleDiffusion: Prompt-Embedding Inversion for Text-Based Editing
A significant research effort is focused on exploiting the impressive capabilities of pretrained diffusion models for image editing.
Controlling Geometric Abstraction and Texture for Artistic Images
We present a novel method for the interactive control of geometric abstraction and texture in artistic images.
Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing
Large-scale text-to-image generative models have been a ground-breaking development in generative AI, with diffusion models showing their astounding ability to synthesize convincing images following an input text prompt.
Direct Inversion: Boosting Diffusion-based Editing with 3 Lines of Code
In diffusion-based editing, where a source image is edited according to a target prompt, the process begins by obtaining a noisy latent vector corresponding to the source image via the diffusion model.
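The inversion step described above, mapping a source image to a noisy latent that can later be re-denoised under a new prompt, can be sketched with deterministic DDIM updates. This is a minimal illustrative sketch, not the paper's implementation: the function names are made up, and a zero-noise stand-in replaces the trained noise-prediction network that a real pipeline would call.

```python
import numpy as np

def ddim_step(x, eps, a_t, a_next):
    # Predict the clean image x0 from the current latent and the noise estimate,
    # then take a deterministic (eta = 0) DDIM update toward noise level a_next.
    x0 = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_next) * x0 + np.sqrt(1.0 - a_next) * eps

def invert(x0, eps_model, alphas):
    # Run the deterministic update "backwards": clean image -> noisy latent.
    # `alphas` is a decreasing schedule of cumulative alpha values (1 ~ clean).
    x = x0
    for a_t, a_next in zip(alphas[:-1], alphas[1:]):
        x = ddim_step(x, eps_model(x, a_t), a_t, a_next)
    return x

def denoise(xT, eps_model, alphas):
    # Standard sampling direction: noisy latent -> clean image.
    rev = alphas[::-1]
    x = xT
    for a_t, a_next in zip(rev[:-1], rev[1:]):
        x = ddim_step(x, eps_model(x, a_t), a_t, a_next)
    return x

# Toy round trip with a stand-in predictor that always returns zero noise;
# with an exact predictor, inverting and then denoising recovers the input.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 4))
alphas = np.linspace(0.999, 0.1, 50)
eps_zero = lambda x, a: np.zeros_like(x)
xT = invert(x0, eps_zero, alphas)
rec = denoise(xT, eps_zero, alphas)
assert np.allclose(rec, x0)
```

In practice `eps_model` is a trained UNet conditioned on the source prompt, and editing methods such as Direct Inversion differ precisely in how faithfully this inversion preserves the source content before the target prompt takes over.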
Inversion-Free Image Editing with Natural Language
We show that when the initial sample is known, a special variance schedule reduces the denoising step to the same form as the multi-step consistency sampling.
SpecRef: A Fast Training-free Baseline of Specific Reference-Condition Real Image Editing
To increase user freedom, we propose a new task called Specific Reference Condition Real Image Editing, which allows the user to provide a reference image to further control the outcome, such as replacing an object with a specific one.