DreamMakeup: Face Makeup Customization using Latent Diffusion Models

Abstract

The exponential growth of the global makeup market has paralleled advancements in virtual makeup simulation technology. Despite the notable progress facilitated by generative adversarial networks (GANs), their application to facial makeup simulation encounters significant challenges, including training instability and limited customization capabilities. To address these challenges, this paper introduces DreamMakeup, a novel diffusion model for makeup customization that leverages the inherent advantages of diffusion models for superior controllability and precise real-image editing. DreamMakeup employs early-stopped DDIM inversion to preserve facial structure and identity while enabling extensive customization through various conditioning inputs such as reference images, specific RGB colors, and textual descriptions. Our model demonstrates notable improvements over existing GAN-based frameworks, including enhanced customization, color-matching capability, and compatibility with textual descriptions, at affordable computational cost.
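To make the core mechanism named in the abstract concrete, the following is a minimal sketch of early-stopped DDIM inversion in PyTorch. It is an illustration under stated assumptions, not the paper's implementation: the `eps_model` noise-predictor interface, the linear beta schedule, and the 40% stopping ratio are all hypothetical placeholders.

```python
# Sketch: deterministic DDIM inversion that stops early (t << T), so the
# inverted latent keeps coarse facial structure and identity. Assumptions:
# a noise predictor eps_model(x, t) and a linear beta schedule.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # assumed linear schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

@torch.no_grad()
def early_stopped_ddim_inversion(x0, eps_model, stop_ratio=0.4, num_steps=50):
    """Invert x_0 toward x_T with deterministic DDIM steps, halting at
    t = stop_ratio * T instead of full noise."""
    timesteps = torch.linspace(0, int(stop_ratio * (T - 1)), num_steps).long()
    x = x0
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        eps = eps_model(x, t_cur)                    # predicted noise at t_cur
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps  # inverse DDIM step
    return x  # partially noised latent retaining facial structure
```

In a pipeline along the lines the abstract describes, the makeup conditioning (a reference image, an RGB color, or a text prompt) would then steer the reverse denoising that starts from this intermediate latent, so edits land on appearance while the preserved structure keeps the subject's identity.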