HairFastGAN: Realistic and Robust Hair Transfer with a Fast Encoder-Based Approach.
Abstract
Our paper addresses the complex task of transferring a hairstyle from a reference image to an input photo for virtual hair try-on. This task is challenging due to the need to adapt to various photo poses, the sensitivity of hairstyles, and the lack of objective metrics. The current state-of-the-art hairstyle transfer methods use an optimization process for different parts of the approach, making them inexcusably slow. At the same time, faster encoder-based models are of very low quality because they either operate in StyleGAN’s W+ space or use other low-dimensional image generators. Additionally, both approaches struggle with hairstyle transfer when the source pose is very different from the target pose, because they either don’t consider the pose at all or deal with it inefficiently. In our paper, we present the HairFast model, which uniquely solves these problems and achieves high resolution, near real-time performance, and superior reconstruction compared to optimization-based methods. Our solution includes a new architecture operating in the FS latent space of StyleGAN, an enhanced inpainting approach, and improved encoders for better alignment and color transfer, as well as a new encoder for post-processing. The effectiveness of our approach is demonstrated on realism metrics after random hairstyle transfer and on reconstruction when the original hairstyle is transferred. In the most difficult scenario of transferring both the shape and color of a hairstyle from different images, our method runs in less than a second on an Nvidia V100. Our code is available at https://github.com/AIRI-Institute/HairFastGAN.
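The abstract describes a four-stage encoder pipeline: embedding into StyleGAN’s FS latent space, an alignment step for the hair shape, a color-transfer (blending) step, and a post-processing encoder. Below is a minimal sketch of that data flow. Every module name, tensor shape, and wiring choice here (DummyEncoder, hair_transfer, the 512-dimensional codes) is an illustrative assumption, not the authors’ implementation; the real encoders are trained networks and the output is decoded by a StyleGAN generator.

```python
import torch
import torch.nn as nn

# All names, shapes, and wiring below are illustrative assumptions; the
# actual HairFast encoders are trained networks over StyleGAN's FS space.

LATENT_S = 512              # size of the style code S (assumption)
FEAT_C, FEAT_HW = 512, 32   # channels / spatial size of the feature tensor F (assumption)

class DummyEncoder(nn.Module):
    """Stand-in that maps an image to an (F, S) pair in a mock FS space."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, FEAT_C, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(FEAT_HW)
        self.fc = nn.Linear(FEAT_C, LATENT_S)

    def forward(self, img):
        feat = self.pool(self.conv(img))        # F: (B, 512, 32, 32)
        style = self.fc(feat.mean(dim=(2, 3)))  # S: (B, 512)
        return feat, style

def hair_transfer(face, shape_ref, color_ref, embed, align, blend, post):
    """Embed -> align shape -> blend color -> post-process, all in latent
    space, so the generator only has to decode once at the end."""
    f_face, s_face = embed(face)
    f_shape, _ = embed(shape_ref)
    _, s_color = embed(color_ref)
    # Alignment: predict a feature tensor whose hair region follows the
    # shape reference while the rest follows the input face.
    f_aligned = align(torch.cat([f_face, f_shape], dim=1))
    # Color transfer: mix the style codes of the face and color reference.
    s_blended = blend(torch.cat([s_face, s_color], dim=-1))
    # Post-processing: refine F to restore details lost in earlier steps.
    f_final = post(f_aligned)
    return f_final, s_blended  # a generator would decode (F, S) into the result image

embed = DummyEncoder()
align = nn.Conv2d(2 * FEAT_C, FEAT_C, kernel_size=1)
blend = nn.Linear(2 * LATENT_S, LATENT_S)
post = nn.Conv2d(FEAT_C, FEAT_C, kernel_size=3, padding=1)

face, shape_ref, color_ref = (torch.randn(1, 3, 256, 256) for _ in range(3))
f, s = hair_transfer(face, shape_ref, color_ref, embed, align, blend, post)
print(f.shape, s.shape)  # torch.Size([1, 512, 32, 32]) torch.Size([1, 512])
```

The sketch only illustrates why the approach can run in near real time: all three references are combined by feed-forward encoders in latent space, with no per-image optimization. For the trained pipeline itself, see the linked repository.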