Robust Manga Page Colorization via Coloring Latent Space
Abstract
Manga (Japanese comics) are commonly drawn with black ink on paper. Colorizing manga pages can enrich the visual content and provide a better reading experience. However, existing colorization approaches are not sufficiently robust.
In this paper, we propose a two-stage approach to manga page colorization that supports sampling and color modification via color hints. In the first stage, we employ the Pixel2Style2Pixel architecture to map the black-and-white manga page into the latent space of a StyleGAN pretrained on heavily blurred colored manga images, which we call the Coloring Latent Space. The latent vector is modified, either automatically or manually, and fed into the StyleGAN synthesis network to generate a coloring draft that sets the overall color distribution of the image. In the second stage, a heavyweight Pix2Pix-like conditional GAN fuses the information from the coloring draft and user-defined color hints and generates the final high-quality colorization. Our method partially overcomes the multimodality of the problem and generates diverse yet consistent colorings without user input. Visual comparison, quantitative evaluation with the Fréchet Inception Distance, and qualitative evaluation via Mean Opinion Score demonstrate the superiority of our approach over the existing state-of-the-art manga page colorization method.
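The sketch below illustrates how the two stages described in the abstract could be wired together at inference time. It is a minimal, hypothetical PyTorch outline, not the authors' released code: the module names (PSPEncoder, StyleGANSynthesis, RefinementGAN), tensor layouts, and the additive latent edit are assumptions made for illustration only.

```python
# Hypothetical sketch of the two-stage colorization pipeline described in the abstract.
# All module names and input conventions are assumptions, not the paper's actual code.
from typing import Optional

import torch
import torch.nn as nn


class MangaColorizer(nn.Module):
    def __init__(self, encoder: nn.Module, synthesis: nn.Module, refiner: nn.Module):
        super().__init__()
        self.encoder = encoder      # Pixel2Style2Pixel-style encoder into StyleGAN latents
        self.synthesis = synthesis  # StyleGAN synthesis net pretrained on blurred color pages
        self.refiner = refiner      # Pix2Pix-like conditional GAN generator

    def forward(
        self,
        bw_page: torch.Tensor,                     # black-and-white manga page, (N, 1, H, W)
        color_hints: torch.Tensor,                 # sparse user color hints, (N, 3, H, W)
        latent_offset: Optional[torch.Tensor] = None,
    ) -> torch.Tensor:
        # Stage 1: map the page into the Coloring Latent Space.
        latent = self.encoder(bw_page)
        if latent_offset is not None:
            # Sampling / manual color edits are modeled here as a shift of the latent code.
            latent = latent + latent_offset
        draft = self.synthesis(latent)             # low-frequency coloring draft

        # Stage 2: fuse line art, coloring draft, and user hints into the final colorization.
        refiner_input = torch.cat([bw_page, draft, color_hints], dim=1)
        return self.refiner(refiner_input)
```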