Video colorization aims to add color to grayscale or monochrome videos. Although existing methods have achieved noteworthy results in image colorization, video colorization presents more formidable obstacles due to the additional requirement of temporal consistency. Moreover, there has rarely been a systematic review of video colorization methods. In this paper, we review existing state-of-the-art video colorization methods. Because maintaining spatial-temporal consistency is pivotal to video colorization, we further review these methods from this novel perspective to gain deeper insight into their evolution. Video colorization methods can be categorized into four main categories: optical-flow-based methods, scribble-based methods, exemplar-based methods, and fully automatic methods. However, optical-flow-based methods rely heavily on accurate optical-flow estimation, scribble-based methods require extensive user interaction and modification, exemplar-based methods face challenges in obtaining suitable reference images, and fully automatic methods often struggle to meet specific colorization requirements. We also discuss the existing challenges and highlight several future research opportunities worth exploring.
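To make the optical-flow-based category concrete, below is a minimal sketch of its core idea: the chrominance of a previously colorized frame is backward-warped into the current frame along estimated optical flow. The function and variable names (`propagate_color`, `flow`) are illustrative assumptions, not any specific paper's API, and nearest-neighbor sampling stands in for the bilinear sampling and occlusion handling real methods use.

```python
import numpy as np

def propagate_color(prev_ab, flow):
    """Backward-warp the chrominance (ab) channels of the previous
    colorized frame into the current frame along estimated optical flow.

    prev_ab: (H, W, 2) chrominance of the previous frame.
    flow:    (H, W, 2) flow from the current frame back to the previous one.
    Returns a (H, W, 2) color proposal for the current frame.
    """
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Sample locations in the previous frame (nearest-neighbor for brevity).
    src_x = np.clip(np.round(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys + flow[..., 1]), 0, h - 1).astype(int)
    return prev_ab[src_y, src_x]

# Toy usage: a constant flow pointing one pixel to the left.
prev_ab = np.random.rand(4, 4, 2).astype(np.float32)
flow = np.full((4, 4, 2), [-1.0, 0.0], dtype=np.float32)
curr_ab = propagate_color(prev_ab, flow)
print(curr_ab.shape)  # (4, 4, 2)
```

As the abstract notes, the quality of this propagation hinges entirely on the accuracy of the estimated flow: flow errors leak wrong colors into the current frame.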
Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, there has been relatively little research effort on video colorization, and existing methods often suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, by jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme to minimize the differences in predictions obtained using different time steps. SRL does not require any ground-truth color videos for training and can further improve temporal consistency. Experiments demonstrate that our method not only produces visually pleasing colorized video, but also achieves clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE, and code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization.
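The abstract does not spell out SRL's exact formulation; the following is a minimal PyTorch-style sketch under a stated assumption: two colorization predictions for the same frame, obtained by starting propagation at different time steps (e.g., forward vs. backward passes), are pulled toward each other with an L1 penalty, with no ground-truth color video involved. The names `srl_loss`, `pred_fwd`, and `pred_bwd` are illustrative, not the paper's actual identifiers.

```python
import torch

def srl_loss(pred_a: torch.Tensor, pred_b: torch.Tensor) -> torch.Tensor:
    """Self-regularization: penalize disagreement between two colorizations
    of the same frame produced from different starting time steps.
    No ground-truth color frame is required.
    """
    return torch.mean(torch.abs(pred_a - pred_b))

# Toy usage: two hypothetical (batch, 2, H, W) chrominance predictions
# for the same frame, e.g. from forward and backward propagation passes.
pred_fwd = torch.rand(1, 2, 64, 64, requires_grad=True)
pred_bwd = torch.rand(1, 2, 64, 64)
loss = srl_loss(pred_fwd, pred_bwd)
loss.backward()
print(loss.item())
```

Because the penalty compares the model's own predictions with each other rather than with labels, it can be applied to unlabeled grayscale video, which is why no ground-truth color video is needed.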
Objective: Line art colorization is the process of turning a black-and-white sketch composed of lines into a color image, and it is a crucial step in fields such as cartoon animation production and artistic painting. Fully automatic line art colorization can relieve the tedious and time-consuming manual coloring workload in the drawing process; however, automatically understanding the sparse lines in a sketch and selecting appropriate colors remains difficult. Method: Based on the prior that specific painting genres in real-world settings often have fixed color-style preferences, this paper focuses on automatic line art colorization in a limited color space. Constraining the color space not only reduces the difficulty of semantic understanding but also avoids implausible color choices. Specifically, this paper proposes a two-stage automatic line art colorization method. In the first stage, a grayscale-image generator is designed to supplement lines and details in the input sparse line art and produce a densely populated grayscale image. In the second stage, a color inference module first infers a color subspace suitable for the line art from the input color prior, and a multi-scale generation network that progressively fuses color information then produces a high-quality color image step by step. Results: Experiments on three datasets compare the proposed method with four automatic line art colorization methods. In the objective quality comparison, the proposed method achieves higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values and lower mean squared error; in the comparison of color metrics, it achieves the highest colorfulness score; in the subjective evaluation and user study, it also obtains results more consistent with human aesthetic perception. In addition, ablation results show that the proposed model structure and color-space constraint both benefit colorization performance. Conclusion: The experimental results show that the proposed automatic line art colorization method in a limited color space can effectively colorize multiple types of line art, and more diverse color images can be obtained simply by adjusting the color prior.
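The abstract describes constraining outputs to a color subspace inferred from a color prior; one simple way such a constraint could be realized is sketched below, where each predicted pixel is snapped to its nearest color in a small palette. The palette and the hard projection are illustrative assumptions for exposition, not the paper's actual color inference module, which infers the subspace with a learned network.

```python
import numpy as np

def project_to_palette(image, palette):
    """Constrain an RGB image to a limited color space by replacing each
    pixel with its nearest palette color (Euclidean distance in RGB).

    image:   (H, W, 3) float array in [0, 1].
    palette: (K, 3) float array of allowed colors.
    """
    flat = image.reshape(-1, 3)                       # (H*W, 3)
    # Squared distance from every pixel to every palette color: (H*W, K)
    d2 = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                       # (H*W,)
    return palette[nearest].reshape(image.shape)

# Toy usage: a 3-color palette standing in for the "color prior".
palette = np.array([[0.90, 0.20, 0.20],   # red
                    [0.20, 0.60, 0.90],   # blue
                    [0.95, 0.90, 0.80]],  # pale tone
                   dtype=np.float32)
pred = np.random.rand(8, 8, 3).astype(np.float32)
constrained = project_to_palette(pred, palette)
print(constrained.shape)  # (8, 8, 3)
```

Swapping the palette in this sketch mirrors the abstract's observation that adjusting the color prior yields more diverse colorization results.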