Example-Based Video Color Grading

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2013, to appear)

Nicolas Bonneel, Kalyan Sunkavalli, Sylvain Paris, Hanspeter Pfister

Full resolution PDF (37.1 MB)
Low resolution PDF (1.9 MB)
Supplemental PDF describing the curvature computation (203 KB)
DivX demo video (54 MB)

Bibtex
C++ code for computing our curvature in the Wasserstein space (304 KB)
Additional results (278 MB)

 

Copyright by the authors, 2013. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Graphics, http://dx.doi.org/10.1145/2461912.2461939

(a) Input segmented video (input). (b) Target production-quality segmented video (model).
"Transformers" - © Paramount Pictures
(c) Our result
Color grading is the process of adjusting the color and tonal balance of a movie to achieve a specific look. It is a critical step of the movie editing pipeline; however, even with dedicated software, it remains a painstaking task that only skilled artists can perform. We propose a new model-based approach that automatically transfers the look of a professionally edited sequence to another video. To produce sophisticated effects like the contrasting orange-teal look of this example, we use a user-provided foreground-background segmentation. This allows us to process the input sequence (a) so that it reproduces the characteristic visual style of the Transformers movie (b) and conveys a similarly tense mood (c). Our approach produces results that are free from artifacts and temporally coherent, as can be seen in the companion video.

 

In most professional cinema productions, the color palette of the movie is painstakingly adjusted by a team of skilled colorists -- through a process referred to as color grading -- to achieve a certain visual look. The time and expertise required to grade a video make it difficult for amateurs to manipulate the colors of their own video clips. In this work, we present a method that allows a user to transfer the color palette of a model video clip to their own video sequence. We estimate a per-frame color transform that maps the color distributions of the input video sequence to those of the model video clip. Applying this transformation naively leads to artifacts such as bleeding and flickering. Instead, we propose a novel differential-geometry-based scheme that interpolates these transformations in a manner that minimizes their curvature, similarly to curvature flows. In addition, we automatically determine a set of keyframes that best represent this interpolated transformation curve and can subsequently be used to manually refine the color grade. We show that our method successfully transfers color palettes between videos for a range of visual styles and a number of input video clips.
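As a rough illustration of what "a color transform that maps the color distributions of the input to those of the model" can mean, the sketch below uses the closed-form optimal-transport map between Gaussian approximations of the two color distributions. This is a common simplification for color transfer, not the paper's full method (which handles general distributions and temporal interpolation); the function name and the synthetic pixel data are ours.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_color_transfer(input_px, model_px):
    """Map input pixel colors so their mean and covariance match the
    model clip's, via the Monge map between Gaussian approximations:
    T(x) = A (x - mu_in) + mu_model, with
    A = S^-1 (S Cov_model S)^1/2 S^-1, where S = Cov_in^1/2.
    (Illustrative simplification; not the paper's full transform.)"""
    mu_i, mu_m = input_px.mean(axis=0), model_px.mean(axis=0)
    cov_i = np.cov(input_px, rowvar=False)
    cov_m = np.cov(model_px, rowvar=False)
    s = np.real(sqrtm(cov_i))              # Cov_in^1/2 (real PSD root)
    s_inv = np.linalg.inv(s)
    A = np.real(s_inv @ sqrtm(s @ cov_m @ s) @ s_inv)
    return (input_px - mu_i) @ A.T + mu_m

# Synthetic demo: push a cool-toned frame toward a warm model palette.
rng = np.random.default_rng(0)
input_px = rng.normal([0.3, 0.4, 0.6], 0.05, size=(5000, 3))  # bluish
model_px = rng.normal([0.7, 0.5, 0.3], 0.10, size=(5000, 3))  # orange
out = gaussian_color_transfer(input_px, model_px)
```

After the transform, the output pixels' mean and covariance match the model's exactly (up to numerical precision); applying such a transform independently per frame is what causes the flickering the paper's curvature-minimizing interpolation is designed to remove.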



We thank Harvard Professor Siu-Cheong Lau for useful advice on curvature computation. We also thank PavelM/5pm, former contributor on Math.StackExchange, for detailed clarifications on Takatsu's paper. We also thank the SIGGRAPH reviewers for their helpful comments. This work was partially supported by NSF CGV-1111415.

Our paper and supplemental videos feature Creative Commons materials attributed to:
We would also like to acknowledge the following people, who kindly authorized the use of their video clips: