HDR CFA Image Rendering

Author:

Daniel Tamburrino

Abstract:

The goal of tone mapping is to create images that evoke the same sensation a viewer would have when looking at the real scene; in other words, to render images that perceptually match reality. Tone mapping is particularly important for compressing high dynamic range (HDR) images. The dynamic range is defined as the ratio between the brightest and the darkest parts of the scene. As displays and printers have a limited dynamic range, HDR scenes must be compressed before display.
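For reference, the dynamic range can be written as a luminance ratio (a standard definition, not a formula from this text):

```latex
DR = \frac{L_{\max}}{L_{\min}}
```

Outdoor HDR scenes can exceed roughly 10^5:1, while typical displays reach only a few orders of magnitude, which is why compression is needed before display.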

In a traditional digital photography workflow, tone mapping and the other rendering steps are applied after demosaicing. We propose a new workflow inspired by a model of retinal processing. First, we apply our tone mapping method for HDR images directly on the color filter array (CFA) image, instead of on the already demosaiced image. Second, we apply the other rendering steps, such as white balancing and color encoding. Demosaicing is the last step of our workflow.
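The two pipeline orderings can be contrasted with a minimal sketch. The function names and the identity stubs below are illustrative placeholders, not the authors' implementation; only the order of operations reflects the text.

```python
import numpy as np

# Identity-style stubs standing in for the real rendering steps
# (illustrative names, not from the papers).
def demosaic(cfa):
    # Placeholder interpolation: replicate the mosaic into 3 channels.
    return np.stack([cfa] * 3, axis=-1)

def white_balance(img):
    return img  # stub

def tone_map(img):
    # Global Naka-Rushton-style stub: x / (x + adaptation level).
    return img / (img + img.mean())

def encode(img):
    return np.clip(img, 0.0, 1.0)  # stub for color encoding

def traditional_workflow(cfa):
    """Demosaic first, then render: three channels to process."""
    return encode(tone_map(white_balance(demosaic(cfa))))

def proposed_workflow(cfa):
    """Render on the mosaic, demosaic last: one channel to process."""
    return encode(demosaic(white_balance(tone_map(cfa))))
```

In the proposed ordering, every rendering step before demosaicing touches one value per pixel instead of three, which is where the reduction in operations comes from.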

In our tone mapping algorithm, we simulate the retina's adaptive non-linear processing before interpolation, as it occurs in the visual system. For each pixel, we compute a weighted average of the surrounding pixel values plus a global factor that depends on the key of the image. We use this value as the adaptation variable in the Naka-Rushton equation, which models the photoreceptor non-linearity [1-3].
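The per-pixel compression described above can be sketched as follows. This is a simplified illustration, not the published algorithm: the parameter names (`radius`, `alpha`) are assumptions, the surround is approximated by a plain box filter that ignores the mosaic structure, and the image key is taken as the log-average intensity.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box filter: local average of surrounding pixels."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def naka_rushton_tonemap(cfa, radius=2, alpha=0.5):
    """Local tone mapping of a CFA image with the Naka-Rushton equation:
        out = x / (x + x0),
    where the adaptation level x0 mixes a local surround average with
    a global factor tied to the image key."""
    cfa = cfa.astype(np.float64)
    cfa = cfa / cfa.max()                          # normalize to [0, 1]
    key = np.exp(np.mean(np.log(cfa + 1e-6)))      # log-average "key"
    local = box_blur(cfa, radius)                  # surround average
    x0 = alpha * local + (1.0 - alpha) * key       # adaptation level
    return cfa / (cfa + x0 + 1e-12)
```

Because the adaptation level varies per pixel, bright and dark regions are compressed differently, which is what distinguishes this local operator from a single global curve such as a gamma.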

This new image rendering workflow is simple and fast: because the rendering operates on a single CFA channel instead of three full-color channels, only one third of the operations are needed. Furthermore, it produces pleasing results on various images with different scene content, key, and dynamic range.


Figure 1: Top (a): Traditional image processing workflow. Center (b): Our proposed workflow. Bottom left (c): Image rendered by a global tone mapping (gamma). Bottom right (d): Image rendered by our method. Courtesy of Laurence Meylan.

To further improve our results, we will next add white balancing and color correction to our workflow, applied before demosaicing.

Collaborations:

Sabine Süsstrunk (IVRG, EPFL), Laurence Meylan (GE), David Alleysson (UPMF).

Major Publications:

[1] D. Tamburrino, D. Alleysson, L. Meylan, and S. Süsstrunk, Digital Camera Workflow for High Dynamic Range Images Using a Model of Retinal Processing, Proc. IS&T/SPIE Electronic Imaging: Digital Photography IV, vol. 6817, 2008.

supplementary material and code: http://ivrgwww.epfl.ch/supplementary_material/DT_EI08/index.html

[2] L. Meylan, D. Alleysson, and S. Süsstrunk, A Model of Retinal Local Adaptation for the Tone Mapping of Color Filter Array Images, Journal of the Optical Society of America A (JOSA A), vol. 24, no. 3, pp. 2807-2816, 2007.

supplementary material and code: http://ivrgwww.epfl.ch/supplementary_material/LM_JOSA06/index.html

[3] D. Alleysson, S. Süsstrunk, and L. Meylan, HDR CFA Image Rendering, Proc. EURASIP 14th European Signal Processing Conference, 2006.

supplementary material: http://ivrgwww.epfl.ch/supplementary_material/AlleyssonMS06/index.html

 

Project Period:

2006-2008.

Funding Source:

SNF, under grant number 200021-113829.