Spring 2013

Sparsifying Transforms for RGB/NIR Images 

It has been shown that sparse-decomposition-based algorithms lead to superior results in applications such as denoising, inpainting, demosaicing, and compression. Sparse representations have largely been studied for gray-scale images, and only a few papers have addressed the sparsity of color images [2].
In this project, we consider the RGB and NIR information of a scene as a four-channel image (red, green, blue, and NIR) and study the sparsity of these images. One of the main challenges in sparse-decomposition-based algorithms is finding a sparsifying transform for the signals of interest. Thus, our main goal is to find appropriate de-correlating transforms for RGB/NIR images. To achieve this goal, the first step is to compute the sparsity of these images in predefined transforms such as the DCT or wavelets, in which representations of natural images are known to be highly sparse. The next step is to use existing dictionary learning algorithms, such as K-SVD [1], to find sparsifying transforms for four-channel RGB/NIR images. Dictionary learning is the process of finding a transform domain that represents a given set of data as sparsely as possible. Finally, we will try to develop a new dictionary learning algorithm tailored to RGB/NIR images.
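As a rough illustration of the first step, the sketch below (written in Python/NumPy for brevity, whereas the project itself will be implemented in MATLAB) measures how sparse four-channel patches are in an orthonormal DCT by counting the fraction of transform coefficients needed to retain most of each patch's energy. The patch size, energy threshold, and file name are assumptions made only for this example.

import numpy as np
from scipy.fft import dctn

def patch_sparsity(image, patch=8, energy=0.99):
    # For each non-overlapping patch x patch x channels block, apply an
    # orthonormal multi-dimensional DCT and count the fraction of coefficients
    # needed to retain `energy` of the block's energy. Lower means sparser.
    h, w, _ = image.shape
    fractions = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = image[i:i + patch, j:j + patch, :].astype(float)
            coeffs = dctn(block, norm='ortho')             # DCT over rows, columns, and channels
            mags = np.sort(np.abs(coeffs).ravel())[::-1]   # coefficient magnitudes, largest first
            cum = np.cumsum(mags ** 2)
            k = np.searchsorted(cum, energy * cum[-1]) + 1
            fractions.append(k / mags.size)
    return float(np.mean(fractions))

# Hypothetical usage on an H x W x 4 RGB/NIR array:
# rgbn = np.load('scene_rgbn.npy')
# print(patch_sparsity(rgbn))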

References 
[1] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, 2006.
[2] J. Mairal, M. Elad, and G. Sapiro, “Sparse Representation for Color Image Restoration,” IEEE Transactions on Image Processing, vol. 17, no. 1, 2008.

Type of work: 60% Theory, 40% MATLAB implementation.

Prerequisites: Knowledge of sparse representations and de-correlating transforms, and good MATLAB skills.

Level: MS, semester project

Supervisor: Zahra Sedeghipoor ([email protected])

 

Gamut Mapping Using Preferred Hue Shifts

Today, there are many different cameras, printers, projectors, and displays with which images are captured and shown.  These devices may have different limitations in the range of colors that they can reproduce.  In other words, the devices have different gamuts.  For example, a movie projector at a movie theater may have a larger gamut than that of a standard television set at home.  Gamut mapping is the process of reproducing image content for different gamuts.  For example, a movie studio would want the DVD copies of their movie to look similar to the movie that is played in theaters.

 

Gamut mapping methods face a tradeoff between preserving original colors and preserving detail.  When mapping an image from a larger color gamut to a smaller one, color accuracy can be preserved by keeping all in-gamut colors as they are and clipping all out-of-gamut colors to the gamut boundary.  However, this leads to a loss of detail in the clipped regions.  On the other hand, compressing all colors into the smaller gamut while preserving the contrasts between them changes the in-gamut colors, thereby reducing color accuracy.  Spatial gamut mapping methods offer a better tradeoff between color accuracy and detail because they map colors based on local neighborhoods and have the flexibility of one-to-many mappings.
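To make this tradeoff concrete, the following toy Python sketch contrasts the two extremes.  It is only an illustration (the project itself will be done in Matlab): the target gamut is crudely approximated as a maximum-chroma disk in the CIELAB a*b* plane, and the function names, array shapes, and chroma limit are assumptions made for this example.

import numpy as np

def clip_to_gamut(ab, max_chroma):
    # Gamut clipping: in-gamut colors are untouched; out-of-gamut colors are
    # projected radially onto the boundary, so many distinct source colors can
    # collapse to the same target color (accurate colors, lost detail).
    chroma = np.linalg.norm(ab, axis=-1, keepdims=True)
    return ab * np.minimum(1.0, max_chroma / np.maximum(chroma, 1e-8))

def compress_to_gamut(ab, max_chroma):
    # Global compression: rescale every chroma so the most saturated color
    # lands on the boundary, preserving relative differences (detail) but
    # shifting all in-gamut colors (reduced accuracy).
    chroma = np.linalg.norm(ab, axis=-1, keepdims=True)
    return ab * (max_chroma / max(chroma.max(), 1e-8))

# Example: three colors, target gamut chroma limit of 50 (arbitrary values).
ab = np.array([[10.0, 5.0], [40.0, 20.0], [80.0, 40.0]])
print(clip_to_gamut(ab, 50.0))      # first two rows unchanged, last row clipped
print(compress_to_gamut(ab, 50.0))  # all rows scaled by the same factor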

 

The goal of this project is to create a spatial gamut mapping method that preserves details in accordance with human preferences.  To preserve details, some color accuracy will need to be sacrificed, and some colors will need to be changed.  It is probably better to avoid hue shifts, but when a shift is necessary, there may be a particular direction that is preferable to others.  In fact, Taplin and Johnson [1] found that humans prefer certain hue shifts over others.  Using their findings, we can create a method that avoids the least preferred hue shifts.

 

The gamut mapping method should also be temporally consistent in order to handle videos.  Since a spatial gamut mapping method allows the same source color to map to different target colors depending on its local neighborhood, temporal consistency is an issue we will need to address.

 

References

[1] L.M. Taplin and G.M. Johnson, “When Good Hues Go Bad,” Conference on Color in Graphics, Imaging and Vision, pp. 348-352, 2004.

 

Prerequisites:  basic knowledge of human color vision and color science, and image processing in Matlab

 

Type of Work:  50% research, 50% implementation

 

Level:  MS semester

 

Supervisor: Cheryl Lau ([email protected])