Image and Video Processing Lab, The Chinese University of Hong Kong


1. Introduction


Our objective is to develop a full-reference image quality metric that accurately simulates human perception of image quality. Such a metric can be used to optimize, evaluate, and monitor the performance of many image processing systems. Full-reference means that the reference image is available to the metric, as shown below.

[Figure: the full-reference setting, in which the metric takes both the reference image and the distorted image as input.]

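To make the full-reference setting concrete, below is a minimal MATLAB sketch that loads a reference image and a distorted image and maps the pair to a single score, using PSNR purely as a stand-in for the interface of a full-reference metric; the file names are hypothetical placeholders.

% Minimal full-reference example: the metric sees both the reference and
% the distorted image and returns one quality score. PSNR is used here
% only to illustrate the interface, not the proposed metric.
ref  = double(imread('reference.png')) / 255;   % hypothetical reference image
dist = double(imread('distorted.png')) / 255;   % hypothetical distorted image

mse     = mean((ref(:) - dist(:)).^2);          % mean squared error over all pixels
psnr_db = 10 * log10(1 / mse);                  % PSNR in dB for intensities in [0, 1]
fprintf('Full-reference score (PSNR): %.2f dB\n', psnr_db);
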
2. Motivation


We developed a decoupling algorithm that separates spatial distortions into two categories, detail losses and additive impairments, as shown below. Detail losses refer to the loss of useful structural information, whereas additive impairments refer to redundant visual noise added to the image. The final quality prediction balances the influences of these two types of distortions.

[Figure: decoupling of spatial distortions into detail losses and additive impairments.]

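The actual decoupling operates on wavelet coefficients and is described in [1]. The MATLAB sketch below only illustrates the idea with a hypothetical restoration rule: keep the part of the distorted signal that is consistent with the reference (same sign, no larger magnitude), treat what the reference has beyond that as detail loss, and treat what the distorted signal has beyond that as additive impairment. The function name and the specific rule are illustrative assumptions, not the published algorithm.

function [R, detail_loss, additive] = decouple_sketch(O, T)
% O: reference coefficients (e.g., one transform subband of the reference image)
% T: corresponding coefficients of the distorted image
% R: "restored" coefficients, i.e., the part of T explained by the reference
same_sign   = (sign(O) == sign(T));
R           = same_sign .* sign(T) .* min(abs(T), abs(O));
detail_loss = O - R;   % structure present in the reference but missing in T
additive    = T - R;   % extra signal in T that the reference cannot explain
end

In [1] the separation is performed in the wavelet domain; the sketch above is written for generic coefficient arrays.
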
3. Framework


The figure below shows the algorithm framework. After decoupling the distortions into detail losses and additive impairments, we simulate low-level processing of the human visual system, such as contrast sensitivity and contrast masking. Two quality measures, DLM and AIM, are designed to evaluate the influences of the two types of distortions, respectively, and their outputs are combined nonlinearly to generate the overall quality score.

[Figure: the algorithm framework.]

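As a structural sketch of how the stages fit together (the weights, the masking rule, the pooled measures, and the final combination below are hypothetical placeholders, not the DLM, AIM, or combination rule defined in [1]):

function q = framework_sketch(O, T)
% Hypothetical skeleton: decouple, apply simplified HVS-style weighting,
% pool the two residuals into two measures, and combine them nonlinearly.
[~, detail_loss, additive] = decouple_sketch(O, T);   % sketch from Section 2

csf_gain    = 0.8;                    % placeholder contrast-sensitivity weight
detail_loss = csf_gain * detail_loss;
additive    = csf_gain * additive;

masking  = 1 + abs(O);                % placeholder contrast-masking term
additive = additive ./ masking;       % impairments are less visible in busy areas

dlm = 1 - sum(abs(detail_loss(:))) / (sum(abs(O(:))) + eps);  % toy detail-loss measure
aim = mean(abs(additive(:)));                                 % toy additive-impairment measure

q = dlm / (1 + aim);                  % placeholder nonlinear combination
end
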
4. Results


The figure below shows scatter plots of the proposed image quality metric on five subjective image databases: (a) LIVE [2], (b) TID2008 [3], (c) CSIQ [4], (d) IVC [5], and (e) TOY [6]. Each dot represents one test image. The more closely the dots cluster around the fitted line, the better the metric's predictions correlate with human quality perception.

[Figure: scatter plots of the proposed metric on the LIVE, TID2008, CSIQ, IVC, and TOY databases.]

The table below shows the linear correlation coefficient (LCC) of the proposed metric on the five subjective image databases. A higher value indicates better correlation with human quality perception. For comparison, the LCC values of PSNR, which is used in practice in most image processing applications, are also listed in the table.

[Table: LCC of the proposed metric and of PSNR on the five databases.]

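For context, a common way to compute the LCC in image quality assessment is to fit a monotonic logistic mapping from the metric scores to the subjective scores and then take the Pearson correlation between the mapped scores and the subjective scores. Below is a minimal sketch in base MATLAB, assuming vectors scores (metric outputs) and mos (mean opinion scores) for one database; the 4-parameter logistic and the fminsearch fit are assumptions for illustration, not necessarily the exact procedure used to produce the table.

% scores: metric outputs, mos: subjective scores (column vectors of equal length).
logistic = @(b, x) b(1) ./ (1 + exp(-b(2) * (x - b(3)))) + b(4);

sse  = @(b) sum((mos - logistic(b, scores)).^2);         % least-squares objective
b0   = [max(mos) - min(mos), 1, mean(scores), min(mos)]; % rough initial guess
bhat = fminsearch(sse, b0);                              % fit the mapping

mapped = logistic(bhat, scores);    % metric scores after the nonlinear mapping
c      = corrcoef(mapped, mos);     % 2x2 correlation matrix
lcc    = c(1, 2);                   % linear correlation coefficient (LCC)
fprintf('LCC = %.4f\n', lcc);
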
5. Code


MATLAB code of the proposed image quality metric can be downloaded here. The version for video can be downloaded here.


6. References


[1] S. Li, F. Zhang, L. Ma, and K. N. Ngan, "Image quality assessment by separately evaluating detail losses and additive impairments," IEEE Transactions on Multimedia, vol. 13, no. 5, pp. 935-949, Oct. 2011.

[2] H. R. Sheikh et al., LIVE Image Quality Assessment Database, Release2, 2005. [Online]. Available: http://live.ece.utexas.edu/research/quality.

[3] N. Ponomarenko, F. Battisti, K. Egiazarian, J. Astola, and V. Lukin, Tampere Image Database 2008 (TID2008), version 1.0, 2008. [Online]. Available: http://www.ponomarenko.info/tid2008.htm.

[4] D. M. Chandler, CSIQ Database, 2010. [Online]. Available: http://vision.okstate.edu/csiq/.

[5] P. Le Callet and F. Autrusseau, Subjective Quality Assessment IRCCyN/IVC Database, 2005. [Online]. Available: http://www.irccyn.ec-nantes.fr/ivcdb/.

[6] Y. Horita et al., MICT Image Quality Evaluation Database. [Online]. Available: http://mict.eng.u-toyama.ac.jp/mict/index2.html.