Fully-Automatic Inverse Tone Mapping Algorithm Based On Dynamic Mid-Level Mapping - Supplementary material
Results obtained with HDR-VDP-2.2 [1]
High Dynamic Range (HDR) displays can show images with higher color contrast and peak luminance than common Low Dynamic Range (LDR) displays. However, most existing video content is recorded and/or graded in LDR format. To show LDR content on HDR displays, it needs to be expanded using a so-called inverse tone mapping algorithm. Several techniques for inverse tone mapping have been proposed in recent years, ranging from simple approaches based on global and local operators to more advanced algorithms such as neural networks. Drawbacks of existing inverse tone mapping techniques include the need for human intervention, the high computation time of more advanced algorithms, limited peak brightness, and the failure to preserve artistic intent. In this paper, we propose a fully-automatic inverse tone mapping operator based on mid-level mapping that is capable of real-time video processing. Our proposed algorithm expands LDR images into HDR images with peak brightness above 1000 nits while preserving the artistic intent inherent to the HDR domain. We assessed our results using the full-reference objective quality metrics HDR-VDP-2.2 and DRIM, and by carrying out a subjective pair-wise comparison experiment. We compared our results with those obtained with the most recent methods found in the literature. Experimental results demonstrate that our proposed method outperforms the current state of the art in simple inverse tone mapping methods, and that its performance is similar to that of more complex and time-consuming advanced techniques.
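To make the expansion step concrete, the following is a minimal sketch of a naive global inverse tone mapping operator: it linearizes gamma-encoded LDR code values and rescales them to a target peak luminance. This is an illustration only, with an assumed gamma of 2.2 and a 1000-nit target; it is not the mid-level mapping operator proposed in the paper.

```python
import numpy as np

def expand_ldr(ldr, peak_nits=1000.0, gamma=2.2):
    """Naive global inverse tone mapping (illustration only, NOT the
    paper's mid-level mapping operator): decode an assumed display
    gamma and scale the linear values to a target peak luminance."""
    ldr = np.clip(np.asarray(ldr, dtype=np.float64), 0.0, 1.0)
    linear = ldr ** gamma          # approximate display linearization
    return linear * peak_nits      # absolute luminance in cd/m^2 (nits)

# Example: code values 0.0, 0.5, and 1.0 map to 0, ~218, and 1000 nits;
# a purely global expansion like this stretches highlights strongly.
hdr = expand_ldr(np.array([0.0, 0.5, 1.0]))
```

Real operators (including the proposed one) go beyond such a global curve, e.g. by treating mid-tones and highlights differently to preserve the original grading.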
Below you will find some of the probability of detection maps (PMap) obtained with HDR-VDP-2.2 for each evaluated method: HQRTM (Kovaleski and Oliveira [2]), PITMRR (Huo et al. [3]), DREIS (Masia et al. [4]), TELSA (Bist et al. [5]), DRTMO (Endo et al. [6]), HDRCNN (Eilertsen et al. [7]), and the proposed method. More information about this metric can be found in the original paper [1]. Click on an image to enlarge it.