Grand Challenges on NIR Image Colorization
Results of the Grand Challenges
| Final Ranking | Team Name | Affiliation | Team Members |
| --- | --- | --- | --- |
| 1 | NPU_CIAIC | Northwestern Polytechnical University, China; Shanghai Shengyao Intelligent Technology Co., Ltd, China; Blueye Intelligence, Zhenjiang, China | Longbin Yan, Xiuheng Wang, Min Zhao, Shumin Liu, Jie Chen |
| 2 | A*StarTrek | Institute of High Performance Computing, A*STAR, Singapore; Institute for Infocomm Research, A*STAR, Singapore | Zaifeng Yang, Zhenghua Chen |
| 3 | Amazing Grace | Xi'an University of Posts & Telecommunications, Xi'an, China; Xidian University, Xi'an, China | Tian Sun, Cheolkon Jung |
NIR (near-infrared) imaging provides a unique view of illumination and object material properties that differs markedly from what visible wavelength bands capture. The high sensitivity of NIR sensors, together with the fact that NIR light is invisible to the human eye, makes NIR an indispensable input for applications such as low-light imaging, night-vision surveillance, and road navigation. However, monochromatic NIR images lack color discrimination and differ substantially from the familiar RGB spectrum, which makes them unnatural and unfamiliar for both human perception and computer-vision algorithms.
The correlation between the NIR and RGB domains is more ambiguous and complex, which makes this task more challenging than grayscale image colorization. In recent years, numerous learning-based models have explored different network architectures, supervision modes, and training strategies to tackle this challenge, yet their limitations remain obvious in terms of color realism and texture fidelity: semantic correctness and instance-level color consistency are difficult to preserve. More importantly, the demand for strictly registered NIR-RGB image pairs also restricts efficient development of NIR colorization models.
This challenge calls for the development of more efficient algorithms that transform NIR images into RGB images. We provide both registered NIR-RGB image pairs and unpaired RGB images from different scene categories to facilitate exploration of both pixel-level and higher-level features for accurate semantic mapping and vivid color variation.
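To make the pixel-level side of the task concrete, the sketch below fits the simplest possible paired baseline: a per-pixel polynomial regression from NIR intensity to RGB, trained on registered pairs. This is a toy illustration only, not any participant's method; the synthetic NIR-to-RGB mapping and all variable names here are assumptions for the demo, and real entries use deep networks that also exploit higher-level (semantic) features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "registered pair": NIR intensities in [0, 1] and a hypothetical
# ground-truth RGB response (assumed purely for this demo).
nir = rng.random((1000, 1))
true_rgb = np.hstack([nir**0.8, nir**1.2, 0.5 * nir])

# Pixel-level features of the NIR value: bias, linear, and quadratic terms.
X = np.hstack([np.ones_like(nir), nir, nir**2])

# Least-squares fit of one linear model per output color channel.
W, *_ = np.linalg.lstsq(X, true_rgb, rcond=None)

pred = X @ W                                   # predicted RGB per pixel
mse = float(np.mean((pred - true_rgb) ** 2))   # reconstruction error
```

A per-pixel model like this can reproduce smooth global color trends but, having no spatial or semantic context, cannot resolve the ambiguity the text describes (two materials with identical NIR response but different colors), which is exactly why the unpaired RGB data and higher-level features matter.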