Low-Light Color Imaging Via Cross-Camera Synthesis
Peiyao Guo1     M. Salman Asif2     Zhan Ma1    
1Nanjing University     2University of California at Riverside

Code [PyTorch]     Paper     Supplementary [PDF]    


Abstract

This paper presents a framework for low-light color imaging using a dual camera system that combines a high spatial resolution monochromatic (HSR-mono) image and a low spatial resolution color (LSR-color) image. We propose a cross-camera synthesis (CCS) module to learn and transfer illumination, color, and resolution attributes across paired HSR-mono and LSR-color images to recover brightness- and color-adjusted high spatial resolution color (HSR-color) images at both camera views. Jointly characterizing these attributes for the final synthesis is extremely challenging because of the significant domain gaps across cameras. The proposed CCS method consists of three subtasks: reference-based illumination enhancement (RefIE), reference-based appearance transfer (RefAT), and reference-based super resolution (RefSR), which together characterize, transfer, and enhance illumination, color, and resolution at both views. Each subtask is implemented using deep neural networks (DNNs) that are first trained separately and then fine-tuned jointly. Experimental results demonstrate the superior qualitative and quantitative performance of the proposed CCS model on both synthetic content from popular datasets and real-captured scenes. Ablation studies further evidence the model's generalization to various exposures and camera baselines.


Method


1) RefIE: Reference-based Illumination Enhancement

2) RefAT: Reference-based Appearance Transfer

3) RefSR: Reference-based Super Resolution

Network architecture of the proposed Cross-Camera Synthesis (CCS) framework (a cascaded pipeline of RefIE, RefAT, and RefSR) for a dual-image pair consisting of an under-exposed LSR color image and a well-exposed HSR monochrome image.
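The cascade above can be illustrated as a data-flow sketch. The real subtasks are learned DNNs; the hand-crafted stand-ins below (global gain matching, global chroma transfer, nearest-neighbor upsampling with luminance guidance) are hypothetical placeholders chosen only to show how the under-exposed LSR-color and well-exposed HSR-mono inputs move through RefIE, RefAT, and RefSR to yield HSR-color outputs at both camera views. All function names and the fixed 4x resolution ratio are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ref_ie(lsr_color, hsr_mono):
    """RefIE stand-in: brighten the under-exposed LSR color image toward
    the mean luminance of the well-exposed mono reference."""
    gain = hsr_mono.mean() / max(lsr_color.mean(), 1e-6)
    return np.clip(lsr_color * gain, 0.0, 1.0)

def ref_at(enhanced_lsr_color, hsr_mono):
    """RefAT stand-in: colorize the HSR mono view with the global chroma
    statistics of the illumination-enhanced color image."""
    chroma = enhanced_lsr_color.mean(axis=(0, 1))      # per-channel means
    chroma = chroma / max(chroma.mean(), 1e-6)         # normalized color cast
    return np.clip(hsr_mono[..., None] * chroma, 0.0, 1.0)

def ref_sr(enhanced_lsr_color, hsr_mono, scale=4):
    """RefSR stand-in: nearest-neighbor upsample of the enhanced color
    image, with luminance replaced by the HSR mono reference."""
    up = enhanced_lsr_color.repeat(scale, axis=0).repeat(scale, axis=1)
    luma = up.mean(axis=2, keepdims=True)
    return np.clip(up * hsr_mono[..., None] / np.maximum(luma, 1e-6), 0.0, 1.0)

def ccs_pipeline(lsr_color, hsr_mono):
    """Cascade RefIE -> {RefAT, RefSR}: returns HSR-color images at the
    mono-camera view and the color-camera view, respectively."""
    enhanced = ref_ie(lsr_color, hsr_mono)
    hsr_color_at_mono_view = ref_at(enhanced, hsr_mono)
    hsr_color_at_color_view = ref_sr(enhanced, hsr_mono)
    return hsr_color_at_mono_view, hsr_color_at_color_view
```

Both outputs are full-resolution three-channel images, matching the framework's goal of recovering brightness- and color-adjusted HSR-color results at both camera views.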

Experimental Results

1. Quantitative Results on Simulated Dataset (Middlebury 2014, exp6-exp3)

2. Visual Comparison on Real-Captured Scenes



Code and Dataset

Code and models will be released soon.
The synthetic dataset and real-captured scenes will be released soon.

Webpage template borrowed from Split-Brain Autoencoders, CVPR 2017.