Abstract

Deep neural network-based image compression has been extensively studied. However, model robustness, which is crucial for practical deployment, is largely overlooked. We propose to examine the robustness of prevailing learned image compression models by injecting a negligible adversarial perturbation into the original source image. Severe distortion in the decoded reconstruction reveals a general vulnerability of existing methods, regardless of compression settings (e.g., network architecture, loss function, quality scale). We then explore possible defense strategies against such adversarial attacks to improve model robustness, including geometric self-ensemble based pre-processing and adversarial training. Experiments demonstrate the effectiveness of various defense strategies. An additional image recompression case study further confirms the substantial improvement in robustness that these defenses bring to compression models in real-life applications. Overall, our methodology is simple, effective, and generalizable, making it attractive for developing robust learned image compression solutions.
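
For concreteness, here is a minimal sketch of the attack, assuming a learned codec exposed as a differentiable PyTorch module `codec` whose forward pass returns the decoded reconstruction (the module name and interface are hypothetical; gradients are assumed to flow through quantization via the usual differentiable proxies used in learned codecs):

    import torch

    def attack(codec, x, eps=2/255, alpha=0.5/255, steps=10):
        # PGD-style gradient ascent: find a perturbation delta, bounded by
        # eps in the L-infinity norm, that maximizes the distortion between
        # the decoded reconstruction and the clean source image x.
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            x_adv = (x + delta).clamp(0.0, 1.0)
            x_hat = codec(x_adv)                    # decoded reconstruction
            loss = ((x_hat - x) ** 2).mean()        # reconstruction distortion
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascend the distortion
                delta.clamp_(-eps, eps)             # keep perturbation negligible
            delta.grad.zero_()
        return (x + delta).clamp(0.0, 1.0).detach()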
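The geometric self-ensemble defense can likewise be sketched in a minimal single-transform variant (the full method may ensemble several transforms): a random invertible geometric transform is applied before compression and undone afterwards, so a perturbation crafted for the original orientation largely loses its effect:

    import random
    import torch

    def self_ensemble_defend(codec, x):
        # Pick a random invertible geometric transform (rotation + optional flip).
        k = random.randint(0, 3)               # number of 90-degree rotations
        do_flip = random.random() < 0.5
        t = torch.rot90(x, k, dims=(-2, -1))
        if do_flip:
            t = torch.flip(t, dims=(-1,))
        y = codec(t)                           # compress/decompress transformed input
        # Undo the transform on the reconstruction, in reverse order.
        if do_flip:
            y = torch.flip(y, dims=(-1,))
        return torch.rot90(y, -k, dims=(-2, -1))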

Code & Paper

    @ARTICLE{chen2021robust,
        author={Chen, Tong and Ma, Zhan},
        journal={IEEE Transactions on Circuits and Systems for Video Technology},
        title={Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning},
        year={2023},
        volume={},
        number={},
        pages={1-1},
        doi={10.1109/TCSVT.2023.3276442}}