A Comparative Evaluation of Deep Learning Paradigms for Low-Light Image Enhancement: From CNNs to Diffusion Models
DOI: https://doi.org/10.63575/CIA.2025.30206

Keywords: low-light image enhancement, deep learning, comparative evaluation, image restoration

Abstract
Low-light image enhancement (LLIE) has attracted extensive research interest, with approaches spanning convolutional neural networks (CNNs), Retinex-based deep architectures, zero-shot learning, generative adversarial networks (GANs), Transformers, and diffusion models. Despite the proliferation of individual methods, a unified cross-paradigm evaluation under consistent experimental conditions remains absent in the existing literature. This paper presents a systematic comparative study of twelve representative LLIE methods drawn from six distinct algorithmic paradigms. All methods are evaluated on three widely adopted benchmark datasets—LOL-v1, LOL-v2-Real, and SID—using both full-reference metrics (PSNR, SSIM, LPIPS) and no-reference metrics (NIQE, BRISQUE), alongside computational efficiency analysis covering parameter count, floating-point operations, and inference latency. The experimental results indicate that Transformer-based approaches achieve a favorable balance between reconstruction fidelity and perceptual quality, while zero-shot methods offer substantial advantages in inference speed at the cost of quantitative performance. Diffusion-based methods produce perceptually compelling outputs but incur considerable computational overhead. Cross-dataset generalization tests further expose performance degradation across all paradigms when trained and tested on mismatched data distributions. These findings provide practical guidance for selecting LLIE methods under different deployment constraints and evaluation priorities.
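The abstract's full-reference metrics are standard pixel- and perception-level measures; PSNR in particular is a simple log-scaled mean-squared-error score. As a minimal illustrative sketch (not the paper's evaluation code), PSNR between a ground-truth image and an enhanced output can be computed as follows; the image shapes and `max_val` of 255 for 8-bit images are assumptions:

```python
import numpy as np

def psnr(reference: np.ndarray, enhanced: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and an enhanced image."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform gray image vs. a slightly brighter version (MSE = 4).
ref = np.full((64, 64), 128, dtype=np.uint8)
out = np.full((64, 64), 130, dtype=np.uint8)
print(round(psnr(ref, out), 2))  # → 42.11
```

Higher PSNR indicates closer pixel-level agreement with the reference; SSIM and LPIPS complement it by capturing structural and learned perceptual similarity, which pure MSE misses.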


