Images captured in low-light conditions typically suffer from a variety of inevitable degradations, and enhancement is made more difficult by the unpredictability of the imaging environment. Inadequate illumination is not the only degradation hiding in the darkness; noise and color distortion introduced by low-quality cameras also contribute to it. Although the field of low-light image enhancement has advanced significantly in recent years, designing a practical low-light image enhancer remains challenging, since such an enhancer must be flexible in adjusting the darkness and effective in removing degradations. The primary objective of low-light image enhancement (LLIE) is to improve an image that was captured under sub-optimal illumination and contains overexposed and underexposed artifacts. Deep learning strategies have dominated recent advances in this domain, drawing on a wide range of learning algorithms, network architectures, loss functions, and training data. Because they outperform traditional approaches in accuracy, robustness, and performance, they are receiving increasing attention. Unlike previous works that used redundant architectures to estimate luminance and reflectance concurrently, our proposed architecture follows a straight path that gradually enhances the input image at different levels. This research aims to develop a novel end-to-end deep neural network for image enhancement that allows significant flexibility across the quality-efficiency spectrum while also improving enhancement performance and inference efficiency. The enhanced images can be used effectively in a variety of image analysis applications, as well as in autonomous driving, control systems, and intelligent surveillance, where high-quality images or videos are necessary for accurate results and optimal performance.
We develop a CNN-based exposure fusion framework that can detect and remove degradations hidden in the darkness and adapt to different lighting conditions. The framework extracts optimized feature representations through denoising, enhancement, and fusion modules, as sketched below. In addition, we perform a variety of ablation studies on low-light enhancement methods, and we compare our proposed method with existing methods both qualitatively and quantitatively.
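To make the pipeline concrete, the following is a minimal PyTorch sketch of the described denoise-enhance-fuse flow, assuming a residual denoiser, a sigmoid-bounded enhancement map, and fusion by channel concatenation. All module names, layer widths, and these design choices are illustrative assumptions, not the actual proposed architecture.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 convolution followed by ReLU, the basic building block of each module.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class DenoiseModule(nn.Module):
    # Suppresses noise hidden in dark regions before enhancement.
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch), nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        # Residual formulation: predict the noise and subtract it from the input.
        return x - self.body(x)


class EnhanceModule(nn.Module):
    # Brightens the denoised image by predicting a bounded enhanced estimate.
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch), nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        return torch.sigmoid(self.body(x))  # enhanced estimate in [0, 1]


class FusionModule(nn.Module):
    # Fuses the denoised and enhanced representations into the final output.
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(6, ch), conv_block(ch, ch), nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, denoised, enhanced):
        return torch.sigmoid(self.body(torch.cat([denoised, enhanced], dim=1)))


class ExposureFusionNet(nn.Module):
    # End-to-end pipeline: denoise -> enhance -> fuse.
    def __init__(self):
        super().__init__()
        self.denoise = DenoiseModule()
        self.enhance = EnhanceModule()
        self.fuse = FusionModule()

    def forward(self, low_light):
        d = self.denoise(low_light)
        e = self.enhance(d)
        return self.fuse(d, e)


if __name__ == "__main__":
    net = ExposureFusionNet()
    out = net(torch.rand(1, 3, 256, 256))  # dummy low-light image
    print(out.shape)  # torch.Size([1, 3, 256, 256])

The sequential structure mirrors the "straight path" described above: each stage refines the output of the previous one rather than estimating luminance and reflectance in parallel branches.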
Related Publications: