Restoring and enhancing underwater images is a significant problem in image processing and computer vision. Poor underwater imaging quality is caused by the scattering and absorption of light by the water body and suspended contaminants. Images captured underwater therefore frequently suffer from quality issues such as low contrast, poor visibility (due to the absorption of natural light), blurred details, colour casts, additive noise, and uneven illumination.
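As a concrete illustration (not part of the proposed method), the degradation described above is often summarized by the simplified formation model I = J·t + B·(1 − t), where J is the clean scene radiance, t the per-channel transmission, and B the veiling (background) light. A minimal NumPy sketch, with toy values chosen purely for illustration:

```python
import numpy as np

def degrade(J, t, B):
    """Simplified underwater image formation:
    observed = direct transmission + backscatter."""
    # J: clean scene radiance (H, W, 3); t: per-channel transmission in [0, 1];
    # B: veiling (background) light per RGB channel.
    return J * t + B * (1.0 - t)

# Red light is absorbed fastest underwater, so it gets the lowest
# transmission; the degraded result takes on a blue-green cast.
J = np.full((4, 4, 3), 0.8)        # uniform grey scene (illustrative)
t = np.array([0.3, 0.7, 0.8])      # assumed RGB transmission values
B = np.array([0.1, 0.5, 0.6])      # assumed blue-green veiling light
I = degrade(J, t, B)
```

With these values the red channel (0.31) ends up much darker than the blue channel (0.76), reproducing the characteristic colour cast.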
Underwater image analysis has attracted considerable attention and achieved substantial advances over the past few decades. Current techniques improve image contrast and resolution and thereby broaden the applications of underwater photography. Traditional image enhancement techniques have drawbacks when applied directly to underwater optical environments; hence, specialized algorithms, such as histogram-based, retinex-based, and image fusion-based methods, have been proposed. Deep learning has recently shown strong potential for producing satisfying results with accurate colours and details, but these methods significantly increase the size of the inference models and therefore cannot be deployed directly on edge devices.
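To make the limitation of the traditional baselines concrete, the sketch below implements classic histogram equalization for a single-channel 8-bit image (an illustrative example, not the paper's method): it stretches global contrast via the cumulative histogram but is blind to the wavelength-dependent attenuation that dominates underwater scenes.

```python
import numpy as np

def histogram_equalize(img):
    """Classic histogram equalization for an 8-bit single-channel image.
    Stretches global contrast by mapping grey levels through the
    normalized cumulative distribution function (CDF)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # CDF at the lowest occupied level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[img]

# Synthetic low-contrast input occupying only grey levels 100-130:
rng = np.random.default_rng(0)
img = np.clip(rng.normal(115, 5, (64, 64)), 100, 130).astype(np.uint8)
out = histogram_equalize(img)          # spread over the full 0-255 range
```

Applied per channel to an underwater image, such a mapping amplifies contrast and noise alike without correcting the colour cast, which is why the specialized and learning-based methods above were developed.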
Recently, Vision Transformer (ViT)-based architectures have produced impressive results, and interest in transformers has grown accordingly. The interaction between image content and attention weights can be viewed as a spatially varying convolution, and the self-attention mechanism is well suited to modelling long-range dependencies and global features.
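The global-modelling property mentioned above comes from scaled dot-product self-attention, where every token attends to every other. A minimal single-head NumPy sketch over a sequence of patch embeddings (an illustration of the generic mechanism, with randomly initialized projection matrices, not the paper's trained model):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over patch
    embeddings x of shape (n_tokens, d). The (n, n) weight matrix lets
    every token aggregate information from the whole sequence, giving a
    global receptive field in one layer."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # content-dependent mixing

rng = np.random.default_rng(0)
n, d = 16, 8                                         # 16 patch tokens, dim 8
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)                  # shape (16, 8)
```

Because the attention weights depend on the input content, this mixing acts like a convolution whose kernel changes from position to position, which is the intuition stated above.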
The proposed approach is a pipeline based on context-aware lightweight transformers that aims to improve image quality without sacrificing the naturalness of the image, while also reducing the inference time and the size of the model. In this study, we trained a deep transformer-based network on two standard datasets, the Large-Scale Underwater Image (LSUI) dataset and the Underwater Image Enhancement Benchmark (UIEB) dataset, so that the network generalizes better, which subsequently improved performance. Our real-time underwater image enhancement system shows superior results on edge devices, and we provide a comparison with other transformer-based methods. Overall, the findings indicate that the proposed method produces underwater images of higher quality than the original inputs, which exhibit high noise and severe colour distortion.
Related Publication: Submitted to Multimedia Tools and Applications