Improvement of noisy images filtered by bilateral process using a multi-scale context aggregation network
DOI: https://doi.org/10.15587/1729-4061.2022.255789

Keywords: convolutional neural network, residual learning, multi-scale context aggregation, CCTV images

Abstract
Deep learning has recently received considerable attention as a feasible solution to a variety of artificial intelligence problems. Among deep learning architectures, convolutional neural networks (CNNs) outperform other machine learning methods in object detection and recognition tasks; deep neural networks also benefit speech recognition, pattern analysis, and image identification. When operating on noisy images, for example for fog removal or low-light enhancement, image processing methods such as filtering or image enhancement are required. This study shows the effect of using a multi-scale deep learning Context Aggregation Network (CAN) as a Bilateral Filtering Approximation (BFA) for de-noising noisy CCTV images.

A datastore is used to manage the dataset: an object or collection of data too large to fit in memory, it allows data located in multiple files to be read, managed, and processed as a single entity. The CAN architecture is built from standard deep learning layers, such as input, convolution, batch normalization, and Leaky ReLU layers, to construct the multi-scale network; custom layers, such as adaptive normalization (with learned parameters µ and λ), can also be added to the network.

The performance of the developed CAN approximation of the bilateral filter is demonstrated by improving both a noisy reference image and a foggy CCTV image. Three image evaluation metrics (SSIM, NIQE, and PSNR) assess the developed CAN approximation visually and quantitatively by comparing the de-noised image with the reference image. For the developed CAN de-noised image versus the input noisy image, these metrics were 0.92673 vs. 0.76253 (SSIM), 6.18105 vs. 12.1865 (NIQE), and 26.786 vs. 20.3254 (PSNR), respectively.
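The PSNR figures quoted above compare the mean-squared error between two images against the peak of the 8-bit dynamic range. A minimal sketch of this metric (assuming 8-bit grayscale inputs; the function name and toy images are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a uniform error of 10 gray levels gives MSE = 100,
# hence PSNR = 10 * log10(255^2 / 100) ≈ 28.13 dB.
ref = np.zeros((64, 64), dtype=np.float64)
noisy = ref + 10.0
print(round(psnr(ref, noisy), 2))  # → 28.13
```

Higher PSNR means less deviation from the reference, which is why the de-noised output (26.786 dB) scores above the noisy input (20.3254 dB) in the abstract.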
License
Copyright (c) 2022 Zinah R. Hussein
This work is licensed under a Creative Commons Attribution 4.0 International License.