Development of an image segmentation model based on a convolutional neural network
DOI: https://doi.org/10.15587/1729-4061.2021.228644
Keywords: image processing, image segmentation, convolutional neural networks, unmanned aerial vehicle
Abstract
This paper has considered a model of image segmentation based on convolutional neural networks and has studied the efficiency of the process for models that involve training the deep layers of such networks. There are objective difficulties in determining the optimal characteristics of neural networks, which gives rise to the problem of overtraining (overfitting) the network. Eliminating overtraining by selecting the optimal number of epochs alone would not suffice, since it does not provide high accuracy.
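To illustrate why epoch count alone is a weak control, the sketch below shows one common complementary check, validation-based early stopping. It is not taken from the paper and assumes a PyTorch-style model; `train_one_epoch`, `evaluate`, and `patience` are hypothetical helpers and parameters introduced only for this example.

```python
# Minimal sketch (not from the paper): stop training when the validation loss
# has not improved for `patience` consecutive epochs, and keep the best weights.
import copy

def train_with_early_stopping(model, train_one_epoch, evaluate, max_epochs=50, patience=5):
    """Validation-based early stopping for a PyTorch-style model."""
    best_loss = float("inf")
    best_state = copy.deepcopy(model.state_dict())
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch(model)           # user-supplied training step for one epoch
        val_loss = evaluate(model)       # user-supplied validation step, returns a loss value
        if val_loss < best_loss:
            best_loss = val_loss
            best_state = copy.deepcopy(model.state_dict())
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                    # stop before the model overfits further
    model.load_state_dict(best_state)    # restore the best checkpoint
    return model, best_loss
```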
The requirements for the set of images used for training and model verification were defined. These requirements are best met by the PASCAL VOC (United Kingdom) and NVIDIA-Aerial Drone (USA) image sets.
It has been established that AlexNet (Canada) is a pre-trained model that can perform image segmentation, although its object recognition reliability is insufficient; there is therefore a need to improve the efficiency of image segmentation. It is advisable to use the AlexNet architecture as the basis for a specialized model in which changing the parameters and retraining some of the layers makes it possible to improve the image segmentation process.
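As an illustration of this transfer-learning idea, the following sketch adapts a pre-trained AlexNet backbone for dense, per-pixel prediction using PyTorch/torchvision. It is not the authors' implementation: the choice of frozen layers, the `AlexNetSegmenter` class, the segmentation head, and `NUM_CLASSES = 21` (the PASCAL VOC classes including background) are illustrative assumptions.

```python
# Illustrative sketch only: freeze the early convolutional layers of a pre-trained
# AlexNet and replace the classifier with a convolutional head that outputs
# per-pixel class scores (an FCN-style adaptation).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 21  # assumed: the 21 PASCAL VOC classes, including background

backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Freeze the first two convolutional blocks so only the later layers are retrained.
for param in backbone.features[:6].parameters():
    param.requires_grad = False

# Convolutional head producing NUM_CLASSES score maps from the 256-channel features.
segmentation_head = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(256, NUM_CLASSES, kernel_size=1),
)

class AlexNetSegmenter(nn.Module):
    def __init__(self, features, head):
        super().__init__()
        self.features = features
        self.head = head

    def forward(self, x):
        h, w = x.shape[-2:]
        x = self.features(x)   # downsampled feature map
        x = self.head(x)       # per-pixel class scores
        # Upsample back to the input resolution for dense prediction.
        return nn.functional.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)

model = AlexNetSegmenter(backbone.features, segmentation_head)
```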
Five models have been trained by varying the following parameters: the learning rate, the number of epochs, the optimization algorithm, the type of learning rate schedule, the gamma coefficient, and the pre-trained model.
A convolutional neural network has been developed to improve the accuracy and efficiency of image segmentation. The optimal training parameters have been determined: a learning rate of 0.0001, 50 epochs, a gamma coefficient of 0.1, etc. An increase in accuracy of 3 % was achieved, which confirms the correctness of the chosen network architecture and the selected parameters. This allows the network to be used for practical image segmentation tasks, in particular on devices with limited computing resources.
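A minimal training-loop sketch with the reported values (learning rate 0.0001, 50 epochs, gamma coefficient 0.1) is given below, reusing the `model` from the previous sketch. The Adam optimizer, the 20-epoch step size of the learning rate schedule, and the `train_loader` are assumptions made for illustration, since the abstract does not specify them.

```python
# Sketch of a training configuration using the hyperparameter values reported in
# the abstract; optimizer choice and schedule step size are assumptions.
import torch

EPOCHS = 50
LEARNING_RATE = 1e-4
GAMMA = 0.1        # multiplicative decay factor for the learning rate
STEP_SIZE = 20     # assumed: decay the learning rate every 20 epochs

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=STEP_SIZE, gamma=GAMMA)
criterion = torch.nn.CrossEntropyLoss()  # per-pixel classification loss

for epoch in range(EPOCHS):
    model.train()
    for images, masks in train_loader:   # train_loader: a DataLoader of (image, mask) pairs
        optimizer.zero_grad()
        logits = model(images)           # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, masks)  # masks: (B, H, W) with integer class indices
        loss.backward()
        optimizer.step()
    scheduler.step()                     # apply the gamma decay on schedule
```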
License
Copyright (c) 2021 Bohdan Petrovych Knysh, Yaroslav Anatoliyovych Kulyk
This work is licensed under a Creative Commons Attribution 4.0 International License.