Development of a method for structural optimization of a neural network based on the criterion of resource utilization efficiency
DOI:
https://doi.org/10.15587/1729-4061.2019.164591
Keywords:
artificial neural network, structure optimization, approximation of functions, efficiency criterion
Abstract
At present, mathematical models in the form of artificial neural networks (ANNs) are widely used to solve approximation problems. Applying this technology involves a two-stage approach: determining the structure of the ANN model and then training it. Once the learning process is complete, the accuracy of the resulting approximation is determined by the complexity of the ANN structure. In other words, increasing the complexity of the ANN yields a more accurate training result.
In this case, obtaining an ANN model that performs the approximation with an assigned accuracy is treated as an optimization process.
However, increasing the complexity of the ANN not only improves accuracy but also prolongs computation time.
Thus, the indicator «assigned accuracy» cannot be used on its own in problems of determining the optimum neural network architecture, because selecting the model structure and training it to the required approximation accuracy may take an amount of time unacceptable to the user.
To solve the problem of structural identification of a neural network, an approach is used in which the model's configuration is determined on the basis of an efficiency criterion. Implementing the constructed method involves weighing the time spent on solving the problem against the accuracy of the approximation.
The proposed approach makes it possible to substantiate the principle of choosing the structure and parameters of a neural network based on the maximum value of the indicator of effective resource use.
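To illustrate the general idea, the following minimal sketch sweeps the number of hidden neurons in a one-hidden-layer network approximating sin(x), records the approximation error and the training time of each candidate structure, and selects the structure that maximizes a simple resource-efficiency indicator. The target function, the candidate sizes, and the indicator used here (accuracy gain per unit of training time) are illustrative assumptions, not the efficiency criterion derived in the paper.

```python
# Minimal sketch: pick an ANN structure by maximizing an illustrative efficiency indicator.
# Assumptions (not from the paper): sin(x) target, candidate sizes, "accuracy gain per second".
import time
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)   # training inputs
y = np.sin(x)                                        # target function to approximate


def train_mlp(n_hidden, epochs=2000, lr=0.05):
    """Train a 1-n_hidden-1 tanh network with plain gradient descent; return its MSE."""
    w1 = rng.normal(0.0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
    w2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)                       # hidden-layer activations
        y_hat = h @ w2 + b2                            # network output
        grad_out = 2.0 * (y_hat - y) / len(x)          # dMSE/dy_hat
        grad_h = grad_out @ w2.T * (1.0 - h ** 2)      # backpropagate through tanh
        w2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
        w1 -= lr * x.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)
    y_hat = np.tanh(x @ w1 + b1) @ w2 + b2
    return float(np.mean((y_hat - y) ** 2))


baseline_mse = float(np.mean((y - y.mean()) ** 2))     # error of a trivial constant model
results = []
for n_hidden in (2, 4, 8, 16, 32, 64):                 # candidate ANN structures
    t0 = time.perf_counter()
    mse = train_mlp(n_hidden)
    elapsed = time.perf_counter() - t0
    # Illustrative efficiency indicator: accuracy improvement per unit of training time.
    efficiency = (baseline_mse - mse) / elapsed
    results.append((n_hidden, mse, elapsed, efficiency))
    print(f"hidden={n_hidden:3d}  mse={mse:.5f}  time={elapsed:.2f} s  efficiency={efficiency:.3f}")

best = max(results, key=lambda r: r[3])
print(f"Selected structure: {best[0]} hidden neurons (maximum efficiency indicator)")
```

In this sketch, a structure is preferred not because it reaches the smallest error, but because it delivers the best ratio of accuracy gained to time spent, which is the kind of trade-off the proposed efficiency criterion formalizes.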
License
Copyright (c) 2019 Igor Lutsenko, Oleksii Mykhailenko, Oksana Dmytriieva, Oleksandr Rudkovsky, Denis Mospan, Dmitriy Kukharenko, Hanna Kolomits, Artem Kuzmenko
This work is licensed under a Creative Commons Attribution 4.0 International License.
The terms of copyright transfer (identification of authorship) are set out in the License Agreement. In particular, the authors retain authorship of their manuscript and grant the journal the right of first publication of the work under the terms of the Creative Commons CC BY license. They may also conclude separate, additional agreements for the non-exclusive distribution of the work in the form in which it was published by this journal, provided that a link to the first publication of the article in this journal is preserved.
A license agreement is a document in which the author warrants that he/she owns all copyright for the work (manuscript, article, etc.).
By signing the License Agreement with TECHNOLOGY CENTER PC, the authors retain all rights to further use of their work, provided that they reference the edition of our journal in which the work was published.
Under the terms of the License Agreement, the Publisher TECHNOLOGY CENTER PC does not take away the authors' copyrights; it receives permission from the authors to use and disseminate the publication through the world's scientific resources (its own electronic resources, scientometric databases, repositories, libraries, etc.).
If the License Agreement has not been signed, or if it lacks identifiers that allow the author's identity to be established, the editors are not entitled to work with the manuscript.
It is important to remember that another type of agreement between authors and publishers also exists, in which copyright is transferred from the authors to the publisher. In that case, the authors lose ownership of their work and may not use it in any way.