Development of a method for structural optimization of a neural network based on the criterion of resource utilization efficiency

Authors

DOI:

https://doi.org/10.15587/1729-4061.2019.164591

Keywords:

artificial neural network, structure optimization, approximation of functions, efficiency criterion

Abstract

At present, mathematical models in the form of artificial neural networks (ANNs) are widely used to solve approximation problems. Applying this technology involves a two-stage approach: determining the structure of the ANN model and then training it. Completing the learning process yields an approximation whose accuracy is determined by the complexity of the ANN structure. In other words, increasing the complexity of the ANN makes it possible to obtain a more accurate training result.

In this case, deriving an ANN model that performs the approximation with the assigned accuracy is treated as an optimization process.

However, increasing the complexity of the ANN not only improves accuracy but also prolongs the computation time.

Thus, the indicator "assigned accuracy" cannot be used in problems of determining the optimal neural network architecture. This is because selecting the model structure and training it to the required approximation accuracy may take an amount of time unacceptable to the user.

To solve the problem of structural identification of a neural network, an approach is used in which the model's configuration is determined based on an efficiency criterion. Implementing the constructed method involves balancing the time spent solving the problem against the accuracy of approximation.

The proposed approach makes it possible to substantiate the principle of choosing the structure and parameters of a neural network based on the maximum value of the indicator of effective resource use.
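The structure-selection idea described above can be illustrated with a minimal sketch. Here polynomial degree stands in for ANN structural complexity, parameter count stands in for computation time, and the efficiency indicator (relative accuracy gain per unit of cost) is a hypothetical example for illustration, not the formula from the paper:

```python
import numpy as np

def fit_error(degree, x, y):
    """RMS residual of a least-squares polynomial fit (proxy for a trained model's error)."""
    coeffs = np.polyfit(x, y, degree)
    residual = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(residual ** 2)))

def efficiency(error, cost, base_error):
    """Hypothetical indicator: relative accuracy gain per unit of resource cost."""
    return (base_error - error) / (base_error * cost)

x = np.linspace(0.0, np.pi, 200)
y = np.sin(x)  # target function to approximate

base_error = fit_error(0, x, y)   # simplest structure serves as the baseline
scores = {}
for degree in range(1, 10):       # candidate "structures" of growing complexity
    cost = degree + 1             # parameter count as a computation-cost proxy
    scores[degree] = efficiency(fit_error(degree, x, y), cost, base_error)

# The selected structure maximizes the efficiency indicator: beyond some
# complexity, the extra accuracy no longer justifies the extra cost.
best = max(scores, key=scores.get)
```

Unlike a fixed accuracy threshold, this criterion always selects some structure in bounded time: the search stops rewarding complexity once the marginal accuracy gain per unit of cost starts to fall.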

Author Biographies

Igor Lutsenko, Kremenchuk Mykhailo Ostrohradskyi National University Pershotravneva str., 20, Kremenchuk, Ukraine, 39600

Doctor of Technical Sciences, Professor

Department of Information and Control Systems

Oleksii Mykhailenko, Kryvyi Rih National University Vitaliya Matusevycha str., 11, Kryvyi Rih, Ukraine, 50027

PhD, Associate Professor

Department of Power Supply and Energy Management

Oksana Dmytriieva, Kharkiv National Automobile and Highway University Yaroslava Mudroho str., 25, Kharkiv, Ukraine, 61002

PhD, Associate Professor

Department of Management and Administration

Oleksandr Rudkovsky, Warsaw University of Life Sciences Nowoursynowska str., 166, Warszawa, Poland, 02-787

Doctor of Economic Sciences, Professor

Department of Geoengineering

Denis Mospan, Kremenchuk Mykhailo Ostrohradskyi National University Pershotravneva str., 20, Kremenchuk, Ukraine, 39600

PhD, Associate Professor

Department of Electronic Devices

Dmitriy Kukharenko, Kremenchuk Mykhailo Ostrohradskyi National University Pershotravneva str., 20, Kremenchuk, Ukraine, 39600

PhD, Associate Professor

Department of Electronic Devices

Hanna Kolomits, Kryvyi Rih National University Vitaliya Matusevycha str., 11, Kryvyi Rih, Ukraine, 50027

Assistant

Department of Electromechanics

Artem Kuzmenko, Kryvyi Rih National University Vitaliya Matusevycha str., 11, Kryvyi Rih, Ukraine, 50027

Senior Lecturer

Department of Electromechanics

References

  1. Gorban', A. N. (1998). Generalized approximation theorem and computational capabilities of neural networks. Siberian Journal of Numerical Mathematics, 1 (1), 11–24.
  2. Nelles, O. (2001). Nonlinear System Identification. From Classical Approaches to Neural Networks and Fuzzy Models. Springer, 785. doi: https://doi.org/10.1007/978-3-662-04323-3
  3. Diniz, P. S. R. (2008). Adaptive Filtering: Algorithms and Practical Implementation. Springer. doi: https://doi.org/10.1007/978-0-387-68606-6
  4. Mykhailenko, O. (2015). Research of adaptive algorithms of Laguerre model parametrical identification at approximation of ore breaking process dynamics. Metallurgical and Mining Industry, 6, 109–117.
  5. Mykhailenko, O. (2015). Ore Crushing Process Dynamics Modeling using the Laguerre Model. Eastern-European Journal of Enterprise Technologies, 4 (4 (76)), 30–35. doi: https://doi.org/10.15587/1729-4061.2015.47318
  6. Haykin, S. (2009). Neural Networks and Learning Machines. Pearson, 938.
  7. Yang, J., Ma, J., Berryman, M., Perez, P. (2014). A structure optimization algorithm of neural networks for large-scale data sets. 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). doi: https://doi.org/10.1109/fuzz-ieee.2014.6891662
  8. Han, S., Pool, J., Tran, J., Dally, W. (2015). Learning both Weights and Connections for Efficient Neural Network. Proceedings of Advances in Neural Information Processing Systems.
  9. Liu, C., Zhang, Z., Wang, D. (2014). Pruning deep neural networks by optimal brain damage. INTERSPEECH 2014, 1092–1095.
  10. Tresp, V., Neuneier, R., Zimmermann, H. G. (1996). Early Brain Damage. Proceedings of the 9th International Conference on Neural Information Processing Systems NIPS96, 669–675.
  11. Christiansen, N. H., Job, J. H., Klyver, K., Høgsberg, J. (2012). Optimal Brain Surgeon on Artificial Neural Networks in Nonlinear Structural Dynamics. Proceedings of the 25th Nordic Seminar on Computational Mechanics.
  12. Babaeizadeh, M., Smaragdis, P., Campbell, R. H. (2016). NoiseOut: A Simple Way to Prune Neural Networks. Proceedings of 29th Conference on Neural Information Processing Systems (NIPS 2016). Barcelona.
  13. He, T., Fan, Y., Qian, Y., Tan, T., Yu, K. (2014). Reshaping deep neural network for fast decoding by node-pruning. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). doi: https://doi.org/10.1109/icassp.2014.6853595
  14. Takeda, R., Nakadai, K., Komatani, K. (2017). Node Pruning Based on Entropy of Weights and Node Activity for Small-Footprint Acoustic Model Based on Deep Neural Networks. Interspeech 2017, 1636–1640. doi: https://doi.org/10.21437/interspeech.2017-779
  15. Islam, M., Sattar, A., Amin, F., Yao, X., Murase, K. (2009). A New Adaptive Merging and Growing Algorithm for Designing Artificial Neural Networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39 (3), 705–722. doi: https://doi.org/10.1109/tsmcb.2008.2008724
  16. Arifovic, J., Gençay, R. (2001). Using genetic algorithms to select architecture of a feedforward artificial neural network. Physica A: Statistical Mechanics and its Applications, 289 (3-4), 574–594. doi: https://doi.org/10.1016/s0378-4371(00)00479-9
  17. Fiszelew, A., Britos, P., Ochoa, A., Merlino, H., Fernández, E., García-Martínez, R. (2007). Finding Optimal Neural Network Architecture using Genetic Algorithms. Advances in Computer Science and Engineering Research in Computing Science, 27, 15–24.
  18. Yang, S.-H., Chen, Y.-P. (2012). An evolutionary constructive and pruning algorithm for artificial neural networks and its prediction applications. Neurocomputing, 86, 140–149. doi: https://doi.org/10.1016/j.neucom.2012.01.024
  19. Lutsenko, I. (2016). Definition of efficiency indicator and study of its main function as an optimization criterion. Eastern-European Journal of Enterprise Technologies, 6 (2 (84)), 24–32. doi: https://doi.org/10.15587/1729-4061.2016.85453
  20. Lutsenko, I., Fomovskaya, E., Oksanych, I., Koval, S., Serdiuk, O. (2017). Development of a verification method of estimated indicators for their use as an optimization criterion. Eastern-European Journal of Enterprise Technologies, 2 (4 (86)), 17–23. doi: https://doi.org/10.15587/1729-4061.2017.95914
  21. Lutsenko, I., Fomovskaya, O., Vihrova, E., Serdiuk, O., Fomovsky, F. (2018). Development of test operations with different duration in order to improve verification quality of effectiveness formula. Eastern-European Journal of Enterprise Technologies, 1 (4 (91)), 42–49. doi: https://doi.org/10.15587/1729-4061.2018.121810
  22. Lutsenko, I., Oksanych, I., Shevchenko, I., Karabut, N. (2018). Development of the method for modeling operational processes for tasks related to decision making. Eastern-European Journal of Enterprise Technologies, 2 (4 (92)), 26–32. doi: https://doi.org/10.15587/1729-4061.2018.126446

Published

2019-04-18

How to Cite

Lutsenko, I., Mykhailenko, O., Dmytriieva, O., Rudkovsky, O., Mospan, D., Kukharenko, D., Kolomits, H., & Kuzmenko, A. (2019). Development of a method for structural optimization of a neural network based on the criterion of resource utilization efficiency. Eastern-European Journal of Enterprise Technologies, 2 (4 (98)), 57–65. https://doi.org/10.15587/1729-4061.2019.164591

Issue

Section

Mathematics and Cybernetics - applied aspects