Metaheuristic optimization algorithm based on the two-step Adams-Bashforth method in training multi-layer perceptrons
DOI:
https://doi.org/10.15587/1729-4061.2022.254023

Keywords:
algorithm, Adams-Bashforth method, approximation, classification, global, metaheuristic, multilayer, perceptron, training, optimization

Abstract
A metaheuristic optimization algorithm based on the two-step Adams-Bashforth scheme (MOABT) is proposed in this paper and applied for the first time to multilayer perceptron (MLP) training. In computer science and mathematical optimization, a metaheuristic is a high-level procedure or set of guidelines designed to find, generate, or select a heuristic that yields high-quality solutions to an optimization problem, especially when the available information is insufficient or incomplete, or when computational capacity is limited. Many metaheuristic methods involve stochastic operations, so the resulting solution depends on the random variables generated during the search. Because a metaheuristic searches a broad range of feasible solutions simultaneously, it can often find good solutions with less computational effort than exact iterative methods, which makes it a useful approach to optimization problems. Several characteristics distinguish metaheuristic search strategies: the goal is to explore the search space efficiently in order to find an optimal or near-optimal solution, and the underlying techniques range from simple local searches to complex learning processes. Eight benchmark data sets are used to evaluate the proposed approach: five classification data sets and three function-approximation data sets. The numerical results were compared with those of the well-known evolutionary trainer Grey Wolf Optimizer (GWO). The statistical study revealed that the MOABT algorithm can outperform other algorithms in avoiding local optima and in the speed of convergence to the global optimum. The results also show that the considered problems can be classified and approximated with high accuracy.
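For context, the classical two-step Adams-Bashforth method advances the solution of y' = f(t, y) by combining the current and previous derivative evaluations: y_{n+2} = y_{n+1} + h(3/2 f(t_{n+1}, y_{n+1}) - 1/2 f(t_n, y_n)). The sketch below shows, in Python, how an update of this form could drive a population-based MLP trainer of the kind described in the abstract: each candidate encodes all network weights and biases, fitness is the mean squared error, and positions are advanced with the AB2 weights 3/2 and -1/2 applied to the current and previous search directions. This is only an illustrative reading under stated assumptions; the one-hidden-layer network, the random pull toward the best candidate, and all names (moabt_like_train, fitness, unpack) are hypothetical and are not the authors' MOABT implementation.

```python
# Illustrative sketch only: a population-based MLP trainer whose position
# update reuses the two-step Adams-Bashforth weights 3/2 and -1/2.
# The actual MOABT update rule is defined in the paper; the network shape,
# search direction, and all names here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    # One hidden layer with sigmoid activations on both layers.
    h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

def unpack(v, n_in, n_hid, n_out):
    # Decode a flat candidate vector into MLP weights and biases.
    i = 0
    w1 = v[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = v[i:i + n_hid]; i += n_hid
    w2 = v[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
    return w1, b1, w2, v[i:i + n_out]

def fitness(v, X, y, n_in, n_hid, n_out):
    # Mean squared error of the MLP encoded by candidate v (lower is better).
    pred = mlp_forward(X, *unpack(v, n_in, n_hid, n_out))
    return float(np.mean((pred - y) ** 2))

def moabt_like_train(X, y, n_hid=5, pop=30, iters=500, h=0.5):
    n_in, n_out = X.shape[1], y.shape[1]
    dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
    P = rng.uniform(-1.0, 1.0, (pop, dim))   # population of weight vectors
    f_prev = np.zeros_like(P)                # previous search directions
    for _ in range(iters):
        fits = [fitness(v, X, y, n_in, n_hid, n_out) for v in P]
        best = P[int(np.argmin(fits))].copy()
        # Stochastic search direction pulling each candidate toward the best.
        f_curr = rng.random(P.shape) * (best - P)
        # Two-step Adams-Bashforth combination of current and previous directions.
        P = P + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    fits = [fitness(v, X, y, n_in, n_hid, n_out) for v in P]
    return P[int(np.argmin(fits))]

# Example: fit XOR and report the final training MSE.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
w = moabt_like_train(X, y)
print("final MSE:", fitness(w, X, y, 2, 5, 1))
```

The AB2-weighted step is what distinguishes this template from a plain best-following update: the -1/2 coefficient on the previous direction acts as a momentum-like correction, which is one plausible way a multistep ODE scheme can be adapted into a metaheuristic.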
Supporting Agency
- We extend our sincere thanks and appreciation to the University of Mosul, the College of Computer Science and Mathematics, and the College of Education for Pure Sciences for their cooperation and support in completing this paper.
References
- Yilmaz, M., Kayabasi, E., Akbaba, M. (2019). Determination of the effects of operating conditions on the output power of the inverter and the power quality using an artificial neural network. Engineering Science and Technology, an International Journal, 22 (4), 1068–1076. doi: https://doi.org/10.1016/j.jestch.2019.02.006
- Vahora, S. A., Chauhan, N. C. (2019). Deep neural network model for group activity recognition using contextual relationship. Engineering Science and Technology, an International Journal, 22 (1), 47–54. doi: https://doi.org/10.1016/j.jestch.2018.08.010
- Maleki, A., Ghazvini, M., Ahmadi, M. H., Maddah, H., Shamshirband, S. (2019). Moisture Estimation in Cabinet Dryers with Thin-Layer Relationships Using a Genetic Algorithm and Neural Network. Mathematics, 7 (11), 1042. doi: https://doi.org/10.3390/math7111042
- Farzaneh-Gord, M., Mohseni-Gharyehsafa, B., Arabkoohsar, A., Ahmadi, M. H., Sheremet, M. A. (2020). Precise prediction of biogas thermodynamic properties by using ANN algorithm. Renewable Energy, 147, 179–191. doi: https://doi.org/10.1016/j.renene.2019.08.112
- Basheer, I. A., Hajmeer, M. (2000). Artificial neural networks: fundamentals, computing, design, and application. Journal of Microbiological Methods, 43 (1), 3–31. doi: https://doi.org/10.1016/s0167-7012(00)00201-3
- Li, J., Cheng, J., Shi, J., Huang, F. (2012). Brief Introduction of Back Propagation (BP) Neural Network Algorithm and Its Improvement. Advances in Computer Science and Information Engineering, 553–558. doi: https://doi.org/10.1007/978-3-642-30223-7_87
- Aljarah, I., Faris, H., Mirjalili, S. (2016). Optimizing connection weights in neural networks using the whale optimization algorithm. Soft Computing, 22 (1), 1–15. doi: https://doi.org/10.1007/s00500-016-2442-1
- Wang, L., Zeng, Y., Chen, T. (2015). Back propagation neural network with adaptive differential evolution algorithm for time series forecasting. Expert Systems with Applications, 42 (2), 855–863. doi: https://doi.org/10.1016/j.eswa.2014.08.018
- Yang, X.-S. (2010). Nature-Inspired Metaheuristic Algorithms. Luniver Press, 75.
- Wang, G.-G., Gandomi, A. H., Alavi, A. H., Hao, G.-S. (2013). Hybrid krill herd algorithm with differential evolution for global numerical optimization. Neural Computing and Applications, 25 (2), 297–308. doi: https://doi.org/10.1007/s00521-013-1485-9
- Reed, R., Marks, R. J. (1999). Neural Smithing. The MIT Press. doi: https://doi.org/10.7551/mitpress/4937.001.0001
- Dey, N., Ashour, A. S., Bhattacharyya, S. (Eds.) (2020). Applied Nature-Inspired Computing: Algorithms and Case Studies. Springer, 275. doi: https://doi.org/10.1007/978-981-13-9263-4
- Dey, N. (Ed.) (2018). Advancements in Applied Metaheuristic Computing. IGI Global. doi: https://doi.org/10.4018/978-1-5225-4151-6
- Gandomi, A. H., Yang, X.-S., Alavi, A. H. (2013). Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Engineering with Computers, 29 (1), 17–35. doi: https://doi.org/10.1007/s00366-011-0241-y
- Mirjalili, S., Mirjalili, S. M., Lewis, A. (2014). Grey Wolf Optimizer. Advances in Engineering Software, 69, 46–61. doi: https://doi.org/10.1016/j.advengsoft.2013.12.007
- Mirjalili, S., Lewis, A. (2016). The Whale Optimization Algorithm. Advances in Engineering Software, 95, 51–67. doi: https://doi.org/10.1016/j.advengsoft.2016.01.008
- Yang, X.-S. (2009). Firefly Algorithms for Multimodal Optimization. Lecture Notes in Computer Science, 169–178. doi: https://doi.org/10.1007/978-3-642-04944-6_14
- Yang, X.-S., Gandomi, A. H. (2012). Bat algorithm: a novel approach for global engineering optimization. Engineering Computations, 29 (5), 464–483. doi: https://doi.org/10.1108/02644401211235834
- Yu, J., Wang, S., Xi, L. (2008). Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing, 71 (4-6), 1054–1060. doi: https://doi.org/10.1016/j.neucom.2007.10.013
- Valian, E., Mohanna, S., Tavakoli, S. (2011). Improved Cuckoo Search Algorithm for Feedforward Neural Network Training. International Journal of Artificial Intelligence & Applications, 2 (3), 36–43. doi: https://doi.org/10.5121/ijaia.2011.2304
- Jaddi, N. S., Abdullah, S., Hamdan, A. R. (2015). Multi-population cooperative bat algorithm-based optimization of artificial neural network model. Information Sciences, 294, 628–644. doi: https://doi.org/10.1016/j.ins.2014.08.050
- Mirjalili, S. (2015). How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Applied Intelligence, 43 (1), 150–161. doi: https://doi.org/10.1007/s10489-014-0645-7
- Nandy, S., Sarkar, P. P., Das, A. (2012). Analysis of a nature inspired firefly algorithm based back-propagation neural network training. arXiv. doi: https://doi.org/10.48550/arXiv.1206.5360
- Heidari, A. A., Faris, H., Aljarah, I., Mirjalili, S. (2018). An efficient hybrid multilayer perceptron neural network with grasshopper optimization. Soft Computing, 23 (17), 7941–7958. doi: https://doi.org/10.1007/s00500-018-3424-2
- Xu, J., Yan, F. (2018). Hybrid Nelder–Mead Algorithm and Dragonfly Algorithm for Function Optimization and the Training of a Multilayer Perceptron. Arabian Journal for Science and Engineering, 44 (4), 3473–3487. doi: https://doi.org/10.1007/s13369-018-3536-0
- Tang, R., Fong, S., Dey, N. (2018). Metaheuristics and Chaos Theory. In: Chaos Theory. IntechOpen. doi: https://doi.org/10.5772/intechopen.72103
- Karaagac, B. (2019). Two step Adams Bashforth method for time fractional Tricomi equation with non-local and non-singular Kernel. Chaos, Solitons & Fractals, 128, 234–241. doi: https://doi.org/10.1016/j.chaos.2019.08.007
- Butcher, J. C. (2016). Numerical Methods for Ordinary Differential Equations. John Wiley & Sons. doi: https://doi.org/10.1002/9781119121534
- Ahmadianfar, I., Heidari, A. A., Gandomi, A. H., Chu, X., Chen, H. (2021). RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Systems with Applications, 181, 115079. doi: https://doi.org/10.1016/j.eswa.2021.115079
- Belew, R. K., McInerney, J., Schraudolph, N. N. (1990). Evolving networks: Using the genetic algorithm with connectionist learning. CSE Technical report #CS90-174. Available at: https://nic.schraudolph.org/pubs/BelMcISch92.pdf
License
Copyright (c) 2022 Hisham M. Khudhur, Kais I. Ibraheem
This work is licensed under a Creative Commons Attribution 4.0 International License.