Development of a model for determining the necessary FPGA computing resource for placing a multilayer neural network on it

Authors

DOI:

https://doi.org/10.15587/1729-4061.2023.281731

Keywords:

FPGA, MLP, LSTM, CNN, SNN, GAN

Abstract

The object of this research is the implementation of artificial neural networks (ANNs) on FPGAs. The problem to be solved is the construction of a mathematical model that determines whether the computing resources of an FPGA meet the requirements of a neural network, depending on its type, structure, and size. The FPGA's computing resource is measured as the number of LUTs (look-up tables, the basic FPGA elements that perform logical operations).

The mathematical model was derived from experimental measurements of the number of LUTs required to implement the following types of ANNs on an FPGA:

– MLP (Multilayer Perceptron);

– LSTM (Long Short-Term Memory);

– CNN (Convolutional Neural Network);

– SNN (Spiking Neural Network);

– GAN (Generative Adversarial Network).

Experimental studies were carried out on the HAPS-80 S52 FPGA platform, measuring the required number of LUTs as a function of the number of layers and the number of neurons per layer for the above types of ANNs. As a result, analytical expressions were determined for the dependence of the required number of LUTs on the ANN type, number of layers, and number of neurons for the ANN types most commonly used in practice.
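The fitting step described above can be sketched as a least-squares fit of a candidate functional form to measured LUT counts. The quadratic form LUT = a·L·N² + b·L·N + c and the synthetic "measurements" below are illustrative assumptions, not the paper's actual data or fitted model:

```python
import numpy as np

def fit_lut_model(layers, neurons, luts):
    """Fit LUT(L, N) = a*L*N^2 + b*L*N + c by least squares.

    L is the number of layers, N the neurons per layer. The functional
    form is a hypothetical placeholder for the dependences determined
    experimentally in the paper.
    """
    L = np.asarray(layers, dtype=float)
    N = np.asarray(neurons, dtype=float)
    X = np.column_stack([L * N**2, L * N, np.ones_like(L)])  # design matrix
    coef, *_ = np.linalg.lstsq(X, np.asarray(luts, dtype=float), rcond=None)
    return coef  # (a, b, c)

# Synthetic measurements generated from a = 12, b = 300, c = 5000
layers  = [1, 2, 2, 3, 4, 4]
neurons = [8, 8, 16, 16, 32, 64]
luts    = [12 * l * n**2 + 300 * l * n + 5000 for l, n in zip(layers, neurons)]

a, b, c = fit_lut_model(layers, neurons, luts)
print(round(a), round(b), round(c))  # → 12 300 5000
```

Because the synthetic data are noise-free, the fit recovers the generating coefficients exactly; with real synthesis measurements the residuals would indicate how well the chosen functional form matches the FPGA's actual resource usage.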

A feature of the obtained results is that the analytical form of the functions describing the dependence of the required number of FPGA LUTs on the implemented ANN was determined with sufficiently high accuracy. According to the calculations, a GAN uses 17 times fewer LUTs than a CNN, while an SNN and an MLP use 80 and 14 times fewer LUTs, respectively, than an LSTM. The results can be applied in practice when choosing an FPGA for implementing an ANN of a given type and structure.
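The selection step suggested above can be sketched as follows. The relative ratios (GAN ≈ CNN/17, SNN ≈ LSTM/80, MLP ≈ LSTM/14) come from the abstract; the absolute baseline LUT figures for LSTM and CNN are hypothetical placeholders, not values reported by the paper:

```python
# Assumed baselines for a reference network size (illustrative only)
LSTM_LUTS = 1_600_000
CNN_LUTS = 3_400_000

# Estimated LUT demand per ANN type, using the ratios from the abstract
REQUIRED_LUTS = {
    "LSTM": LSTM_LUTS,
    "MLP": LSTM_LUTS // 14,
    "SNN": LSTM_LUTS // 80,
    "CNN": CNN_LUTS,
    "GAN": CNN_LUTS // 17,
}

def fits(budget_luts):
    """Return the ANN types whose estimated LUT demand fits the FPGA budget."""
    return sorted(t for t, need in REQUIRED_LUTS.items() if need <= budget_luts)

print(fits(250_000))  # → ['GAN', 'MLP', 'SNN']
```

With a 250k-LUT budget, only the lighter network types fit under these assumed baselines; the same check against a candidate FPGA's datasheet LUT count is how the model would be used in practice.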

Author Biographies

Bekbolat Medetov, S. Seifullin Kazakh Agro Technical Research University

PhD

Department of Radio Engineering, Electronics and Telecommunications

Tansaule Serikov, S. Seifullin Kazakh Agro Technical Research University

PhD

Department of Radio Engineering, Electronics and Telecommunications

Arai Tolegenova, S. Seifullin Kazakh Agro Technical Research University

Candidate of Technical Sciences

Department of Radio Engineering, Electronics and Telecommunications

Dauren Zhexebay, Al-Farabi Kazakh National University

PhD, Senior Lecturer

Department of Solid State Physics and Nonlinear Physics

Asset Yskak, Nazarbayev University

Master

Department of Computer Science

Timur Namazbayev, Al-Farabi Kazakh National University

Master, Senior Lecturer

Department of Solid State Physics and Nonlinear Physics

Nurtay Albanbay, Satbayev University

Master

Department of Cybersecurity, Information Processing and Storage


Published

2023-08-31

How to Cite

Medetov, B., Serikov, T., Tolegenova, A., Zhexebay, D., Yskak, A., Namazbayev, T., & Albanbay, N. (2023). Development of a model for determining the necessary FPGA computing resource for placing a multilayer neural network on it. Eastern-European Journal of Enterprise Technologies, 4 (4 (124)), 34–45. https://doi.org/10.15587/1729-4061.2023.281731

Issue

Section

Mathematics and Cybernetics - applied aspects