THE FUNDAMENTALS AND COMPONENTS OF ARTIFICIAL INTELLIGENCE

Authors

DOI:

https://doi.org/10.63978/3083-6476.2025.1.1.08

Keywords:

artificial intelligence, neural network, model, cybersecurity

Abstract

The article is devoted to highlighting the basics and components of artificial intelligence. The science of artificial intelligence began to develop in the 1940s, when a group of scientists from various fields came to the conclusion that it was possible to create an artificial brain. At that time, its theoretical and mathematical foundations and components began to take shape. Research in the field of neuroscience showed that the brain is a network of neurons that exchange electrical signals on an all-or-nothing, 0 or 1 basis. The cybernetics of the American mathematician Norbert Wiener described the basics of control and stability in electrical networks. The information theory of the American electrical engineer and mathematician Claude Shannon laid the groundwork for understanding digital signals. The English mathematician and cryptographer Alan Turing showed that, in principle, any computation could be carried out using digital operations. The American neuropsychologist and neurophysiologist Warren McCulloch and the logician and mathematician Walter Pitts analyzed networks of idealized artificial neurons and demonstrated how they could perform the simplest logical functions. The Canadian physiologist and neuropsychologist Donald Hebb proposed a theory of learning based on the plasticity of neurons and synapses, whose connections can strengthen or weaken over time. These scientists were the first to describe what researchers would later call a neural network. In 1957, Frank Rosenblatt proposed the Perceptron, a mathematical or computer model of the brain's perception of information (a cybernetic model of the brain), first implemented as the Mark I electronic machine in 1960. The perceptron was one of the first models of neural networks: essentially a mathematical model that emulates the human nervous system.
The following types of neural networks are considered: Feedforward, Recurrent, Convolutional, Long Short-Term Memory, Convolutional Recurrent Neural Networks, Generative Adversarial Networks, Hopfield Networks, Boltzmann Machines, and Memory Networks. It is shown that neural networks are a powerful modern tool of artificial intelligence that can model and simulate the work of the human brain, and a practical mechanism that enables the implementation of Machine Learning and Deep Learning technologies. Thus, artificial intelligence models based on neural networks are used to solve complex problems that cannot always be solved using mathematical formulas or rules. They are capable of learning from large amounts of data and recognizing complex dependencies in that data. They can be used in various technological products and industries, including cybersecurity.
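The perceptron described in the abstract can be illustrated with a minimal sketch: a weighted sum of inputs passed through an all-or-nothing step activation, trained with Rosenblatt's learning rule. The function names, learning rate, and AND-function data below are illustrative, not taken from the article.

```python
def step(x):
    """All-or-nothing activation: the neuron fires (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Learn weights and a bias for linearly separable data with the perceptron rule."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            y = step(sum(wi * xi for wi, xi in zip(w, x)) + b)
            error = target - y
            # Perceptron rule: nudge weights and bias toward the correct output
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Logical AND: one of the "simplest logical functions" a single
# artificial neuron can compute, as noted in the abstract
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
predictions = [step(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in samples]
print(predictions)
```

On this linearly separable data the learning rule converges within a few epochs, and the trained neuron reproduces the AND truth table.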

Author Biography

Yurii Lysetsyi, Yevgeny Bereznyak Military Academy, Kyiv

Doctor of Technical Sciences,
Associate Professor,
Professor of the Department

References

The History of Artificial Intelligence from the 1950s to Today. URL: https://www.freecodecamp.org/news/the-history-of-ai/ (accessed 24.10.2024).

Lysetskyi Yu.M., Sokur V.V. Theoretical and Mathematical Foundations of Artificial Intelligence. Bulletin of Military Intelligence. 2024. Vol. 85. P. 51-56.

Wiener N. Cybernetics: or Control and Communication in the Animal and the Machine. Paris; Cambridge, Mass., 1948. 194 p.

Shannon C.E. A Mathematical Theory of Communication. Bell System Technical Journal. 1948. № 27. P. 379-423.

Turing A.M. Computing Machinery and Intelligence. Mind. 1950. № 49. P. 433-460.

McCulloch W.S., Pitts W. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics. 1943. Vol. 5. P. 115-133.

Hebb D.O. The Organization of Behavior: A Neuropsychological Theory. New York: Wiley, 1949. 337 p.

Rosenblatt F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review. 1958. Vol. 65. № 6. P. 386-408.

Kalbazov D.Y., Danchenko O.I., Lysetskyi Yu.M. Neural Networks: Development and Prospects. Mathematical Machines and Systems. 2024. № 2. P. 24-32.

What is Perceptron? A Beginners Guide for 2023. URL: https://www.simplilearn.com/tutorials/deep-learning-tutorial/perceptron (accessed 15.11.2024).

7 Neural Network Architectures for Solving NLP Tasks. URL: https://neurohive.io/ru/osnovy-data-science/7-arhitektur-nejronnyh-setej-nlp/ (accessed 23.09.2024).

What types of neural networks exist? URL: https://devzone.org.ua/qna/iaki-isnuiut-tipi-neironnix-merez.

What is machine learning? URL: https://www.ibm.com/topics/machine-learning (accessed 17.12.2024).

What is deep learning and how does it work? URL: https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network (accessed 12.12.2024).

What Is Deep Learning? How It Works, Techniques. URL: https://www.mathworks.com/discovery/deep-learning.html (accessed 12.12.2024).

ChatGPT from OpenAI: what can it do and how to use it? URL: https://www.imena.ua/blog/chatgpt/?gclid=EAIaIQobChMIp-mGo9f8gQMVvJiDBx22rAZYEAAYASAAEgJcnPD_BwE (accessed 15.12.2024).

Published

2025-05-19

Issue

Section

INFORMATION SYSTEMS AND TECHNOLOGIES