The role of a new generation of systems with artificial intelligence in human development (on the example of the ChatGPT network)
DOI: https://doi.org/10.31498/2225-6733.49.1.2024.321205

Keywords: artificial intelligence, ChatGPT, vector of entropy change, ChatGPT subjectivity, user objectivity

Abstract
The paper examines the «pro-contra» of the relationship between human and machine, where the latter means artificial intelligence (AI) systems. For the first time, attention is paid to such an aspect of the issue as the subjectivity and objectivity of the parts of the human-machine system. With the development of AI, exemplified by GPT-4 and especially GPT-5, more and more advantages can be attributed to AI systems, and the human gradually, and not for the first time, loses priority in competition with the «machine». Such a seemingly unshakable human quality as cognition is increasingly reproduced in new AI systems and, in particular, in GPT: its components, such as vision, sensory perception, and hearing, are gradually reflected in «digitized» AI functions. One of the immediate reasons for such changes is the shift of the human role in the «man-machine-environment» system from traditional subjectivity to objectivity, with a gradual loss of the ability to influence AI systems. It is shown that the main goal, the ability of AI to self-develop, that is, to derive new knowledge from the knowledge already available at this stage, is achieved by a simple increase in memory capacity and in the speed of its processing using the specialized generative neural network «Transformer». This contributes to the formation of a specialized logic of near-instantaneous enumeration of options, which turns out to be preferable to the selective cognitive logic of a human and can mean, for example, a transfer of activity, and even subjectivity, from the human towards AI. Such a transfer can take place in only one predictable case: when AI finds internal means of comparing itself with a human in terms of its cognitive qualities.
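The abstract attributes this capability to scaled-up memory and processing speed in the «Transformer» architecture (Vaswani et al., cited in the references below). As a minimal illustrative sketch of that architecture's core operation, scaled dot-product self-attention, consider the following Python snippet; the function name, shapes, and toy data are assumptions introduced here for illustration, not the authors' implementation or the code of any GPT system.

# Minimal sketch (illustrative only): scaled dot-product self-attention,
# the core operation of the Transformer architecture referenced below.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # pairwise token-to-token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V                                 # context-weighted mixture of value vectors

# Toy example: 4 tokens with 8-dimensional embeddings (arbitrary random data).
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)     # (4, 8)

The point of the sketch is that each output row is simply a weighted enumeration over all other tokens, computed in parallel, which is the «enumeration of options» the abstract contrasts with human selective cognition.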
References
Crawford K. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021. 288 p. DOI: https://doi.org/10.2307/j.ctv1ghv45t.
Lanier J. Ten Arguments for Deleting Your Social Media Accounts Right Now. Henry Holt and Co., 2018. 160 p.
Dreyfus H. L. What Computers Still Can’t Do: A Critique of Artificial Reason. MIT Press. 1992. 408 p.
Chomsky N. The False Promise of ChatGPT. 2023. URL: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html (accessed: 08.05.2024).
Marcus G., Davis E. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon, 2019. 273 p.
Mitchell M. Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux, 2019. 336 p.
LeCun Y. Self-Supervised Learning: The Dark Matter of Intelligence. 2020. URL: https://ai.meta.com/blog/self-supervised-learning-the-dark-matter-of-intelligence/ (accessed: 16.07.2024).
Attention is all you need / A. Vaswani et al. NIPS 2017: 31st Conference on Neural Information Processing Systems, Long Beach, CA, USA, 5-10 December 2017. Pp. 1-11. DOI: https://doi.org/10.48550/arXiv.1706.03762.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding / Devlin J., Chang M.-W., Lee K., Toutanova K. Proceedings of NAACL-HLT 2019, Minneapolis, Minnesota, 2 June – 7 June 2019. Pp. 4171-4186. DOI: https://doi.org/10.48550/arXiv.1810.04805.
Language Models are Few-Shot Learners / Brown T. et al. NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems, Vancouver, Canada, 6-12 December 2020. Vol. 33. Pp. 1877-1901. DOI: https://doi.org/10.48550/arXiv.2005.14165.
Understanding and Improving Layer Normalization / J. Xu et al. NeurIPS 2019: 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, 8-14 December 2019. Pp. 1-11. DOI: https://doi.org/10.48550/arXiv.1911.07013.
Markoff J. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. Ecco, 2015. 400 p.
Kurzweil R. The Singularity is Near. Viking, 2005. 652 p.
Prigogine I., George C. The Second Law as a Selection Principle: The Microscopic Theory of Dissipative Processes in Quantum Systems. Proceedings of the National Academy of Sciences. 1983. Vol. 80. Pp. 4590-4594. DOI: https://doi.org/10.1073/pnas.80.14.4590.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
The journal «Reporter of the Priazovskyi State Technical University. Section: Technical sciences» is published under the CC BY license (Attribution License).
This license allows for the distribution, editing, modification, and use of the work as a basis for derivative works, even for commercial purposes, provided that proper attribution is given. It is the most flexible of all available licenses and is recommended for maximum dissemination and use of non-restricted materials.
Authors who publish in this journal agree to the following terms:
1. Authors retain the copyright of their work and grant the journal the right of first publication under the terms of the Creative Commons Attribution License (CC BY). This license allows others to freely distribute the published work, provided that proper attribution is given to the original authors and the first publication of the work in this journal is acknowledged.
2. Authors are allowed to enter into separate, additional agreements for non-exclusive distribution of the work in the same form as published in this journal (e.g., depositing it in an institutional repository or including it in a monograph), provided that a reference to the first publication in this journal is maintained.







