The balance between innovation and human rights: problems of applying artificial intelligence

Author(s)

  • Terezia Popovich, Candidate of Law, Associate Professor, Associate Professor of the Department of Theory and History of State and Law, State Higher Educational Institution “Uzhgorod National University”, Ukraine. https://orcid.org/0000-0002-8333-3921

DOI:

https://doi.org/10.61345/1339-7915.2025.3.16

Keywords:

artificial intelligence, human rights, algorithmic discrimination, right to privacy, digital technologies

Abstract

The article provides a comprehensive analysis of the impact of artificial intelligence technologies on the human rights system in the context of the digital transformation of society. The main threats and challenges that AI systems pose to the exercise of fundamental human rights and freedoms are investigated. Particular attention is paid to algorithmic bias, which violates the right to non-discrimination; the mass collection and processing of personal data without proper oversight, which threatens the right to privacy; restrictions on freedom of expression through automated content moderation; and threats to the right to a fair trial where automated decision-making systems are used without adequate transparency.

The specific risks associated with mass surveillance and biometric identification technologies are analyzed, including facial recognition systems in public places, the use of AI in employment and the military, and the manipulation of public opinion through deepfake technologies. Three stages of assessing the impact of AI on human rights are considered: analysis of the quality of training data, risk assessment at the system design stage, and consideration of algorithmic interactions. It is argued that AI systems, by their nature, reproduce the social biases embedded in historical data and lack any inherent ability to adjust their behavior as society’s ethical norms evolve. The application of artificial intelligence in the financial sector is examined in detail, in particular in credit scoring systems, where algorithms analyze vast amounts of data about an applicant’s digital footprint. The problem of “network discrimination” is identified: a person’s financial capacity is assessed on the basis of the characteristics of their social environment, which violates the principle of individual responsibility and can restrict freedom of belief through self-censorship. The practice of American companies illustrates how the use of AI systems in financial decision-making can both expand access to credit for members of marginalized communities and reinforce existing forms of discrimination.

References

Ombudsman of Ukraine. (2024). Prava liudyny v epokhu shtuchnoho intelektu [Human rights in the age of artificial intelligence]. https://ombudsman.gov.ua/storage/app/media/uploaded-files/%D0%9F%D0%A0%D0%90%D0%92%D0%90%20%D0%9B%D0%AE%D0%94%D0%98%D0%9D%D0%98%20%D0%92%20%D0%95%D0%9F%D0%9E%D0%A5%D0%A3%20%D0%A8%D0%A2%D0%A3%D0%A7%D0%9D%D0%9E%D0%93%D0%9E%20%D0%86%D0%9D%D0%A2%D0%95%D0%9B%D0%95%D0%9A%D0%A2%D0%A3_compressed.pdf [in Ukrainian]

Kravchuk, S.M. (2024). Vplyv shtuchnoho intelektu na prava liudyny ta zahalni rekomendatsii dlia staloho vtilennia [The impact of artificial intelligence on human rights and general recommendations for sustainable implementation]. Visnyk Natsionalnoho universytetu «Lvivska politekhnika». Seriia: Yurydychni nauky [Bulletin of Lviv Polytechnic National University. Series: Legal Sciences], 3(43), 101–110. [in Ukrainian]

Van Est, R., Gerritsen, J., & Kool, L. (2017). Human rights in the robot age: Challenges arising from the use of robotics, artificial intelligence, and virtual and augmented reality (Expert report written for the Committee on Culture, Science, Education and Media of the Parliamentary Assembly of the Council of Europe). Rathenau Instituut. https://www.rathenau.nl/en/publication/human-rights-robot-age-challengesarising-use-robotics-artificial-intelligence-and [in English]

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [in English]

Liu, H.-Y., & Zawieska, K. (2017). A new human rights regime to address robotics and artificial intelligence. http://www.werobot2017.com/wp-content/uploads/2017/03/Liu-A-NewHuman-Rights-Regime-to-Address-Robotics-and-Artificial-Intelligence.pdf [in English]

Raso, F. A., Hilligoss, H., Krishnamurthy, V., Bavitz, Ch., & Kim, L. (2018). Artificial intelligence & human rights: Opportunities & risks (Berkman Klein Center Research Publication No. 2018-6). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259344 [in English]

Russell, S. (2017). Artificial intelligence: The future is superintelligent. Nature, 548, 520–521. [in English]

Chander, A. (2017). The racist algorithm? Michigan Law Review, 115(6), 1023–1045. http://michiganlawreview.org/wp-content/uploads/2017/04/115MichLRev1023_Chander.pdf [in English]

Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. In S. Halevi & T. Rabin (Eds.), Theory of cryptography (pp. 265–284). Springer Heidelberg. [in English]

Kozmenkov, M.H. (2025). Vykorystannia shtuchnoho intelektu v finansovykh ustanovakh v Ukraini: mozhlyvosti ta vyklyky dlia ekosystemy onlain servisiv [Use of artificial intelligence in financial institutions in Ukraine: Opportunities and challenges for the online services ecosystem]. Problemy suchasnykh transformatsii. Seriia: ekonomika ta upravlinnia [Problems of Modern Transformations. Series: Economics and Management], 19. [in Ukrainian]

Lenddo. (2018, June 20). Credit scoring: The LenddoScore fact sheet. https://www.lenddo.com/pdfs/Lenddo_FS_CreditScoring_201705.pdf [in English]

Lieber, R. (2009, January 30). American Express watched where you shopped. The New York Times. https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html [in English]

Hurley, M., & Adebayo, J. (2016). Credit scoring in the era of big data. Yale Journal of Law & Technology, 18(1), 148–216. [in English]

Bielova, M.V., & Bielov, D.M. (2023). Implementatsiia shtuchnoho intelektu v dosudove rozsliduvannia kryminalnykh sprav: mizhnarodnyi dosvid [Implementation of artificial intelligence in pre-trial investigation of criminal cases: International experience]. Analitychno-porivnialne pravoznavstvo [Analytical and Comparative Jurisprudence], 2, 448–454. [in Ukrainian]

Published

2025-10-21

Issue

Section

Articles