Formalization of text prompts to artificial intelligence systems

Authors

Vladyslav Oliinyk, Andrii Biziuk, Zhanna Deineko, Viktor Chelombitko

DOI:

https://doi.org/10.15587/1729-4061.2025.335473

Keywords:

prompt formalization, large language models, artificial intelligence, structured templates, wargames

Abstract

The object of this study is the process of formalizing text prompts to large language models in order to automatically generate action cards for hexagonal tabletop wargames. The problem addressed stems from the ambiguity (42% of terms misinterpreted), contextual incompleteness (37%), and syntactic variability (21%) of natural-language prompts, which result in unhelpful and unpredictable responses.

To address this problem, a conceptual and practical model is proposed that combines structured prompt templates, a localized glossary of key terms, and explicit instructions on the response format.
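The templates themselves are presented in the full paper rather than in this abstract; the following is a minimal sketch of how such a formalized prompt could be assembled from a template, a glossary, and a format instruction. The template text, glossary entries, and key names below are illustrative assumptions, not the study's actual templates.

```python
# Minimal sketch of prompt formalization: a structured template, a glossary of
# domain terms, and an explicit response-format instruction combined into one
# prompt string. All names and texts are illustrative assumptions.

TEMPLATE = (
    "You are generating an action card for a hexagonal tabletop wargame.\n"
    "Glossary of terms:\n{glossary}\n"
    "Task: create one action card for the unit type '{unit_type}'.\n"
    "Response format: {format_instruction}"
)

GLOSSARY = {
    "hex": "one cell of the hexagonal map grid",
    "action card": "a card describing a single move, attack, or event",
    "d6": "a roll of a standard six-sided die (values 1-6)",
}

FORMAT_INSTRUCTION = (
    "Return only valid JSON with the keys: title, cost, effect, d6_modifier."
)


def build_prompt(unit_type: str) -> str:
    """Assemble a formalized prompt from the template, glossary, and format rule."""
    glossary_text = "\n".join(f"- {term}: {meaning}" for term, meaning in GLOSSARY.items())
    return TEMPLATE.format(
        glossary=glossary_text,
        unit_type=unit_type,
        format_instruction=FORMAT_INSTRUCTION,
    )


print(build_prompt("light infantry"))
```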

Practical verification consisted of generating a set of action cards for solo wargames with modern large language models: more than 100 prompts were issued and more than 300 responses analyzed. The experiments demonstrated that the formalized prompts reduced the total error rate by 58% and increased the relevance of responses from 55–65% to 88–92%. The average time needed to prepare prompts was reduced by 25–40%. The “d6-table” templates kept the output format stable in 90–95% of cases, while JSON structures did so in 85–90% of cases. The glossary and structure definitions integrated into the prompts minimized semantic discrepancies and syntactic errors.
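The abstract does not reproduce the expected output structures; as an illustrative assumption, a JSON-formatted action card returned by a model could be checked for format stability along the following lines, with the key names (title, cost, effect, d6_modifier) being hypothetical.

```python
import json

# Hypothetical validation of a model response against an expected JSON card
# structure; the key names are illustrative assumptions, not the study's schema.
EXPECTED_KEYS = {"title", "cost", "effect", "d6_modifier"}


def is_valid_card(response_text: str) -> bool:
    """Return True if the response parses as JSON and contains exactly the expected keys."""
    try:
        card = json.loads(response_text)
    except json.JSONDecodeError:
        return False
    return isinstance(card, dict) and set(card) == EXPECTED_KEYS


print(is_valid_card('{"title": "Flanking March", "cost": 2, '
                    '"effect": "Move 2 hexes ignoring terrain", "d6_modifier": "+1"}'))  # True
print(is_valid_card("Here is your card: ..."))                                           # False
```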

A distinctive feature of the proposed prompt template structures is their adaptability to different subject areas through the use of a description language specific to each area. The results have practical value for automating game content development and can be adapted to other subject areas where the accuracy, consistency, and structure of language model responses are important.

The proposed systematic approach facilitates the automation of complex content development with a guaranteed increase in the quality and predictability of responses from large language models.

Author Biographies

Vladyslav Oliinyk, Kharkiv National University of Radio Electronics

PhD Student

Department of Media Systems and Technologies

Andrii Biziuk, Kharkiv National University of Radio Electronics

PhD, Associate Professor, Professor

Department of Media Systems and Technologies

Zhanna Deineko, Kharkiv National University of Radio Electronics

PhD

Department of Media Systems and Technologies

Viktor Chelombitko, Kharkiv National University of Radio Electronics

PhD

Department of Media Systems and Technologies

Published

2025-10-31

How to Cite

Oliinyk, V., Biziuk, A., Deineko, Z., & Chelombitko, V. (2025). Formalization of text prompts to artificial intelligence systems. Eastern-European Journal of Enterprise Technologies, 5 (2 (137)), 84–97. https://doi.org/10.15587/1729-4061.2025.335473