Development of methodology for modeling the interaction of antagonistic agents in cybersecurity systems

DOI:

https://doi.org/10.15587/1729-4061.2019.164730

Keywords:

cybersecurity, antagonistic agents, modeling methodology, system dynamics, reflective agent, multi-agent systems, cognitive modeling

Abstract

The basic concepts underlying integrated modeling of the behavior of antagonistic agents in cybersecurity systems are identified. It is shown that existing work largely emphasizes modeling the behavior of only one party to a cyber conflict. In cases where the interaction of all parties to the conflict is considered, the approaches used either focus on solving particular problems or model a simplified situation.

A methodology for modeling the interaction of antagonistic agents in cybersecurity systems is proposed, focused on the use of a multi-model complex with elements of cognitive modeling. To this end, the main components of cyber conflict for which models must be developed are highlighted. It is proposed to implement the modeling of the interaction of antagonistic agents as the simulation of situations; the concept of a situation is formulated and its components are presented.
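Purely for illustration, and not as the authors' formalism, a situation could be represented as a composite of the conflict parties, the state of the protected environment, and the accumulated interaction scenario; the Python sketch below assumes exactly these hypothetical components.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Agent:
    """An antagonistic agent (attacker or defender) with a simple belief state."""
    name: str
    role: str                      # "attacker" or "defender"
    resources: float               # effort available for actions
    beliefs: Dict[str, float] = field(default_factory=dict)  # reflective view of the opponent


@dataclass
class Situation:
    """Hypothetical composition of a cyber-conflict situation (assumed components)."""
    agents: List[Agent]            # parties to the conflict
    environment: Dict[str, float]  # state of the protected system, e.g. share of compromised assets
    events: List[str] = field(default_factory=list)  # interaction scenario accumulated during simulation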

In the proposed methodology, traditional methods and modeling tools are not opposed to one another but are considered jointly, forming a unified methodological basis for modeling the behavior of antagonistic agents.

In the proposed multi-model complexes, the individual elements and functions of the entities under study are described by models of various classes at an appropriate level of detail. The coordinated use of different models improves the quality of modeling by compensating for the shortcomings of some models with the advantages of others; in particular, the dynamics of interaction, which is difficult to capture in classical game-theoretic models, can be reflected in system-dynamic and agent-based models.
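As a hedged illustration of such coordination (an assumption for exposition, not the paper's model complex), the toy Python fragment below lets agent-based attacker and defender decisions drive a system-dynamic stock of compromised assets, a feedback loop that a one-shot game-theoretic formulation would not capture.

import random


def simulate_conflict(steps: int = 50, seed: int = 1) -> float:
    """Toy hybrid simulation: agent decisions drive a system-dynamic stock of compromised assets."""
    rng = random.Random(seed)
    compromised = 0.0          # system-dynamic stock: share of compromised assets (0..1)
    attacker_effort = 0.5      # agent-level state variables
    defender_effort = 0.5

    for _ in range(steps):
        # Agent-based layer: each agent adapts its effort to the observed state.
        attacker_effort = min(1.0, attacker_effort + 0.1 * (0.5 - compromised) + 0.05 * rng.random())
        defender_effort = min(1.0, defender_effort + 0.2 * compromised)

        # System-dynamic layer: inflow (successful attacks) minus outflow (recovery).
        attack_inflow = 0.1 * attacker_effort * (1.0 - compromised)
        recovery_outflow = 0.15 * defender_effort * compromised
        compromised = min(1.0, max(0.0, compromised + attack_inflow - recovery_outflow))

    return compromised


if __name__ == "__main__":
    print(f"Share of compromised assets after simulation: {simulate_conflict():.3f}")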

Multi-model complexes also make it possible to formulate the concept of «virtual modeling», which allows simulation using models of various classes. The choice of model class should correspond to the goals and objectives of modeling as well as to the nature and structure of the source data.

As a result of the research, a methodology is proposed for modeling the interaction of antagonistic agents in cybersecurity systems, using methods based on the proposed models of the reflective behavior of antagonistic agents under the conditions of modern hybrid threats.
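For readers unfamiliar with reflective behavior, a minimal sketch (hypothetical, not the proposed reflective models) is a defender that anticipates the attacker's best response to each of its own options before committing to one; the payoff table below is invented for illustration.

from typing import Dict, Tuple

# Toy payoff table: (defender_action, attacker_action) -> (defender_payoff, attacker_payoff).
PAYOFFS: Dict[Tuple[str, str], Tuple[float, float]] = {
    ("patch", "exploit"): (2.0, -1.0),
    ("patch", "phish"):   (1.0,  1.0),
    ("train", "exploit"): (0.0,  2.0),
    ("train", "phish"):   (2.0, -1.0),
}


def attacker_best_response(defender_action: str) -> str:
    """Attacker picks the reply maximizing its own payoff against the observed defense."""
    return max(
        {a for d, a in PAYOFFS if d == defender_action},
        key=lambda a: PAYOFFS[(defender_action, a)][1],
    )


def reflective_defender_choice() -> str:
    """First-rank reflexion: the defender anticipates the attacker's best response to each option."""
    defender_actions = {d for d, _ in PAYOFFS}
    return max(
        defender_actions,
        key=lambda d: PAYOFFS[(d, attacker_best_response(d))][0],
    )


if __name__ == "__main__":
    print("Reflective defender chooses:", reflective_defender_choice())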

Author Biographies

Oleksandr Milov, Simon Kuznets Kharkiv National University of Economics, Nauky ave., 9-A, Kharkiv, Ukraine, 61166

PhD, Associate Professor

Department of Cyber Security and Information Technology

Alexander Voitko, Institute of Information Technologies, National University of Defense of Ukraine named after Ivan Chernyakhovsky, Povitroflotskiy ave., 28, Kyiv, Ukraine, 03049

PhD, Head of Research Laboratory

Research Laboratory of Information Security Issues

Department of Information Technology and Information Security Employment

Iryna Husarova, Kharkiv National University of Radio Electronics, Nauky ave., 14, Kharkiv, Ukraine, 61166

PhD, Associate Professor

Department of Applied Mathematics

Oleg Domaskin, Odessa National Economic University, Preobrazhenska str., 8, Odessa, Ukraine, 65082

PhD

Department of Economic Cybernetics and Information Technologies

Yevheniia Ivanchenko, National Aviation University, Kosmonavta Komarova ave., 1, Kyiv, Ukraine, 03058

PhD, Associate Professor

Department of Information Technology Security

Ihor Ivanchenko, National Aviation University, Kosmonavta Komarova ave., 1, Kyiv, Ukraine, 03058

PhD

Department of Information Technology Security

Olha Korol, Simon Kuznets Kharkiv National University of Economics, Nauky ave., 9-A, Kharkiv, Ukraine, 61166

PhD, Associate Professor

Department of Information Systems

Hryhorii Kots, Simon Kuznets Kharkiv National University of Economics, Nauky ave., 9-A, Kharkiv, Ukraine, 61166

PhD, Associate Professor

Department of Cyber Security and Information Technology

Ivan Opirskyy, Lviv Polytechnic National University, S. Bandery str., 12, Lviv, Ukraine, 79013

Doctor of Technical Sciences

Department of Information Security

Oleksii Fraze-Frazenko, Odessa State Academy of Technical Regulation and Quality, Kovalska str., 15, Odessa, Ukraine, 65020

PhD, Associate Professor

Department of Automated Systems and Cybersecurity

Published

2019-04-18

How to Cite

Milov, O., Voitko, A., Husarova, I., Domaskin, O., Ivanchenko, Y., Ivanchenko, I., Korol, O., Kots, H., Opirskyy, I., & Fraze-Frazenko, O. (2019). Development of methodology for modeling the interaction of antagonistic agents in cybersecurity systems. Eastern-European Journal of Enterprise Technologies, 2 (9 (98)), 56–66. https://doi.org/10.15587/1729-4061.2019.164730

Issue

Section

Information and controlling system