Development of the method of features learning and training decision rules for the prediction of violation of service level agreement in a cloud-based environment

Authors

Vyacheslav Moskalenko, Alyona Moskalenko, Sergey Pimonenko, Artem Korobov

DOI:

https://doi.org/10.15587/1729-4061.2017.110073

Keywords:

datacenter, sparse encoding, neural gas, information criterion, machine learning, swarm algorithm

Abstract

We developed a learning algorithm for a multilayer feature extractor, based on the ideas and methods of neural gas and sparse coding, for the problem of predicting violations of service level agreement (SLA) conditions in a cloud-based environment. The effectiveness of the proposed extractor was compared with that of an autoencoder using the results of physical simulation. It is shown that the proposed extractor requires approximately 1.6 times fewer training samples than the autoencoder to construct decision rules that are error-free on both the training and test samples. This makes it possible to put the prediction mechanisms for controlling the corresponding cloud-based services into operation earlier.
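
For illustration, a minimal single-coefficient Sparse Coding Neural Gas dictionary learner in the spirit of [12, 13] is sketched below in NumPy. The function name, parameter values and annealing schedule are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def sparse_coding_neural_gas(X, n_basis=30, n_epochs=20,
                             lr0=0.5, lr_final=0.01,
                             lambda_final=0.01, seed=0):
    """Single-coefficient Sparse Coding Neural Gas (sketch after [12, 13]).

    Learns a dictionary W of unit-norm basis vectors; every basis vector is
    updated with a rank-weighted (neural gas) step of Oja's rule towards the
    current sample, so the best-matching vectors adapt the most.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    lambda0 = n_basis / 2.0

    # Random unit-norm initialisation of the dictionary rows.
    W = rng.normal(size=(n_basis, n_features))
    W /= np.linalg.norm(W, axis=1, keepdims=True)

    n_steps, t = n_epochs * n_samples, 0
    for _ in range(n_epochs):
        for i in rng.permutation(n_samples):
            x = X[i]
            # Exponentially annealed learning rate and neighbourhood range.
            lr = lr0 * (lr_final / lr0) ** (t / n_steps)
            lam = lambda0 * (lambda_final / lambda0) ** (t / n_steps)
            t += 1

            y = W @ x                             # projections on all basis vectors
            ranks = np.empty(n_basis, dtype=int)  # 0 = best match, 1 = next, ...
            ranks[np.argsort(-np.abs(y))] = np.arange(n_basis)

            # Rank-weighted Oja update followed by renormalisation.
            h = np.exp(-ranks / lam)[:, None]
            W += lr * h * y[:, None] * (x[None, :] - y[:, None] * W)
            W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W
```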

To construct the decision rules, it is proposed to transform the space of primary features using the computationally efficient comparison and exclusive OR (XOR) operations, building separate class containers in the radial basis of the binary space of secondary features. For binary feature encoding, a modification of a population-based search algorithm that maximizes Kullback's information criterion is proposed. The modification takes into account the compactness of images in the space of secondary features, which increases the gap between the class distributions and reduces the negative effect of overfitting.
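
A minimal sketch of this binary secondary-feature coding and container tuning is given below, under the following simplifying assumptions: the comparison operations build the binary code against a tolerance field around the class mean, XOR gives the Hamming distance to a majority-vote reference code, and a plain random-population search stands in for the paper's swarm algorithm. The exact form of the Kullback-style criterion and of the compactness term is an illustrative assumption.

```python
import numpy as np

def binarize(X, center, delta):
    """Comparison-based coding: bit j is 1 when feature j lies inside the
    tolerance field [center_j - delta_j, center_j + delta_j]."""
    return ((X >= center - delta) & (X <= center + delta)).astype(np.uint8)

def kullback_like(codes_pos, codes_neg, ref, radius, eps=1e-6):
    """Illustrative normalized Kullback-style contrast between the correct
    classification rates and the error rates of one class container."""
    d_pos = np.count_nonzero(codes_pos ^ ref, axis=1)  # XOR -> Hamming distance
    d_neg = np.count_nonzero(codes_neg ^ ref, axis=1)
    d1 = np.mean(d_pos <= radius)                      # true positive rate
    d2 = 1.0 - np.mean(d_neg <= radius)                # true negative rate
    alpha, beta = 1.0 - d2, 1.0 - d1                   # error rates
    e = 0.5 * (d1 + d2 - alpha - beta) * np.log2((d1 + d2 + eps) / (alpha + beta + eps))
    # Assumed compactness penalty: prefer containers whose own images
    # sit close to the reference code (weight 0.1 chosen arbitrarily).
    return e - 0.1 * np.mean(d_pos) / codes_pos.shape[1]

def tune_container(X_pos, X_neg, n_candidates=200, seed=0):
    """Random-population search (stand-in for the swarm algorithm) over the
    tolerance field and the container radius of one class."""
    rng = np.random.default_rng(seed)
    base, spread = X_pos.mean(axis=0), X_pos.std(axis=0) + 1e-9
    best_score, best_params = -np.inf, None
    for _ in range(n_candidates):
        delta = spread * rng.uniform(0.1, 3.0, size=base.shape)
        codes_pos = binarize(X_pos, base, delta)
        codes_neg = binarize(X_neg, base, delta)
        ref = (codes_pos.mean(axis=0) > 0.5).astype(np.uint8)  # majority-vote reference
        for radius in range(1, X_pos.shape[1]):
            score = kullback_like(codes_pos, codes_neg, ref, radius)
            if score > best_score:
                best_score, best_params = score, (delta, radius)
    return best_score, best_params
```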

The dependence of the decision accuracy on the training and test samples of the SLA violation prediction system on the parameters of the feature extractor and the classifier was explored, and an extractor configuration acceptable in terms of accuracy and complexity was selected. In this configuration, two time windows, which overlap by 50 % in time and each read 50 features, are fed to the input of the extractor. The first coding layer of the extractor contains 30 basis vectors and the second layer contains 20. The intra-layer pooling and non-linearity are formed by concatenating the sparse codes of each window and by doubling the length of the resulting code in order to separate its positive and negative components and transform it into a vector of non-negative features.
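
As an illustration of this configuration, a forward pass of the two-layer extractor is sketched below. The dictionaries W1 (30 x 50) and W2 (20 x 120) are assumed to be already learned, for example with the Sparse Coding Neural Gas sketch above; the matching-pursuit encoder, the sparsity level and all names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def split_pos_neg(code):
    """Doubling the code: separates positive and negative components so the
    result is a non-negative feature vector (the extractor's non-linearity)."""
    return np.concatenate([np.maximum(code, 0.0), np.maximum(-code, 0.0)])

def sparse_encode(x, W, n_nonzero=5):
    """Matching-pursuit-style sparse code of a vector against dictionary W
    (rows of W are assumed to be unit-norm basis vectors)."""
    residual = np.asarray(x, dtype=float).copy()
    code = np.zeros(W.shape[0])
    for _ in range(n_nonzero):
        proj = W @ residual
        j = int(np.argmax(np.abs(proj)))
        code[j] += proj[j]
        residual -= proj[j] * W[j]
    return code

def extract_features(series, W1, W2, window=50, overlap=0.5):
    """Windows of 50 features with 50 % overlap -> layer-1 sparse codes,
    pooled by concatenation -> layer-2 sparse code -> non-negative features."""
    step = int(window * (1.0 - overlap))        # 25 samples for 50 % overlap
    starts = range(0, len(series) - window + 1, step)
    layer1 = [split_pos_neg(sparse_encode(series[s:s + window], W1)) for s in starts]
    pooled = np.concatenate(layer1)             # intra-layer pooling by concatenation
    return split_pos_neg(sparse_encode(pooled, W2))
```

For a telemetry fragment of 75 readings (two overlapping 50-feature windows), this pass yields a 40-dimensional non-negative secondary feature vector, which can then be binarized by the coding scheme described above.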

Author Biographies

Vyacheslav Moskalenko, Sumy State University, Rimskoho-Korsakova str., 2, Sumy, Ukraine, 40007

PhD, Associate Professor

Department of Computer Science

Alyona Moskalenko, Sumy State University, Rimskoho-Korsakova str., 2, Sumy, Ukraine, 40007

Assistant

Department of Computer Science

Sergey Pimonenko, Sumy State University, Rimskoho-Korsakova str., 2, Sumy, Ukraine, 40007

Postgraduate student

Department of Computer Science

Artem Korobov, Sumy State University, Rimskoho-Korsakova str., 2, Sumy, Ukraine, 40007

Postgraduate student

Department of Computer Science

References

  1. Hemmat, R. A., Hafid, A. (2016). SLA Violation Prediction In Cloud Computing: A Machine Learning Perspective. arXiv. Available at: https://arxiv.org/pdf/1611.10338.pdf
  2. Minarolli, D., Mazrekaj, A., Freisleben, B. (2017). Tackling uncertainty in long-term predictions for host overload and underload detection in cloud computing. Journal of Cloud Computing, 6 (1). doi: 10.1186/s13677-017-0074-3
  3. Wajahat, M., Gandhi, A., Karve, A., Kochut, A. (2016). Using machine learning for black-box autoscaling. 2016 Seventh International Green and Sustainable Computing Conference (IGSC). doi: 10.1109/igcc.2016.7892598
  4. Meskini, A., Taher, Y., El gammal, A., Finance, B., Slimani, Y. (2016). Proactive Learning from SLA Violation in Cloud Service based Application. Proceedings of the 6th International Conference on Cloud Computing and Services Science. doi: 10.5220/0005807801860193
  5. Ashraf, A. (2016). Automatic Cloud Resource Scaling Algorithm based on Long Short-Term Memory Recurrent Neural Network. International Journal of Advanced Computer Science and Applications, 7 (12). doi: 10.14569/ijacsa.2016.071236
  6. Gupta, L., Samaka, M., Jain, R., Erbad, A., Bhamare, D., Chan, H. A. (2017). Fault and Performance Management in Multi-Cloud Based NFV using Shallow and Deep Predictive Structures. 7th Workshop on Industrial Internet of Things Communication Networks at The 26th International Conference on Computer Communications and Networks (ICCCN 2017). Vancouver.
  7. Tarsa, S. J., Kumar, A. P., Kung, H. T. (2014). Workload prediction for adaptive power scaling using deep learning. 2014 IEEE International Conference on IC Design & Technology. doi: 10.1109/icicdt.2014.6838580
  8. Flenner, J., Hunter, B. A Deep Non-Negative Matrix Factorization Neural Network. Available at: http://www1.cmc.edu/pages/faculty/BHunter/papers/deepNMF.pdf
  9. Li, Y., Hu, H., Wen, Y., Zhang, J. (2016). Learning-based power prediction for data centre operations via deep neural networks. Proceedings of the 5th International Workshop on Energy Efficient Data Centres – E2DC ’16. doi: 10.1145/2940679.2940685
  10. Zhao, Z., Zhang, X., Fang, Y. (2015). Stacked Multilayer Self-Organizing Map for Background Modeling. IEEE Transactions on Image Processing, 24 (9), 2841–2850. doi: 10.1109/tip.2015.2427519
  11. Chan, T.-H., Jia, K., Gao, S., Lu, J. et al. (2014). PCANet: A Simple Deep Learning Baseline for Image Classification. arXiv. Available at: https://arxiv.org/pdf/1404.3606.pdf
  12. Labusch, K., Barth, E., Martinetz, T. (2008). Learning Data Representations with Sparse Coding Neural Gas. Proceedings of the European Symposium on Artificial Neural Networks. Bruges, 233–238.
  13. Labusch, K., Barth, E., Martinetz, T. (2009). Sparse Coding Neural Gas: Learning of overcomplete data representations. Neurocomputing, 72 (7-9), 1547–1555. doi: 10.1016/j.neucom.2008.11.027
  14. Moskalenko, V., Pimonenko, S. (2016). Optimizing the parameters of functioning of the system of management of data center it infrastructure. Eastern-European Journal of Enterprise Technologies, 5 (2 (83)), 21–29. doi: 10.15587/1729-4061.2016.79231
  15. Dovbysh, A. S., Moskalenko, V. V., Rizhova, A. S. (2016). Information-Extreme Method for Classification of Observations with Categorical Attributes. Cybernetics and Systems Analysis, 52 (2), 224–231. doi: 10.1007/s10559-016-9818-1
  16. Mosa, A., Paton, N. W. (2016). Optimizing virtual machine placement for energy and SLA in clouds using utility functions. Journal of Cloud Computing, 5 (1). doi: 10.1186/s13677-016-0067-7

Published

2017-10-30

How to Cite

Moskalenko, V., Moskalenko, A., Pimonenko, S., & Korobov, A. (2017). Development of the method of features learning and training decision rules for the prediction of violation of service level agreement in a cloud-based environment. Eastern-European Journal of Enterprise Technologies, 5 (2 (89)), 26–33. https://doi.org/10.15587/1729-4061.2017.110073