Constructing a method for the conversion of numerical data in order to train the deep neural networks
DOI: https://doi.org/10.15587/1729-4061.2018.145586
Keywords: convolutional neural networks, deep learning, data conversion, bitmap images
Abstract
This paper analyzes known types of deep neural networks, methods for their supervised training, training networks to suppress noise, and methods for encoding data as images. It is shown that deep neural networks are well suited to solving classification problems effectively, in particular in medical and technical diagnosis. Among deep networks, convolutional neural networks are promising because of their simple structure and use of shared weights, which allows a network to detect similar features in different parts of an image. For some diagnosis tasks, however, training a convolutional network in the usual way may prove insufficient, so it is advisable to modify the training method with data encoding and noise-suppression training in order to obtain a better result.
We propose a method for training a convolutional neural network on numerical data converted to bitmap images. The method improves the accuracy of the network on classification problems and makes it possible to apply convolutional neural networks, and their advantages in image processing, to tabular input data. In addition, the proposed method requires no changes to the structure of the network.
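The paper does not specify a particular network architecture. Purely as an illustration of the point that a standard convolutional classifier can consume the single-channel bitmap images produced from tabular rows without any structural changes, a minimal Keras sketch might look as follows; the image size, layer sizes, and class count are assumptions made for the example, not values from the paper.

```python
# Illustrative only: a small, standard CNN for the single-channel bitmaps
# obtained from tabular rows. All sizes (32x32 images, filter counts,
# 4 classes) are assumptions for this sketch.
import tensorflow as tf
from tensorflow.keras import layers

IMG_H, IMG_W, NUM_CLASSES = 32, 32, 4   # assumed dimensions and class count

model = tf.keras.Sequential([
    layers.Input(shape=(IMG_H, IMG_W, 1)),              # one grayscale channel
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),     # one output per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```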
The method consists of four stages: normalization using min-max scaling, conversion of the data into two-dimensional images using float or thermometric encoding, generation of additional images by distorting the input data, and pre-training of the deep network.
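A minimal sketch of the first three stages in NumPy, under the following assumptions: min-max scaling is applied per feature, the thermometric encoding maps each normalized feature to a column of binary pixels, and the distortion is small Gaussian noise added to the normalized vector. The image height and noise level are illustrative choices, and the float encoding mentioned above is not shown.

```python
import numpy as np

def min_max_normalize(X):
    """Stage 1: scale each feature (column) to [0, 1]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / np.where(x_max > x_min, x_max - x_min, 1.0)

def thermometer_image(row, height=32):
    """Stage 2 (thermometric encoding, as assumed here): each feature becomes
    one image column in which the lowest round(v * height) pixels are set to 1."""
    img = np.zeros((height, row.size), dtype=np.float32)
    levels = np.round(row * height).astype(int)
    for col, lvl in enumerate(levels):
        img[height - lvl:, col] = 1.0    # fill the column from the bottom up
    return img

def distorted_copies(row, n_copies=5, noise_std=0.02, rng=None):
    """Stage 3: generate additional training images by adding small Gaussian
    distortions to the normalized input vector (noise level is illustrative)."""
    rng = rng or np.random.default_rng(0)
    noisy = row + rng.normal(0.0, noise_std, size=(n_copies, row.size))
    return [thermometer_image(np.clip(r, 0.0, 1.0)) for r in noisy]

# Usage on a toy table of 3 samples x 6 features (values are illustrative):
X = np.array([[1.0, 5.0, 0.2, 7.0, 3.3, 9.1],
              [2.0, 4.0, 0.4, 6.5, 3.0, 8.7],
              [1.5, 4.5, 0.3, 6.8, 3.1, 8.9]])
Xn = min_max_normalize(X)
images = [thermometer_image(r) for r in Xn]   # one bitmap per table row
augmented = distorted_copies(Xn[0])           # extra, noise-distorted bitmaps
```

Each table row thus becomes a height-by-n_features bitmap that a convolutional network can process like an ordinary grayscale image.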
The constructed method was implemented in software and evaluated on a number of practical tasks. Results on technical and medical diagnosis tasks show that the method is effective when the number of target classes and training instances is small. The method may prove useful for diagnosing a defect at the early stages of its manifestation, when the volume of training data is limited.
Copyright (c) 2018 Mykhailo Pryshliak, Sergey Subbotin, Andrii Oliinyk
This work is licensed under a Creative Commons Attribution 4.0 International License.