Breed recognition and estimation of live weight of cattle based on methods of machine learning and computer vision
DOI: https://doi.org/10.15587/1729-4061.2021.247648
Keywords: image processing, convolutional network, multilayer perceptron, stereopsis, predictive model
Abstract
A method of measuring cattle parameters using neural network methods of image processing is proposed. To this end, two neural network models were used: a convolutional artificial neural network and a multilayer perceptron. The former recognizes a cow in a photograph and identifies its breed, after which the animal's body dimensions are determined using the stereopsis method. The perceptron estimates the cow's live weight from the breed and size information. The Mask R-CNN (Mask Region-based Convolutional Neural Network) architecture was chosen as the convolutional network.
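To illustrate the second model, the sketch below shows a minimal multilayer perceptron of this kind in Keras, mapping a one-hot breed code plus body measurements to an estimated live weight; the layer sizes, the set of measurements and the training data are assumptions rather than the configuration reported here.

```python
# A minimal sketch, not the authors' exact architecture: an MLP that maps
# a one-hot breed code plus body measurements to an estimated live weight.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_BREEDS = 4        # Ayrshire, Holstein, Jersey, Krasnaya Stepnaya
N_MEASURES = 3      # e.g. wither height, body length, chest girth (assumed)

model = keras.Sequential([
    keras.Input(shape=(N_BREEDS + N_MEASURES,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                      # estimated live weight, kg
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Hypothetical training data: rows of [breed one-hot | measurements in cm],
# targets are measured live weights in kg.
X = np.random.rand(200, N_BREEDS + N_MEASURES).astype("float32")
y = (400 + 300 * np.random.rand(200, 1)).astype("float32")
model.fit(X, y, epochs=10, batch_size=16, verbose=0)
```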
To refine the information on the animals' physical parameters, a 3D camera (Intel RealSense D435i) was used. Images of cows taken from different angles were used to determine their body parameters by the photogrammetric method.
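A minimal sketch of how such a depth camera can yield a linear body dimension is given below, using the pyrealsense2 SDK: two pixels on the animal are deprojected to 3D points and the distance between them is taken. The landmark pixel coordinates are hypothetical placeholders.

```python
# A minimal sketch, assuming a connected RealSense D435i: deproject two
# depth pixels to 3D points and measure the distance between them.
import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipe.start(cfg)
try:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    intr = depth.profile.as_video_stream_profile().get_intrinsics()

    def to_3d(px, py):
        # Deproject a depth pixel to a 3D point in metres.
        return np.array(rs.rs2_deproject_pixel_to_point(
            intr, [px, py], depth.get_distance(px, py)))

    withers, tail_head = to_3d(180, 200), to_3d(460, 210)  # assumed landmarks
    print(f"estimated body length: {np.linalg.norm(withers - tail_head):.2f} m")
finally:
    pipe.stop()
```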
The cow body dimensions were determined by analyzing images of the animal taken with synchronized cameras from different angles. First, the cow was detected in the photograph and its breed was determined using the Mask R-CNN convolutional neural network. Next, the animal's body dimensions were computed using the stereopsis method. The resulting breed and size data were fed to a predictive model to estimate the animal's weight.
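The stereopsis step can be illustrated with the following sketch: matching landmark pixels from two calibrated, synchronized views are triangulated with OpenCV, and the distance between the reconstructed 3D points gives a linear body dimension. The projection matrices and pixel coordinates below are placeholder values, not the calibration used in this work.

```python
# A minimal stereopsis sketch with assumed calibration: triangulate two
# body landmarks (e.g. withers and tail head) from a synchronized pair.
import cv2
import numpy as np

# Assumed identical cameras (K) with a 0.2 m horizontal baseline.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # left camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # right camera

# Corresponding pixels of the two landmarks as 2xN arrays (placeholders):
# row 0 = x coordinates, row 1 = y coordinates.
pts_left = np.array([[300.0, 520.0], [240.0, 250.0]])
pts_right = np.array([[260.0, 480.0], [240.0, 250.0]])

pts4d = cv2.triangulatePoints(P1, P2, pts_left, pts_right)  # 4xN homogeneous
pts3d = (pts4d[:3] / pts4d[3]).T                            # Nx3 metric points
print(f"landmark distance: {np.linalg.norm(pts3d[0] - pts3d[1]):.2f} m")
```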
In the simulation, the Ayrshire, Holstein, Jersey and Krasnaya Stepnaya breeds were considered as the cow breeds to be recognized. Using a pre-trained network and subsequently fine-tuning it with the SGD algorithm on an Nvidia GeForce 2080 graphics card made it possible to speed up training significantly compared to training on a CPU.
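A minimal sketch of such fine-tuning with the open-source matterport Mask_RCNN package (https://github.com/matterport/Mask_RCNN), which trains with SGD with momentum by default, is shown below; the configuration values, the weights file path and the dataset handling are assumptions.

```python
# A minimal fine-tuning sketch with the matterport Mask_RCNN package.
from mrcnn.config import Config
from mrcnn import model as modellib

class CowConfig(Config):
    NAME = "cows"
    NUM_CLASSES = 1 + 4        # background + the four breeds
    GPU_COUNT = 1
    IMAGES_PER_GPU = 2
    STEPS_PER_EPOCH = 100      # assumed value
    LEARNING_RATE = 0.001
    LEARNING_MOMENTUM = 0.9    # the package's SGD momentum setting

config = CowConfig()
model = modellib.MaskRCNN(mode="training", config=config, model_dir="./logs")

# Start from COCO-pretrained weights (assumed file path); skip the layers
# whose shapes depend on NUM_CLASSES.
model.load_weights("mask_rcnn_coco.h5", by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# dataset_train / dataset_val would be mrcnn.utils.Dataset subclasses holding
# the annotated cow images (not shown):
# model.train(dataset_train, dataset_val,
#             learning_rate=config.LEARNING_RATE,
#             epochs=30, layers="heads")
```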
The results obtained confirm the effectiveness of the proposed method in solving practical problems.
License
Copyright (c) 2021 Oleksandr Bezsonov, Oleh Lebediev, Valentyn Lebediev, Yuriy Megel, Dmytro Prochukhan, Oleg Rudenko
This work is licensed under a Creative Commons Attribution 4.0 International License.