DEVELOPING A PROTOTYPE OF FIRE DETECTION AND AUTOMATIC EXTINGUISHER MOBILE ROBOT BASED ON CONVOLUTIONAL NEURAL NETWORK

The object of research is a prototype of a fire detection and automatic extinguishing mobile robot based on a convolutional neural network. Within the recent few decades, fires have been considered one of the most serious disasters occurring in many places around the world. Severe fire incidents damage buildings, infrastructure and property, cause loss of human life and impose heavy financial losses. Fire therefore poses a great threat to people, and it is extremely dangerous for firefighters. Fires can be caused by materials such as rubber and chemical products; other sources of fire are short circuits in electrical devices and faults in power circuits. Additionally, overheating and overloading problems can cause fire incidents. All of these causes lead to severe consequences when there is no immediate response. The advent of computer vision technology has played a significant role in human life, and the field of artificial intelligence has improved the efficiency and behavior of robots beyond expectations, enabling them to act intelligently. For this reason, this paper presents a mobile robot based on deep learning that detects a fire source, determines its coordinate position, then automatically moves toward the target and extinguishes the fire. Deep learning algorithms are efficient for object detection applications, and the CNN model, one of the most common deep learning algorithms, is used in this study for fire detection. Because the available dataset was small and building a model from scratch requires large effort, MobileNet V2, a CNN model that supports the transfer learning technique, was adopted. After training the model and testing it on 20 % of the dataset, the classification accuracy reached 98.01 %. The motion repeatability of the robot was implemented and tested, resulting in a mean error of 0.648 cm.


Introduction
Fire disaster is one of the most destructive issues in our life [1]. Fires have been gaining momentum and reaching alarming proportions [2], resulting in losses of human life, surrounding environments, buildings, infrastructure, commerce and businesses [3]. The wide spread and severity of fire incidents has caused a domino effect in buildings, which has become a prominent issue in recent years [3]. Fire disaster poses a great threat to people as well as firefighters when there is no immediate response in the initial stages of a fire [4]. Firefighting is also an extremely dangerous [5][6][7] and difficult [6,8] mission, especially when it comes to dealing with unknown environments. The goal of the paper is to develop a prototype of an automatic mobile robot that detects fire using deep learning for image classification of fire and non-fire classes, and then automatically moves toward the fire to extinguish it. The developed system is controlled via a main controller (a PC for image processing) and a secondary controller installed on the mobile robot that runs the motion control algorithm. The system includes a 1-megapixel webcam to capture the fire image, with the PC displaying the frames and the reference ground frame; a temperature sensor to check temperature increments; and a Bluetooth module as the means of wireless communication. An L298N driver module provides the DC motors with the required current for the mobile robot's motion. The robot works in a 3×3 square-meter area, which must be clear of obstacles for better operation.
Previous researchers have followed different methods and ideas to develop fire-extinguishing mobile robots; some of these works are summarized below. TECHNOLOGY AUDIT AND PRODUCTION RESERVES -№ 6/1(68), 2022 ISSN 2664-9969
The system developed in the paper [9] is called AFFMP. It uses a flame sensor for fire detection and follows a guiding path to avoid obstacles that hinder the robot's movement. Detection commands are sent to the microcontroller, which starts extinguishing the fire using a water pump.
An autonomous firefighting robot system is proposed in the paper [10]. It was developed to aid firefighting robots that encounter difficulties in detecting fire in an environment filled with smoke. The main objective is to enhance system stability in real-time processing. The proposed system is equipped with two stereo thermal infrared (IR) vision units for fire detection and a frequency-modulated continuous-wave (FMCW) unit for position determination.
A voice-controlled intelligent fire extinguisher robot is presented in the paper [11]. The system is controlled by human voice commands that include movement directions. It has a flame sensor to detect the fire flame, a camera to show fire video on an LCD screen, a buzzer for alarm warnings, a CO2 blower to extinguish the fire, a PIC16F77A microcontroller, and an IR sensor to avoid obstacles when barriers are found in front of the robot.
In the firefighting robot of the paper [12], a web server is used to handle the robot from a web page and to monitor different parameters. The system uses conventional detectors for fire detection, and the motion is controlled via an Android phone.
An autonomous firefighting robot with self-power management was developed in the paper [13].
A mobile robot with a sensor-fusion fire detection unit was developed in the paper [14].
The system «Q ROB» is proposed in the paper on the development of a firefighting robot [15]. It is designed to rescue firefighters from the dangerous situations they may encounter in narrow places. The robot also has an ultrasonic transducer to avoid obstacles. An operator monitors the robot's motion via the installed camera and controls the robot to extinguish the fire.
A DTMF-centered, remotely located fire-extinguishing robot is proposed in the paper [16]. The robot's movement is controlled via a mobile phone in response to a flame sensor. The user sends commands, known as DTMF tones, to the robot; the designed embedded system with a decoder converts the transmitted signal into binary bits, and each command is specialized for a particular task.
A firefighting robot based on deep learning for fire detection was developed in the paper [17]. The system combined the AlexNet model with ImageNet for fire detection; after training and testing the model, the classification accuracy reached 98.25 %. The system detects fire with a Pi camera and then sends the information to a Raspberry Pi microprocessor that controls the mobile robot's motion.
Deep learning (DL) is the most common subfield of machine learning and performs well compared with conventional machine learning algorithms [18,19]. It is used in this study because it deals well with huge amounts of data [18,20]. Additionally, it offers good performance and simplicity in processing and analyzing datasets, particularly in image classification tasks [21,22]. The output results are mapped to the input data, yielding a function from input to output [23,24]. It is considered the backbone and the basic field for dealing with a variety of machine learning tasks [25,26]. A convolutional neural network (CNN) is one of the deep learning neural networks used for processing image content and handling complex tasks through the convolution operation [27][28][29]. It is the most popular network compared with other DL neural networks [30,31]. What distinguishes a CNN are the filters it uses to generate features from a large amount of data automatically and to learn these features in order to classify the datasets [32,33]. Deep learning algorithms need a huge amount of training data to produce a well-performing model [34]. Building a model from scratch is therefore the traditional way, but it takes a long time to collect data from zero and often ends with an insufficient dataset [35]. The transfer learning technique overcomes this by relying on CNN models already trained on a huge amount of data: knowledge is transferred into the new desired model by passing the optimized, updated weights and biases to the learner during the training process [36]. Transfer learning has boosted efficiency and performance in recognition and computer vision fields [37,38].
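As a sketch of the transfer-learning workflow described above (the paper does not give its training script, so the head layers and hyperparameters here are assumptions), a pretrained MobileNet V2 base can be reused in Keras roughly as follows:

```python
import tensorflow as tf

IMG_SHAPE = (224, 224, 3)  # MobileNet V2 default input shape

# Pretrained convolutional base; weights=None keeps this sketch offline --
# in practice weights="imagenet" is what actually transfers learned features.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights=None)
base.trainable = False  # freeze the transferred weights

# New binary classification head for the two classes (fire / non-fire).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),  # dropout value taken from Table 2
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Only the small head is trained on the fire/non-fire images, which is why the process needs far less data and time than training from scratch.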

Materials and Methods
The object of research is a prototype of a fire detection and automatic extinguishing mobile robot based on a convolutional neural network.
MobileNet V2 is one of the convolutional neural network models; it was used in this study due to its good performance and simplicity. The basic building block of the model is the inverted residual, in which the input and output are narrow layers called bottleneck layers, with residual connections between these layers as in Fig. 1. The model has 32 convolutional filters and 19 residual bottleneck layers. The architecture differs from conventional models: each block first expands the low-dimensional input with few channels to a larger dimension with a high number of channels, and then a linear convolution projects the features back to a low dimension. The architecture strives to minimize the latency of a small-scale network so that computer vision applications run well on mobile devices. This is achieved by minimizing the required memory and the number of parameters while maintaining the same accuracy. The architecture of the model with different strides is shown in Fig. 2.
The architecture with stride 1 uses a residual connection, while the architecture with stride 2 does not. What distinguishes the module and enhances the efficiency of the model is that the architecture includes a depthwise separable convolution between the bottleneck layers rather than a full convolution operator: the convolution is split into two convolutions to reduce the number of multiplications, which are expensive relative to additions. Thus, depthwise separable convolution reduces the computation time and the number of parameters. According to [39], Table 1 compares the performance of MobileNet and MobileNet V2: the accuracy on ImageNet drops by only 1 % while the number of parameters falls significantly, from 29.3 million down to just 4.2 million. For these reasons, the MobileNet V2 model was used in this study.
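The parameter saving from replacing a full convolution with a depthwise separable one can be checked with a short back-of-the-envelope computation (the channel counts below are illustrative assumptions, not the actual MobileNet V2 layer sizes):

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

full = conv_params(3, 64, 128)                      # 73728 parameters
separable = depthwise_separable_params(3, 64, 128)  # 8768 parameters
print(full, separable, round(full / separable, 1))  # roughly 8x fewer
```

The same split also reduces multiply operations by about the same factor, which is where the latency benefit on mobile hardware comes from.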
The proposed method for the study is divided into three main stages: software design, mechanical design and electrical design.
2.1.1. Data Collection. The dataset images were prepared with an input shape of (224, 224, 3). The images were captured from different angles and directions and under different lighting conditions. They were also captured from different distances and at various sizes, in both indoor and outdoor environments, and, most importantly, in good quality. The images consist of two classes, named fire and non-fire. Capturing the images took 3 days. The reason for using captured images rather than ready-made data-science datasets is that the study is intended for the real world; the dataset therefore needed to be as close to the real world as possible to predict better results in real-time processing.
2.1.2. Preprocessing. The fire detection process is exposed to noise and influences from the external and internal environment, such as lighting effects, illumination, and the many gradations of color between fire colors and the colors of non-fire images. The aim of this step is to enhance and improve the images by reducing noise through filtering, which minimizes false positive predictions.
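The paper does not specify which filter was applied; as a minimal, assumed illustration of noise-reducing filtering, a 3×3 mean filter can be written in plain NumPy:

```python
import numpy as np

def mean_filter(img):
    """Smooth a 2D grayscale image with a 3x3 mean filter
    (edges handled by replicate padding)."""
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

# A single bright noise spike is averaged down by its 3x3 neighborhood.
noisy = np.array([[0, 0, 0],
                  [0, 9, 0],
                  [0, 0, 0]], dtype=float)
print(mean_filter(noisy)[1, 1])  # 1.0
```

In practice a Gaussian or median filter would be a common choice for this step; the mean filter is used here only to keep the sketch dependency-free.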
2.1.3. Feature Extraction. This step determines the pixels of interest within a fire image; the selected pixels are then extracted from the image.
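The paper does not give the rule used to select fire pixels; as an assumed illustration, a simple color threshold (fire regions tend to be bright and red-dominant) can mark candidate pixels:

```python
import numpy as np

def fire_pixel_mask(rgb):
    """Mark candidate fire pixels: red channel high and dominant over
    green, green above blue. Threshold values are illustrative assumptions."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 180) & (r > g + 40) & (g > b)

# One flame-like orange pixel and one blue background pixel.
img = np.array([[[255, 140, 30], [40, 60, 200]]], dtype=np.uint8)
print(fire_pixel_mask(img))  # [[ True False]]
```

A rule-based mask like this is only a crude heuristic; in this study the CNN itself learns the discriminative features, and such a mask would at most pre-select regions of interest.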
2.1.4. Image Segmentation. This process divides the image into multiple objects of the same interest and isolates the background from the object in the image, which is otherwise a source of difficulty. The aim of segmentation is to reduce the complexity of an image so that it becomes simpler and more understandable for analysis. Essentially, it assigns labels to the pixels of an image: pixels that share common properties belong to the same labeled group. Image segmentation is related to image classification: classification identifies the class the whole image belongs to, while segmentation identifies the class each element of the image belongs to.
Pandas is a Python open-source library used for analyzing and processing data.
NumPy is a Python open-source library for numerical computing that provides and manipulates multi-dimensional arrays.
Scikit-Learn is a Python open-source machine learning library built on top of libraries such as NumPy, SciPy and Matplotlib. It includes efficient tools for statistical modeling, such as clustering, classification and regression.
Seaborn is a Python visualization library used to produce good styles for plotting and attractive graphics.
Random is a standard Python module used to generate random numbers.
Argparse is a standard Python library used to parse command-line arguments automatically.
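To show how the last of these libraries might be used, a minimal sketch of a command-line interface for a detection script follows; the option names and defaults are assumptions, not the paper's actual script:

```python
import argparse

def build_parser():
    """Hypothetical command-line options for the fire-detection script."""
    parser = argparse.ArgumentParser(
        description="Fire detection (illustrative options only).")
    parser.add_argument("--camera", type=int, default=0,
                        help="webcam device index")
    parser.add_argument("--threshold", type=float, default=0.5,
                        help="classification probability threshold")
    return parser

# Parse an explicit argument list instead of sys.argv for demonstration.
args = build_parser().parse_args(["--threshold", "0.8"])
print(args.camera, args.threshold)  # 0 0.8
```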

2.2. Mechanical design.
The mobile robot structure was designed using computer-aided design, all parts were assembled, and Fig. 4 shows the overall model of the mobile robot.
Assumption 1. The load above the carrier board, such as the electronic boards and the water container, is assumed to be concentrated at the mid-point of the carrier board. Therefore, the center of mass of the robot is assumed to be located at the origin point on the carrier board; this mid-point is denoted as C.
Assumption 2. The mobile robot does not move sideways, i.e., it has no lateral velocity component (Ẋ_r = 0); its motion is a curved path about a point called the instantaneous center of curvature (ICC).
Assumption 3. The robot moves without slippage: for every full revolution (Δ∅ = 2π), each wheel travels a distance equal to its circumference (ΔX = 2πr, where r is the wheel radius). This also means that the four wheels are always in contact with the ground. In a pure rotation about the Instantaneous Center of Curvature (ICC), every point of the body has a velocity orthogonal to its radius from the ICC. Here R represents the radius of the ICC; ω is the angular velocity of the mobile robot body; V_C is the linear velocity at the center point; V_L is the net linear velocity of the left wheels; V_R is the net linear velocity of the right wheels. Certain motions can occur only when the model satisfies the kinematic constraint conditions. These kinematics describe how the mobile robot moves to a specific position and orientation for a given sequence of wheel rotation commands ∅̇_i, as well as the linear and angular velocities with which the robot moves.
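The no-slip assumption ties wheel rotation directly to distance travelled, which can be checked numerically (the 3 cm wheel radius below is an assumed value, not taken from the paper):

```python
import math

def wheel_distance(radius_m, delta_phi_rad):
    # No-slip rolling: arc length = wheel radius * rotation angle.
    return radius_m * delta_phi_rad

r = 0.03  # assumed wheel radius, m
one_rev = wheel_distance(r, 2 * math.pi)
print(round(one_rev, 4))  # one full revolution covers the circumference
```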
From equations (4), (5) and (6) for the linear velocities of the wheels: adding equations (4) and (6) gives a new formula for the velocity at the center point, while subtracting equation (6) from (4) gives the angular velocity of the robot. The kinematic equation of velocities in the local frame is then given as a vector of three components. Based on the kinematic constraint conditions, there is no lateral velocity component (V_Cy = 0). The velocity propagation method is used to obtain the Jacobian matrix by taking the partial derivatives of the three components of V with respect to ∅̇_R and ∅̇_L in equation (6). The Jacobian matrix of velocities is then expressed in matrix form, using the property in formula (2), which states that for orthogonal rotation matrices the inverse of a rotation matrix is equal to its transpose.
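Since equations (4)–(6) are not reproduced in this extract, the following sketch uses the standard differential-drive relations consistent with the text above: the center linear velocity is the mean of the wheel velocities, and the angular velocity comes from their difference over the track width (the half-track value used in the example is an assumption):

```python
def body_velocities(v_left, v_right, half_track):
    """Standard skid/differential-drive kinematics:
    adding the wheel velocities gives the center linear velocity V_C,
    subtracting them gives the body angular velocity omega."""
    v_c = (v_right + v_left) / 2.0                    # linear velocity at C
    omega = (v_right - v_left) / (2.0 * half_track)   # angular velocity
    return v_c, omega

# Equal wheel speeds -> pure translation; opposite speeds -> pure rotation.
print(body_velocities(0.2, 0.2, 0.1))   # (0.2, 0.0)
print(body_velocities(-0.2, 0.2, 0.1))  # (0.0, 2.0)
```

The two printed cases correspond to the limiting motions of the robot: straight-line driving (ω = 0) and turning in place (V_C = 0), the latter being what the skid-steering 90-degree rotation described below relies on.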
Equations (14), (15) and (16) represent the kinematic equations in terms of the angular and linear velocities in the body frame with respect to the global frame. The current position and orientation of the mobile robot in the global frame are obtained by integrating from 0 to t. The mobile robot is required to rotate 90 degrees using the skid-steering method, which depends on the difference in wheel velocities; for this orientation, the difference in the velocities must be 16 rad/s. This is achieved by adjusting the duty cycle of the DC motors, obtained from the pulse width for the time in equation (16).
2.2.2. Mobile robot simulation using Simscape Multibody. This tool is used to simulate models of 3D mechanical systems such as robots, manipulators and many others. It provides modelling with blocks that identify the mechanical parts, such as links, joints and bodies, and its aim is to supply the building blocks for the motion of mechanical systems. The procedure starts with the design of the mechanical system in CAD, followed by selecting appropriate properties for the model such as materials, masses and constraints. The Simscape Multibody Link plug-in then exports the file to XML, which is imported into Matlab using the function smimport, after which the simulation of the mechanical system is shown. Simscape was used in this study to ensure that all components of the robot, such as the wheels and the body frame, move with respect to the reference frame. Coordinate frames were attached to each part of the mobile robot before importing the Simscape model. Fig. 6 shows the building blocks for the mobile robot motion, and Fig. 7 shows the mobile robot in Matlab Simulink.
2.3. Electronic circuit design. Electronic components are an important part of designing mechatronics projects; for this project the electronic parts were chosen precisely according to their characteristics and quality. The circuit consists of a webcam, an Arduino Uno, DC motors, drivers, a Bluetooth module, a water pump, a power supply and an LM35 sensor. All electronic components were connected and simulated, and Fig. 8 represents the overall electronic circuit wiring. The evaluation metrics are discussed below.

Confusion matrix.
The confusion matrix is a table used to measure how well the classification model performs on a set of test data for which the true values are known. It is based on four main terms which, although their names may be confusing, make the confusion matrix easier to understand: true positive (TP), true negative (TN), false positive (FP) and false negative (FN), as shown in Fig. 9.
Accuracy is obtained as the ratio of the number of correctly classified samples to the total number of samples:

Accuracy = (TP + TN) / (TP + TN + FP + FN).
Recall, also called sensitivity or true positive rate (TPR), indicates how well the classifier predicted the positive class:

Recall = TP / (TP + FN).

Precision is an evaluation metric that determines the number of correctly predicted positive samples; it reflects the accuracy on the minority class and is obtained as the ratio of the number of correct positive predictions to the total number of positive predictions:

Precision = TP / (TP + FP).

F1-Score is an evaluation metric that combines the precision and recall evaluations of the model by taking their harmonic mean:

F1 = 2 · (Precision · Recall) / (Precision + Recall).

According to the confusion matrix in Fig. 9, the results of the evaluation metrics were obtained and are given in Table 3. The accuracy curves show that the train and test data are very close together. However, the curves are not perfectly smooth; they have an irregular shape even though methods were used to overcome overfitting and to optimize the accuracy curve, such as the dropout function, data augmentation techniques and the Adam optimizer. The number of epochs was selected once, resulting in this curve. The carrier board of the mobile robot was designed and then manufactured using a laser cutting machine. The board is made of MDF, a composite material, machined to the desired dimensions. Fig. 12 shows the vector file for the mobile robot design.
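The metric values in Table 3 follow directly from the confusion matrix counts reported later in the text (TP = 77, TN = 71, FP = 1, FN = 2); a direct computation reproduces the stated accuracy:

```python
TP, TN, FP, FN = 77, 71, 1, 2  # counts from the fire/non-fire test set

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)       # sensitivity / true positive rate
f1 = 2 * precision * recall / (precision + recall)

print(round(accuracy * 100, 2))  # 98.01 -- matches the reported accuracy
```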
The electronic circuit, including the ATmega328 module, the temperature sensor IC, a switch and the Bluetooth module, was correctly connected using wires to transmit electrical power and cables to transmit data to and from the microcontroller, and was then installed on the mobile robot as shown in Fig. 13, 14.
Fig. 15 shows the fire detection experiments: fire was detected at different times and in different environments, both in the presence and in the absence of light. The repeatability test used 2-dimensional coordinate positions and was implemented as follows: ten coordinate positions were selected in alignment on the floor, and the coordinates of each point were given as input to the inverse kinematics algorithm. The error was calculated as the absolute distance between the measured position the robot moved to and the expected position; the measured value was taken from the midpoint of the mobile robot to the reference position. After the test was carried out for all points, the mean error over the ten test points was 0.648 cm.
The proposed system was implemented for fire detection using a pretrained CNN model; the confusion matrix for the fire detection model is shown in Fig. 9. Transfer learning from the pretrained MobileNet V2 model was used to train the model. The test set contained 151 images in total: 79 for the fire class and 72 for the non-fire class. The confusion matrix shows that two images of the fire class were predicted as non-fire (two false negative predictions), and one image of the non-fire class was predicted as fire (one false positive prediction). The number of true positives was 77 and the number of true negatives was 71; the images were tested using a binary classifier, and the detection accuracy was 98.01 %. The study was designed as a prototype, so a small dataset was used to avoid wasting time and effort. The robot initially showed a shifted distance from the real position; therefore, a correction factor was used in the program to adjust the motion precision of the mobile robot and optimize its inverse kinematics. After adjusting the precision, the selected test points shown in Table 4 were used to determine the mean error of the robot. The results show that the robot performs with good repeatability, with a mean error of 0.648 cm.
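The repeatability figure is simply the mean of the absolute position errors over the test points; the computation can be reproduced as follows (the three sample point pairs below are hypothetical stand-ins, not the actual values from Table 4):

```python
import math

def mean_position_error(pairs):
    """Mean Euclidean distance between expected and measured poses (cm)."""
    errors = [math.hypot(mx - ex, my - ey)
              for (ex, ey), (mx, my) in pairs]
    return sum(errors) / len(errors)

# Hypothetical (expected, measured) robot-midpoint coordinates in cm.
samples = [((0.0, 0.0), (0.3, 0.4)),        # error 0.5
           ((100.0, 0.0), (100.0, 0.8)),    # error 0.8
           ((0.0, 100.0), (0.644, 100.0))]  # error 0.644
print(round(mean_position_error(samples), 3))  # 0.648
```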

Conclusions
This study presents a prototype of an automatic fire-detection mobile robot that uses a pretrained deep learning model together with an extinguisher unit for fighting the fire source. The results show that systems based on computer vision operate more efficiently than systems with conventional sensors. The pretrained CNN model yields better performance for fire detection systems: the training process takes less time, and detection in real time is fast. The CNN model was trained and its performance was evaluated. Coordinate frames were used to compute the transformation matrix of the robot's body frame relative to the reference frame of the camera. The system was implemented and achieved all objectives successfully. Several improvements would make the system better at performing complex tasks: an ultrasonic transducer should be used so that the robot can avoid rough terrain; a smoke sensor should be added to improve the robot's efficiency; increasing the size of the dataset is recommended to improve detection accuracy; and the base of the robot should be made of metal to withstand heat when extinguishing fire.

2.1.5. Training and Hyper-Parameters. The hyperparameters used when training the model on the dataset are shown in Table 2.

Dropout: 0.5.
2.1.6. Software Libraries. TensorFlow is used in the field of machine learning; it includes a thorough, scalable and flexible ecosystem of tools, libraries and community resources for building machine-learning applications. Keras is used by researchers for developing and evaluating deep learning neural network models. Matplotlib is a Python library used for 2D graphical representation and data science visualization.

Fig. 5. Schematic of the Mobile Robot in Coordinate Systems

Fig. 10 shows the validation and training accuracy curves of the model after training for 20 epochs.

Fig. 10. Training and validation accuracy of the CNN model

Fig. 11 shows the relation between the reduction of the loss function and the increase in epochs. The number of epochs was set to 20. Both train and validation loss curves are very close together; in this case, the learning rate is very low, leading to the reduction of the cost function.

Fig. 11. Training and validation loss of the CNN model

Table 1
Comparison of the MobileNet and MobileNet V2 models

Table 2
Hyperparameters during training

Table 3
Evaluation Metrics of the CNN Model

Table 4 shows the results of the mobile robot repeatability test.

Table 4
Inverse kinematics test results