Solving Mixed Volterra-Fredholm Integral Equations (MVFIEs) by Designing a Neural Network

In this paper, we focus on designing a feed-forward neural network (FFNN) for solving Mixed Volterra-Fredholm Integral Equations (MVFIEs) of the second kind in two dimensions. In our method, we present a multi-layer model consisting of one hidden layer with five hidden units (neurons) and one linear output unit. The log-sigmoid transfer function is used as the activation of each hidden unit, and the Levenberg-Marquardt algorithm is used for training. A comparison between the results of the numerical experiments and the analytic solutions of some examples has been carried out in order to demonstrate the efficiency and accuracy of our method.


Introduction:
Integral equations form one of the main branches of modern mathematics and appear in various applied areas including mechanics, physics, and engineering. Here, we are concerned with the numerical solution of MVFIEs of the form

u(x, t) = f(x, t) + ∫₀ᵗ ∫_M F(x, t, y, s, u(y, s)) dy ds,   (x, t) ∈ I = M × [0, T] … (1)

where u(x, t) is the unknown function to be found, f(x, t) and F(x, t, y, s, u(y, s)) are given analytic real-valued functions defined on I = M × [0, T], and M is a compact subset of Rⁿ (n = 1, 2, 3) with a convenient norm ‖·‖. In the literature, different numerical methods for solving two-dimensional MVFIEs have been reported. In (1), Babolian et al. applied block-pulse functions and their operational matrix to solve MVFIEs in two-dimensional spaces. Using hybrid Legendre functions, Nemati et al. introduced a numerical method for MVFIEs (2). Shahooth presented a numerical solution for MVFIEs of the second kind using the Bernstein polynomial method (3); this method obtains a system of algebraic equations from the integral equation. In 2015, Ahmadabadi used a meshless method for numerically solving MVFIEs of Urysohn type on nonrectangular regions (4).

Ibrahim et al. used the New Iterative Method for solving MVFIEs (5). This work is structured as follows: in Section 2 we introduce the definition of artificial neural networks (ANNs); the structure of a neural network is presented in Section 3; Section 4 is concerned with the Levenberg-Marquardt algorithm (LM); Section 5 demonstrates our proposed approach for calculating the approximate solution of MVFIEs; and in Section 6 the proposed method is applied to some examples to clarify its efficiency and accuracy.

Artificial Neural Network (ANN):
An ANN is formed from many artificial neurons joined together according to a particular network architecture. The goal of the neural network is to transform the inputs into significant outputs. In other words, an ANN is an interconnected system of nodes ('equivalent to neurons of a human brain') linked by weighted arrows ('equivalent to synapses between neurons'). The output of an ANN is altered by changing the arrows' weights. The result of the network for the data fed to the input layer is displayed by the output layer; the dependent variables are estimated from the input nodes, which represent the independent or predictor variables.
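The weighted-arrow picture above can be sketched as a single artificial neuron: a weighted sum of inputs plus a bias, passed through a transfer function. This is a minimal illustration only; the input and weight values are arbitrary.

```python
import numpy as np

def logsig(z):
    """Log-sigmoid transfer function, 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through the activation (transfer) function."""
    return logsig(np.dot(w, x) + b)

x = np.array([0.5, -1.0])   # inputs (predictor variables)
w = np.array([0.8, 0.3])    # connection weights (the "arrows")
b = 0.1                     # bias
y = neuron(x, w, b)         # output in (0, 1)
```

Changing `w` or `b` changes the neuron's output, which is exactly how a training algorithm steers the network.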
Hristev in (6) characterizes an ANN by:
1) its pattern of connections between the neurons (called its architecture);
2) the method by which the connection weights are calculated (the training or learning algorithm);
3) its activation function.

Neural Network Structure (6):
The structure or topology of an artificial neural network means the way the neural computational cells are organized in the network, that is, how the nodes are connected and how information is transmitted through the network. The architecture can be classified in terms of three aspects: the number of levels or layers, the connection pattern, and the information flow; see Fig. (1).

Levenberg-Marquardt Algorithm (LM):
The Levenberg-Marquardt algorithm is a variation of Newton's method designed for minimizing functions that are sums of squares of other nonlinear functions. This makes it very well suited to neural network training, where the performance index is the mean squared error.
To describe the LMA performance index, assume that F(w) is a sum-of-squares function,

F(w) = Σ_{p=1}^{P} Σ_{k=1}^{K} (d_{kp} − o_{kp})² … (2)

where w = [w₁ w₂ ⋯ w_N]ᵀ includes all of the network's weights; d_{kp} and o_{kp} are the desired and actual values of the k-th output for the p-th pattern; and K, P and N are the number of network outputs, the number of patterns and the number of weights, respectively. Equation (2) is minimized by the iteration

w_{k+1} = w_k − (Jᵀ J + μ I)⁻¹ Jᵀ e … (3)

where I, μ and J are the identity matrix, the learning parameter and the Jacobian of the output errors of the neural network with respect to the N weights, respectively, and e is the vector of output errors. At each iteration the μ parameter is automatically adjusted in order to secure convergence; the calculation of the Jacobian matrix J and the inversion of the N × N square matrix Jᵀ J + μ I at each iteration step are the main requirements of the LMA.
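The LM update above can be sketched numerically. The residual function, its Jacobian and the fixed value of μ below are illustrative choices only (the method described in the text adjusts μ automatically at each iteration).

```python
import numpy as np

def lm_step(w, e, J, mu):
    """One Levenberg-Marquardt update:
    w_new = w - (J^T J + mu*I)^(-1) J^T e."""
    A = J.T @ J + mu * np.eye(w.size)
    return w - np.linalg.solve(A, J.T @ e)

# Toy sum-of-squares problem: residuals e(w) = [w0 - 1, 2*(w1 - 2)]
# with constant Jacobian; the minimizer is w = (1, 2).
def residual(w):
    return np.array([w[0] - 1.0, 2.0 * (w[1] - 2.0)])

J = np.array([[1.0, 0.0],
              [0.0, 2.0]])

w = np.zeros(2)
for _ in range(20):
    w = lm_step(w, residual(w), J, mu=1e-3)
```

For small μ the step approaches the Gauss-Newton step; for large μ it approaches a small gradient-descent step, which is the damping idea behind the automatic adjustment of μ.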

Description of the Method:
In the current section, we demonstrate how our approach calculates the approximate solution of Eq. (1). In our approach, the trial solution u_t(x, t, p) employs a feed-forward neural network (FFNN), and the parameters p correspond to the weights and biases of the neural architecture. We choose the form of the trial function as

u_t(x, t, p) = N(x, t, p) … (7)

where N(x, t, p) is a single-output FFNN with parameters p and two input units fed with the input vector (x, t). The weights and biases of the ANN are adjusted in order to deal with the minimization problem. The quantity minimized is the sum of the squared residuals of Eq. (1) at a set of training (collocation) points,

E(p) = Σᵢ [u_t(xᵢ, tᵢ, p) − f(xᵢ, tᵢ) − ∫₀^{tᵢ} ∫_M F(xᵢ, tᵢ, y, s, u_t(y, s, p)) dy ds]² … (8)
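A minimal sketch of the trial solution and the error functional follows, assuming M = [0, 1], a crude rectangle-rule quadrature for the double integral, and placeholder functions `f` and `K` that merely stand in for the given functions of Eq. (1); all of these are illustrative assumptions, not the paper's actual test problems.

```python
import numpy as np

def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))

def ffnn(x, t, p):
    """Single-output FFNN N(x, t, p): 2 inputs, 5 log-sigmoid
    hidden units, 1 linear output. p packs all 21 parameters."""
    W1 = p[:10].reshape(5, 2)   # hidden-layer weights (5 x 2)
    b1 = p[10:15]               # hidden-layer biases
    W2 = p[15:20]               # output weights
    b2 = p[20]                  # output bias
    h = logsig(W1 @ np.array([x, t]) + b1)
    return W2 @ h + b2

# Hypothetical stand-ins for the given functions of Eq. (1):
def f(x, t):
    return x * t

def K(x, t, y, s, u):
    return 0.1 * u

def error(p, grid, quad_y, quad_s):
    """Sum of squared residuals of Eq. (1) at collocation points,
    with the double integral replaced by a rectangle rule over
    M = [0, 1] (in y) and [0, t] (in s)."""
    E = 0.0
    for x, t in grid:
        ss = [s for s in quad_s if s <= t]
        integral = 0.0
        for y in quad_y:
            for s in ss:
                integral += K(x, t, y, s, ffnn(y, s, p))
        integral *= (1.0 / len(quad_y)) * (t / max(len(ss), 1))
        r = ffnn(x, t, p) - f(x, t) - integral
        E += r * r
    return E

E0 = error(np.zeros(21), [(0.5, 0.5)], [0.25, 0.75], [0.25, 0.75])
```

Minimizing `error` over `p` (for instance with an LM-type iteration) yields the network parameters of the trial solution.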

Numerical Examples:
In this section we report numerical results. We use a multi-layer FFNN having one hidden layer with 5 hidden units (neurons) and one linear output unit. The activation of each hidden unit is the log-sigmoid function, and the Levenberg-Marquardt algorithm is used for training. In each example the mean squared error (MSE) is used to test the accuracy of the obtained solutions.
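The accuracy measure used in the tables can be sketched directly; the sample values below are arbitrary and for illustration only.

```python
import numpy as np

def mse(analytic, neural):
    """Mean squared error between the analytic solution values
    and the network outputs at the test points."""
    analytic = np.asarray(analytic, dtype=float)
    neural = np.asarray(neural, dtype=float)
    return np.mean((analytic - neural) ** 2)

# Illustrative values only:
err = mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```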

Example 6.1:
Consider the following MVFIE … (9), whose analytic solution is known. We applied the present method to solve Eq. (9). Table (1) shows the analytic solution, the neural result and its error. Table (2) gives the weights, biases, epochs, time and performance of the designed network.
Example 6.2:
Using the same method, we solve equation (10). Table (3) shows the analytic solution, the neural result and its error. Table (4) gives the weights, biases, epochs, time and performance of the designed network.

Example 6.3:
Using our method, we solve equation (11). Table (5) shows the analytic solution, the neural result and its error. Table (6) gives the weights, biases, epochs, time and performance of the designed network.

Conclusion:
In this work, a feed-forward neural network (FFNN) has been successfully designed for solving MVFIEs. This design combines the fast and efficient Levenberg-Marquardt algorithm with one hidden layer of 5 neurons and one output layer. From the numerical examples, it can be seen that the designed FFNN method is accurate and efficient for estimating the numerical solution of these equations, because the errors decrease to smaller values compared with the solutions of the same examples obtained by other methods (8, 9).