Optimized Artificial Neural Network Models for Time Series

Abstract: Artificial neural networks (ANN) are powerful and effective tools in time-series applications. The first aim of this paper is to identify the better and more efficient ANN models (Back Propagation, Radial Basis Function neural networks (RBF), and Recurrent neural networks) for modeling linear and nonlinear time-series behavior. The second aim is to find accurate estimators, since convergence sometimes gets stuck in local minima, one of the problems that can bias tests of the robustness of ANN in time series forecasting. To determine the best, or optimal, ANN models, the forecast skill (SS) was employed to measure the efficiency of the ANN models' performance. The root mean square error and the mean absolute percentage error were also used to measure the accuracy of estimation for the methods used. The main result obtained in this paper is that the optimal neural networks for solving time series, whether linear, semilinear, or nonlinear, are Backpropagation (BP) and Recurrent neural networks (RNN). The results also proved the inefficiency and inaccuracy (failure) of RBF in solving nonlinear time series; RBF shows good efficiency only in the case of linear or semilinear time series, where it overcomes the problem of local minima. The results suggest improvements to modern methods for time series forecasting.


Introduction:
Time series forecasting is a powerful computational method for predicting the future outcomes of a system based on how the system behaved previously, and it has a wide range of applications in many scientific fields. Neural networks and machine learning approaches can predict a wide range of events, of both natural and human origin. Artificial neural networks (ANN) have received much attention in recent years, and the question of which type of neural network predicts best has yet to be resolved. An ANN contains three layers: an input layer, an output layer, and a hidden layer between them; the layers are connected through weighted links between neurons.
In statistical forecasting, the inputs for neural networks are typically the past observations of the data series, and the output is the future value. Lag structures are commonly used to manage the number of input units of the corresponding neural network. The artificial neural cell consists of four parts: synapses, a summation function, an activation function, and the axon path. When estimating neural network models, it is possible to end up at a local optimum or not to converge to an optimum at all, so techniques must be used to avoid these problems 1.
Artificial neural networks (ANN) are now widely accepted as a technology offering an alternative way to solve complex and ill-defined problems due to their strong nonlinear mapping ability. The backpropagation neural network, the radial basis function neural network (RBFNN), the general regression neural network (GRNN), and the recurrent neural network (RNN) are some of the most common artificial neural network techniques.
This study compares the accuracy of three different approaches to time series prediction: the Backpropagation network (BP), Radial Basis Function neural networks (RBF), and Recurrent neural networks (RNN). The efficiency of these networks on linear and nonlinear models is also considered.
Previous studies have primarily focused on the differences between artificial neural networks and found that the general regression neural network is better than backpropagation and radial basis function neural networks [2][3][4]. Other authors found no significant difference between BP, RBF, and Elman networks 2,5,6. In recent years, several studies have focused on hybrid models of artificial neural networks and found no significant differences between these models [7][8][9][10][11].
All previous studies concluded that artificial neural network models are good and competitive, but they did not study the ability of these models to avoid local optimum problems, which is one of the most important problems in nonlinear estimation. The difference between the present work and previous work is that it diagnoses which artificial neural network models successfully and efficiently avoid local optimum problems and identifies models capable of solving both linear and nonlinear time series.
The objective of this paper is to determine whether models of artificial neural networks can avoid local optimum problems in time series forecasting, and to identify which artificial neural networks are optimal for solving both linear and nonlinear time series. In the practical part, two well-known real data sets were used: the first is the energy data for African petroleum production from 1980 to 2012, representing a linear model; the second is the Tunisian Dinar/Euro exchange rate from January 2009 to February 2014, representing a nonlinear model.

Materials and Methods:
Back Propagation (BP)
This network operates in three stages: a forward pass, a backward pass, and a weight update. During the forward propagation phase, the input signal propagates to each hidden-layer node, and the activation value of each hidden-layer node is computed for that signal. The signal from each of these nodes is then sent to the output layer, where the activation value of each output-layer node is computed to form the network's response to the sample [8][9][10]. During training, these activations are compared with the actual target values to determine the error of each output node, and the weights are updated based on the error derivative. Fig.1 illustrates the forward propagation of node activations from input nodes to output nodes and the backpropagation of error signals from output nodes to input nodes 11,12.
Figure 1. BP methodology
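As an illustration of these three stages, the following is a minimal sketch of a one-hidden-layer BP network, written in Python rather than the MATLAB used in this paper; the layer sizes, learning rate, tanh activation, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def train_bp(x, y, hidden=10, lr=0.1, epochs=2000, seed=0):
    """Sketch of BP training: forward pass, backward pass, weight update."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(scale=0.5, size=(x.shape[1], hidden))  # input -> hidden
    b1 = np.zeros(hidden)
    w2 = rng.normal(scale=0.5, size=(hidden, 1))           # hidden -> output
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass: activations propagate input -> hidden -> output
        h = np.tanh(x @ w1 + b1)
        out = h @ w2 + b2
        # backward pass: error derivative propagates output -> hidden
        err = out - y                      # derivative of squared error w.r.t. out
        g2 = h.T @ err / len(x)
        gh = (err @ w2.T) * (1 - h**2)     # tanh derivative applied at hidden layer
        g1 = x.T @ gh / len(x)
        # weight update based on the error derivatives
        w2 -= lr * g2; b2 -= lr * err.mean(0)
        w1 -= lr * g1; b1 -= lr * gh.mean(0)
    return w1, b1, w2, b2

def predict_bp(x, w1, b1, w2, b2):
    """Forward pass only, using the trained weights."""
    return np.tanh(x @ w1 + b1) @ w2 + b2
```

The 10-node hidden layer mirrors the structure used later for the first time series, but the training hyperparameters here are assumptions.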

General Regression Neural network
A general regression neural network is a type of RBF network that learns in a single pass. A GRNN is made up of RBF neurons in a hidden layer; the hidden layer normally contains a large number of neurons, one for each training example. Because each neuron's center is associated with a training example, its output indicates how close the input vector is to that training example [13][14][15]. The GRNN architecture is shown in Fig.2 5,9.
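The single-pass behavior described above can be sketched as follows (a Python illustration, not the paper's MATLAB implementation): every training example acts as an RBF center, and the prediction is the kernel-weighted average of the training targets. The Gaussian kernel and the `sigma` bandwidth are assumptions.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """GRNN sketch: one RBF neuron per training example; the output is the
    kernel-weighted average of the 1-D target array y_train (no iterative
    training -- the training set itself is the 'learned' model)."""
    # squared distances between each query point and each training center
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2 * sigma ** 2))    # how close the input is to each example
    return (w @ y_train) / w.sum(axis=1)  # weighted average of training targets
```

Because prediction averages stored targets, a GRNN cannot extrapolate beyond the range of its training outputs, which is consistent with the failure on nonlinear series reported later in this paper.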

Recurrent neural networks
An RNN is a type of neural network that uses hidden units to analyze streams of data. In applications such as text processing, speech recognition, and DNA sequence analysis, the output depends on previous computations. Because they deal with sequential data, RNNs are well suited to domains such as health informatics, where enormous amounts of sequential data are available to process [16][17][18]. Fig.3 depicts an RNN model 11,15.
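A minimal sketch of this recurrence (Python, Elman-style; the tanh activation and weight shapes are illustrative assumptions): the hidden state `h` carries information forward, so each output depends on earlier inputs in the stream.

```python
import numpy as np

def rnn_forward(seq, w_in, w_rec, w_out, b_h, b_out):
    """Elman-style RNN forward pass: the hidden state is updated from both
    the current input and the previous hidden state, so later outputs
    depend on earlier elements of the sequence."""
    h = np.zeros(w_rec.shape[0])          # hidden state, initially zero
    outputs = []
    for x_t in seq:                       # process the data stream step by step
        h = np.tanh(w_in @ np.atleast_1d(x_t) + w_rec @ h + b_h)
        outputs.append(w_out @ h + b_out)
    return np.array(outputs)
```

The feedback through `w_rec` is exactly what GRNN lacks, which the conclusions of this paper point to as the reason RNN-type models handle nonlinear series better.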

Time series and Artificial Neural networks
ANN have become more important in statistical processing and analysis because of their capacity for self-learning and training. Most researchers select the inputs of an artificial neural network for a time series by lagging the series by one or more periods. The neural network inputs are chosen by the following formula 4,11:

Z_t = f(Z_{t-1}, Z_{t-2}, ..., Z_{t-p})    (1)
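The lag structure of formula 1 can be built as in the following Python sketch (the function and parameter names are hypothetical): each input row holds the p lagged values and the target is the current value.

```python
import numpy as np

def make_lagged(series, p=1):
    """Build (input, target) pairs by lagging the series p steps:
    row t of X holds (Z_{t-p}, ..., Z_{t-1}) and y holds the target Z_t."""
    z = np.asarray(series, dtype=float)
    X = np.array([z[t - p:t] for t in range(p, len(z))])  # lagged inputs
    y = z[p:]                                             # current values
    return X, y
```

With p = 1 this yields exactly the one-node input layer (Z_{t-1}) used for both time series in the results section.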

Forecasting accuracy methods
To measure the accuracy and efficiency of the results of the methods used in this paper, three criteria were adopted: the root mean square error (RMSE), the mean absolute percentage error (MAPE), and the forecast skill (SS) 8.
where z_t is the actual value, ẑ_t is the forecast value, e_t = z_t − ẑ_t is the error, and n is the sample size. Forecast skill is an extended representation of prediction error that compares the prediction accuracy of a given model to that of a reference model, and it is widely used in mathematical prediction as a metric for evaluating neural network performance 15, where O_i are the observed values and f_i are the forecast values.
If the SS value is 1, the prediction is perfect; if SS is 0 or negative, the forecast has no skill 8,17.
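The three criteria can be sketched in Python as follows; the form of the skill score is an assumption, taking SS = 1 − MSE(model)/MSE(reference), a common definition consistent with SS = 1 for a perfect forecast and SS ≤ 0 for no skill.

```python
import numpy as np

def rmse(z, zhat):
    """Root mean square error over the errors e_t = z_t - zhat_t."""
    e = np.asarray(z, dtype=float) - np.asarray(zhat, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

def mape(z, zhat):
    """Mean absolute percentage error (in percent) relative to the actuals."""
    z, zhat = np.asarray(z, dtype=float), np.asarray(zhat, dtype=float)
    return float(100.0 * np.mean(np.abs((z - zhat) / z)))

def skill_score(obs, fcst, ref):
    """Assumed SS form: 1 - MSE(model) / MSE(reference forecast).
    SS = 1 means a perfect forecast; SS <= 0 means no skill."""
    obs, fcst, ref = (np.asarray(a, dtype=float) for a in (obs, fcst, ref))
    return float(1.0 - np.mean((obs - fcst) ** 2) / np.mean((obs - ref) ** 2))
```

Note that RMSE scales with the magnitude of the observations while MAPE is relative, which is why the two criteria can disagree for the exchange-rate series discussed below.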

Results and Discussion:
To diagnose the best types of neural networks for solving any time series, two well-known real data sets representing the different cases of a time series (linear and nonlinear) were used. The structure of the ANN for the two time series is as follows: Input layer: this layer has one node, represented by the variable (Z_{t-1}) based on formula 1.
Hidden layer: the best number of nodes in this layer is 10 for the first time series, as its number of observations is small, and 25 for the second time series, as its number of observations is adequate. Output layer: this layer has one node, represented by the variable (Z_t). The MATLAB high-level programming language was used for the implementation. The quality of performance of BP, ENN, and GRNN for the two time series under study is shown in Tab.1. The evaluation of the quality of the artificial neural networks in Fig.6 shows the following: in the first time series (the linear series), the best linear fit matches the target for all types of artificial neural networks (BP, ENN, and GRNN), indicating the quality and efficiency of those networks.
In the second time series (the nonlinear series), the best linear fit matches the target for BP and ENN, indicating the quality and efficiency of those networks, whereas the best linear fit of GRNN is very far from the target, indicating the inefficiency of that artificial neural network (GRNN).
The efficiency and accuracy results of the three types of ANNs for the two time series, based on formulas 2, 5, and 6, are shown in Tab.1; the worst forecast is shown for GRNN in the nonlinear time series.

Figure 6. Histogram of SS
Thus, the method with the worst performance in terms of both efficiency and accuracy was GRNN-6.
The comparison results for the two time series in Tab.1 show that the highest MAPE value resulted from the GRNN-6 method, which is consistent with the network assessment results in Tab.1 as well as with the SS value. Moreover, GRNN-3, BP-1, and ENN-2 give the highest RMSE values, which would indicate poor results in this case; however, this does not match the network evaluation results in Tab.1 or the SS value. Because the RMSE is affected by the scale of the second time series, whose observations are limited to between 0.4 and 0.6, the error values are small and close to zero, whereas the MAPE value still reflects the relative error.

Conclusions:
The results reveal that BP and ENN are better than GRNN. They also show the inefficiency or inability (failure) of GRNN in particular, and RBF in general, to solve nonlinear time series, because these networks do not include feedback, although they are efficient at solving linear time series. All of the artificial neural network models were found to be efficient at solving linear time series. The calculations in this work suggest that the optimal types of neural networks for solving linear, semilinear, and nonlinear time series are BP and ENN, in terms of both efficiency and accuracy. The results have proven the superiority of BP and RNN in general, and ENN in particular, in avoiding local optimum problems, which is one of the most important problems of nonlinear estimation. The results presented here may facilitate improvements in modern time series forecasting methods.