A Comparative Study on the Double Prior for Reliability Kumaraswamy Distribution with Numerical Solution

This work deals with the Kumaraswamy distribution. Kumaraswamy (1976, 1978) worked with well-known probability distribution functions such as the normal, beta and log-normal, but in 1980 Kumaraswamy developed a more general probability density function for doubly bounded random processes, which is known as Kumaraswamy's distribution. The classical maximum likelihood method and Bayes methods are used to estimate the unknown shape parameter (b). Estimators of the reliability function are obtained under the entropy loss function using three types of informative priors: two single priors and one double prior. In addition, the performance of these estimators is compared with the numerical solution, which is found using an expansion method. The results showed that the reliability estimators Rn and R3 are the best.


Introduction:
The Kumaraswamy distribution is a family of continuous probability distributions defined on (0, 1), which has many similarities to the beta distribution, but it has the advantage of an invertible closed-form cumulative distribution function. This distribution was originally proposed by Poondi Kumaraswamy (1980) (1). Its probability density function is:

f(t; a, b) = a b t^(a-1) (1 - t^a)^(b-1),  0 < t < 1 ... (1)
where b > 0 and a > 0 are the shape parameters. The corresponding reliability function R(t) and failure rate function h(t) at mission time t are given as (2, 3):

R(t) = (1 - t^a)^b ... (2)

h(t) = f(t)/R(t) = a b t^(a-1) / (1 - t^a) ... (3)

The Kumaraswamy distribution (KD) is applicable to many natural phenomena whose outcomes have lower and upper bounds, such as the heights of individuals, scores obtained on a test, atmospheric temperatures, and hydrological data such as daily rainfall, daily stream flow, etc. In this work, informative priors (two single priors and one double prior) with the entropy loss function are used to find the reliability function. Also, a numerical method (the polynomial expansion method) (12, 13) is used to estimate the reliability function R(t); in this method R(t) is expanded in terms of a set of power functions, as in (14), to find an approximate solution of R(t), and then a comparison between the exact solution and all estimators using least square errors is given.

Let a random sample of size n, t = (t_1, t_2, ..., t_n), be drawn independently from the Kumaraswamy distribution (KD) defined by equation (1). The likelihood function for the given sample is:

L(a, b | t) = a^n b^n Π_{i=1}^{n} t_i^(a-1) (1 - t_i^a)^(b-1) ... (4)
The MLE of the unknown shape parameter b, denoted by b̂_ML, assuming that the other shape parameter a is known, is obtained by taking the derivative of the natural log-likelihood function with respect to b and setting it equal to zero, which gives:

b̂_ML = -n / T,  where T = Σ_{i=1}^{n} ln(1 - t_i^a) ... (5)

The MLE of R(t), based on the invariance property of the MLE, is:

R̂_ML(t) = (1 - t^a)^(b̂_ML) ... (6)
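As an illustrative sketch (not the paper's code; function and variable names are our own), the density, reliability function, and closed-form MLE above can be written as:

```python
import numpy as np

def kum_pdf(t, a, b):
    """Kumaraswamy density f(t) = a b t^(a-1) (1 - t^a)^(b-1), 0 < t < 1."""
    return a * b * t**(a - 1) * (1 - t**a)**(b - 1)

def kum_reliability(t, a, b):
    """Reliability function R(t) = (1 - t^a)^b."""
    return (1 - t**a)**b

def mle_b(t, a):
    """MLE of b with a known: b_hat = -n / sum(ln(1 - t_i^a))."""
    T = np.sum(np.log(1 - t**a))
    return -len(t) / T

# Simulate a sample via the inverse CDF, t = (1 - (1-u)^(1/b))^(1/a),
# then recover b by maximum likelihood.
rng = np.random.default_rng(0)
a, b = 2.0, 3.0
u = rng.uniform(size=1000)
t = (1 - (1 - u)**(1 / b))**(1 / a)
b_hat = mle_b(t, a)
R_hat = kum_reliability(0.5, a, b_hat)  # invariance property of the MLE
```

With a sample of this size the recovered b_hat is close to the true shape parameter, and R_hat plugs it straight into R(t) by invariance.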

Bayes Estimator (BE):(9, 10, 11)
From Bayes' rule, the posterior probability density function of the unknown parameter b results from combining the likelihood function L(a, b | t) with the density function of the prior distribution g(b), as:

π(b | t) = L(a, b | t) g(b) / ∫ L(a, b | t) g(b) db ... (7)

The most widely used prior distribution for the parameter b is the gamma distribution with hyperparameters α and β, with probability density function given by (10):

g_1(b) = (β^α / Γ(α)) b^(α-1) e^(-βb),  b > 0 ... (8)
The posterior distribution of the unknown parameter b of the KD is obtained by substituting equation (8) into equation (7). By using the transformation y = b(β - T), so that b = y/(β - T) and db = dy/(β - T), the normalizing integral is evaluated and we obtain the final formula:

π_1(b | t) = (β - T)^(n+α) b^(n+α-1) e^(-b(β - T)) / Γ(n + α) ... (9)

The second prior distribution is the exponential distribution with hyper-parameter c, having probability density function given by (10):

g_2(b) = c e^(-cb),  b > 0 ... (10)
The posterior distribution of the unknown parameter b of the KD is obtained by combining equation (7) with equation (10). By using the transformation y = b(c - T), so that b = y/(c - T) and db = dy/(c - T), we obtain the final formula:

π_2(b | t) = (c - T)^(n+1) b^n e^(-b(c - T)) / Γ(n + 1) ... (11)

The third prior distribution is the double (gamma-exponential) prior of the form:

g_3(b) ∝ b^(α-1) e^(-(β + c)b),  b > 0 ... (12)

By the same procedure, the posterior distribution of the unknown parameter b of the KD is obtained by combining equation (7) with equation (12):

π_3(b | t) = (β + c - T)^(n+α) b^(n+α-1) e^(-b(β + c - T)) / Γ(n + α) ... (13)
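Since all three posteriors are gamma distributions in b, they differ only in their shape and rate. A minimal sketch, assuming the gamma prior has shape alpha and rate beta and the exponential prior has rate c (parameterizations inferred from the transformations above; the function name is our own):

```python
import numpy as np

def posterior_params(t, a, alpha, beta, c):
    """Return (shape, rate) of the gamma posterior of b under each prior.
    T = sum(ln(1 - t_i^a)) is negative, so every rate below is positive."""
    n = len(t)
    T = np.sum(np.log(1 - t**a))
    return {
        "gamma prior":       (n + alpha, beta - T),
        "exponential prior": (n + 1,     c - T),
        "double prior":      (n + alpha, beta + c - T),
    }
```

Note that the double prior simply adds the exponential rate c to the gamma posterior's rate, which is how the two single priors combine.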

Bayes Estimators under the Entropy Loss Function:
It is well known that the Bayes estimators depend on the form of the prior distribution and the loss function assumed (15, 16, 17). We will consider the entropy loss function (ELF) to obtain our Bayes estimators:

L(b̂, b) ∝ (b̂/b) - ln(b̂/b) - 1 ... (14)
The value of b̂ that minimizes the corresponding risk function is obtained by setting its partial derivative with respect to b̂ equal to zero, which gives b̂_ELF = [E(b^(-1) | t)]^(-1). Therefore, the Bayes estimators of b based on the ELF under the three posteriors (9), (11) and (13) are:

b̂_1 = (n + α - 1)/(β - T),  b̂_2 = n/(c - T),  b̂_3 = (n + α - 1)/(β + c - T)

Similarly, the Bayes estimators of the reliability function R(t) = (1 - t^a)^b corresponding to the three posteriors are R̂_j(t) = [E((1 - t^a)^(-b) | t)]^(-1), which, using the moment generating function of the gamma distribution, gives:

R̂_1(t) = (1 + ln(1 - t^a)/(β - T))^(n+α)
R̂_2(t) = (1 + ln(1 - t^a)/(c - T))^(n+1)
R̂_3(t) = (1 + ln(1 - t^a)/(β + c - T))^(n+α)
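The resulting estimators follow from standard gamma moments. A sketch, assuming the relevant posterior is gamma with shape k and rate lam (both names ours, covering all three priors at once):

```python
import numpy as np

def bayes_b_elf(k, lam):
    """ELF Bayes estimator of b: E(1/b) = lam/(k-1) for a gamma(k, lam)
    posterior (k > 1), so b_hat = 1/E(1/b) = (k-1)/lam."""
    return (k - 1) / lam

def bayes_R_elf(t, a, k, lam):
    """ELF Bayes estimator of R(t) = (1-t^a)^b: by the gamma mgf,
    E((1-t^a)^(-b)) = (1 + ln(1-t^a)/lam)^(-k), and the ELF estimator
    is its reciprocal, valid while lam + ln(1-t^a) > 0."""
    return (1 + np.log(1 - t**a) / lam)**k
```

Plugging in the (shape, rate) pairs of posteriors (9), (11) and (13) reproduces b̂_1, b̂_2, b̂_3 and R̂_1, R̂_2, R̂_3 above.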

Estimate Reliability Function Using Expansion Method
In this section, the reliability function is estimated using the expansion method, in which R(t) is expanded in terms of a set of known functions q_i(t) (14):

R(t) ≈ Σ_{i=1}^{m} d_i q_i(t)

where the d_i are expansion coefficients to be determined and the q_i(t) are the expansion functions to be chosen; in this work q_i(t) = t^(i-1) is taken.
Now let {t_1, t_2, ..., t_m} be arbitrary points in the subinterval [t_1, t_n), where m ≤ n. Collocating the expansion at these points leads to the linear system:

Σ_{i=1}^{m} d_i t_j^(i-1) = R(t_j),  j = 1, 2, ..., m

which in matrix form is A d = R, with A_{ji} = t_j^(i-1).

The matrix A contains (m) independent columns.
Finally, the above system is solved for the coefficients d_i using Gauss elimination, and substituting the d_i back into the expansion gives the approximate solution of R(t).
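The collocation system and its solution can be sketched as follows. This is an illustration under our own names and data: np.linalg.solve performs Gaussian elimination (LU factorization with partial pivoting), and the target values are exact reliabilities rather than the paper's estimates:

```python
import numpy as np

def fit_reliability(points, R_values):
    """Solve A d = R for the expansion coefficients d, where
    A[j, i] = points[j]**i is the collocation (Vandermonde) matrix."""
    A = np.vander(points, N=len(points), increasing=True)
    return np.linalg.solve(A, R_values)

def eval_expansion(d, t):
    """Evaluate the fitted polynomial sum(d_i * t^(i-1)) at t."""
    return np.polyval(d[::-1], t)  # polyval wants highest degree first

# Example: collocate the exact R(t) = (1 - t^2)^3 at 5 points.
a, b = 2.0, 3.0
pts = np.linspace(0.1, 0.9, 5)
d = fit_reliability(pts, (1 - pts**a)**b)
```

By construction the fitted expansion reproduces the target values exactly at the collocation points; between them it is only an approximation, which is where the least-square-error comparison of the next section comes in.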

Test Examples
In this section, some test examples are presented for different values of n, a, c, β and α, shown in Tables 1-9, to find the best estimated value of the reliability function using least square error.
MSE values for the Bayes estimators and the numerical solution of the reliability function of the Kumaraswamy distribution with n = 10 (sample size), best value m = 10 (number of arbitrary points) and different values of a are shown in Tables 1, 2 and 3 (Table 3 with m = n = 10 and a = 1.5). The corresponding results with n = 25 and best value m = 25 and different values of a are shown in Tables 4, 5 and 6, and with n = 50 and m = 50, respectively, and different values of a in Tables 7, 8 and 9.
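The comparison criterion used in the tables can be sketched as a plain mean-squared-error routine (an assumption about the exact form of the least square error; the estimators are compared against the exact R(t) over the sample points):

```python
import numpy as np

def mse(R_exact, R_est):
    """Mean squared error between the exact and estimated reliability
    curves, evaluated at a common set of points."""
    R_exact, R_est = np.asarray(R_exact), np.asarray(R_est)
    return np.mean((R_exact - R_est)**2)
```

The estimator with the smallest such value over a table's grid is the one declared best for that configuration of n, a, c, β and α.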

Conclusions:
 From Tables 1 to 9, the best estimation of reliability is R3 at a = 1/2, n = 50, β = c = 4.3 and α = 5.5, as well as approaching Rn.
 The numerical method gives a better estimate than R1, R2 and R3 for all the different values of n, a, c, β and α shown in Tables 1-9.
 As the sample size n increases, the estimation methods approach the maximum likelihood estimate of R(t).