Bayes Estimation Of The Modified Inverse Rayleigh Parameters Under Various Approximation Techniques

In this paper we propose Bayes estimators, based on complete samples, for the parameters of the Modified Inverse Rayleigh (MIR) distribution introduced by Khan (2014). Different approximation methods under the squared error loss function (SELF) are used to develop the Bayes estimators of the unknown parameters. The proposed estimators are compared with the corresponding maximum likelihood estimators through a simulation study on the basis of bias and mean square error (MSE). Two real data sets are considered to illustrate the usefulness and goodness of fit of the MIR distribution.


Introduction
The two-parameter MIR distribution is a generalization of the Inverse Rayleigh distribution. This distribution was discussed by Khan (2014), who also studied some of its mathematical properties along with the estimation of its parameters. Khan (2014) also showed that the Inverse Exponential (IE) and Inverse Rayleigh (IR) distributions are sub-models of the MIR distribution for particular values of its parameters. The MIR distribution is a lifetime model and is very useful for analyzing lifetime data. The failure rate of the MIR distribution has an upside-down bathtub shape.
The probability density function of the MIR distribution is

f(x) = (α/x² + 2β/x³) exp{−(α/x + β/x²)}, x > 0,   (1)

and the cumulative distribution function of the MIR distribution is

F(x) = exp{−(α/x + β/x²)}, x > 0,   (2)

where β > 0 and α > 0 are the scale parameters. The Inverse Rayleigh distribution is used in many situations, including statistics, life testing and reliability. For lifetime data, the failure rate of the Inverse Rayleigh distribution first increases and then decreases.
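Since the methods discussed below are typically checked on simulated data, the density, distribution function, and an inverse-CDF sampler for the MIR distribution can be sketched as follows (a minimal Python sketch assuming the density form in (1); the parameter values are illustrative):

```python
import numpy as np

def mir_pdf(x, alpha, beta):
    # Density f(x) = (alpha/x^2 + 2*beta/x^3) * exp(-(alpha/x + beta/x^2))
    return (alpha / x**2 + 2 * beta / x**3) * np.exp(-(alpha / x + beta / x**2))

def mir_cdf(x, alpha, beta):
    # Distribution function F(x) = exp(-(alpha/x + beta/x^2))
    return np.exp(-(alpha / x + beta / x**2))

def mir_sample(n, alpha, beta, rng):
    # Invert F(x) = u: with t = 1/x this is beta*t^2 + alpha*t + log(u) = 0;
    # log(u) < 0, so the "+" root of the quadratic is the positive one.
    u = rng.uniform(size=n)
    t = (-alpha + np.sqrt(alpha**2 - 4 * beta * np.log(u))) / (2 * beta)
    return 1.0 / t

rng = np.random.default_rng(0)
x = mir_sample(100_000, alpha=1.0, beta=2.0, rng=rng)
# The empirical CDF at a point should closely match mir_cdf.
print(np.mean(x <= 2.0), mir_cdf(2.0, 1.0, 2.0))
```

The sampler is exact because the CDF inverts in closed form through a quadratic in 1/x.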
Recently, Bayesian analysis has received great attention from many researchers, such as Soliman et al. (2010) and Shrestha and Kumar (2014). Bayesian analysis is important in statistics and usually requires prior information. In this approach, the unknown parameters of the distribution are treated as random and the data are treated as fixed. A major difficulty in the Bayesian procedure is obtaining the posterior distribution. The posterior density often involves integrals that are not easily solvable, not only for high-dimensional complex models but also for low-dimensional ones. In such situations, Markov Chain Monte Carlo (MCMC) methods are very useful for simulating deviates from the posterior density.
It is to be noted that in the existing literature, authors have only discussed the statistical properties and classical estimation, i.e. maximum likelihood estimation, of the Modified Inverse Rayleigh (MIR) distribution. The novelty of this paper is that we discuss Bayesian estimation for the MIR distribution. We use an informative prior, namely the gamma prior, and the squared error loss function (SELF) for obtaining the Bayes estimators. It turns out that the Bayes estimators of the parameters do not exist in explicit form. Thus, in order to obtain the Bayes estimates, we use different numerical approximation methods: Lindley's approximation, Tierney and Kadane's approximation, and the Markov Chain Monte Carlo (MCMC) method. We develop an algorithm to generate MCMC samples from the posterior density using the Gibbs sampling technique. Real-life data sets are also considered to demonstrate the usefulness of the distribution.
Furthermore, the article is organized as follows. In Section 2, maximum likelihood estimation of the parameters is discussed. Bayes estimation is described in Section 3, and different approximation techniques for the Bayes estimators are discussed in Sections 3.1, 3.2 and 3.3. In Section 4, we compare the proposed estimators obtained by the different techniques, and in Section 5, real data applications are discussed. Finally, conclusions about the whole study are given in Section 6.

Maximum Likelihood Estimation
Suppose a random sample of size n is taken from the MIR distribution described in (1). The likelihood function for the whole sample is (Khan, 2014)

L(x|β, α) = ∏ᵢ₌₁ⁿ (α/xᵢ² + 2β/xᵢ³) exp{−(α/xᵢ + β/xᵢ²)},   (3)

where xᵢ > 0, β > 0 and α > 0. Differentiating the log of (3) with respect to the parameters and equating the derivatives to zero gives the maximum likelihood estimators. The two normal equations are

∂ log L/∂α = Σᵢ₌₁ⁿ (1/xᵢ²)/(α/xᵢ² + 2β/xᵢ³) − Σᵢ₌₁ⁿ 1/xᵢ = 0,
∂ log L/∂β = Σᵢ₌₁ⁿ (2/xᵢ³)/(α/xᵢ² + 2β/xᵢ³) − Σᵢ₌₁ⁿ 1/xᵢ² = 0.

These normal equations for β and α cannot be solved analytically, so they are solved numerically, e.g. by the Newton-Raphson method in the R software.
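As a sketch of this step, the following Python snippet maximizes the MIR log-likelihood numerically. The paper uses Newton-Raphson in R, so the general-purpose Nelder-Mead optimiser here is a stand-in, and the true parameter values used to simulate the data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    # Negative log-likelihood of the MIR sample, from the density
    # f(x) = (alpha/x^2 + 2*beta/x^3) * exp(-(alpha/x + beta/x^2)).
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf
    return -(np.sum(np.log(alpha / x**2 + 2 * beta / x**3))
             - np.sum(alpha / x + beta / x**2))

def mir_sample(n, alpha, beta, rng):
    # Inverse-CDF sampling from F(x) = exp(-(alpha/x + beta/x^2)).
    u = rng.uniform(size=n)
    t = (-alpha + np.sqrt(alpha**2 - 4 * beta * np.log(u))) / (2 * beta)
    return 1.0 / t

rng = np.random.default_rng(1)
x = mir_sample(500, alpha=1.0, beta=2.0, rng=rng)
res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
alpha_hat, beta_hat = res.x
print(alpha_hat, beta_hat)
```

Minimizing the negative log-likelihood is equivalent to solving the two normal equations, and avoids coding the derivatives by hand.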

Bayes Estimation
The Bayesian approach to parameter estimation is widely used for many lifetime models, and the procedure has been discussed in detail by many authors, such as Sindhu et al. (2013), Gupta and Keating (1986), and Guure and Ibrahim (2014). Most Bayes estimators are developed under the squared error loss function, which is a symmetric loss function. It may be defined as

L(δ̂, δ) = (δ̂ − δ)²,   (4)

where δ̂ is an estimator of the parameter δ.
Under the squared error loss function given in (4), the Bayes estimator is the posterior mean. In the Bayesian approach, if prior information is available then one can select the prior distribution accordingly; when such information is not available, choosing a prior is not easy. Here we assume that the parameters have independent gamma priors, gamma(e, f) for β and gamma(g, h) for α; when the hyperparameters e, f, g and h are set to zero, the gamma prior reduces to a non-informative prior. Thus, the priors for β and α may be taken as

π₁(β) ∝ β^(e−1) e^(−fβ), β > 0, and π₂(α) ∝ α^(g−1) e^(−hα), α > 0,

respectively, where e, f, g and h are the hyperparameters of the prior distributions. The joint prior density for β and α is then

Π(β, α) ∝ β^(e−1) α^(g−1) exp{−(fβ + hα)}.   (5)

Hence, the posterior distribution of β and α, h(β, α|x), is obtained by substituting L(x|β, α) and Π(β, α) from (3) and (5):

h(β, α|x) = K⁻¹ L(x|β, α) Π(β, α),   (6)

where K = ∫∫ L(x|β, α) Π(β, α) dβ dα is the normalizing constant. Under the squared error loss function, the Bayes estimates of the parameters are the posterior means, so the Bayes estimates of β and α are, respectively,

E(β|x) = ∫∫ β h(β, α|x) dβ dα,   (7)
E(α|x) = ∫∫ α h(β, α|x) dβ dα.   (8)

From (7) and (8) we observe that integrals are involved in both the numerator and the denominator of the Bayes estimators, and these expressions are not analytically tractable. Several approximation techniques are available in the literature for such integrals; here we consider Tierney and Kadane's (T-K) approximation, Lindley's approximation, and the Markov Chain Monte Carlo (MCMC) method, each of which reduces the integral problem to a single numerical result.
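Because the posterior-mean integrals are only two-dimensional, they can also be evaluated directly by brute-force quadrature on a grid, which gives a useful benchmark for the approximation methods that follow. The sketch below assumes the gamma priors above; the hyperparameter values and grid limits are illustrative choices, not taken from the paper:

```python
import numpy as np

def log_post(alpha, beta, x, e, f, g, h):
    # Un-normalised log posterior: MIR log-likelihood plus independent
    # Gamma(e, f) log-prior on beta and Gamma(g, h) log-prior on alpha.
    loglik = (np.sum(np.log(alpha / x**2 + 2 * beta / x**3))
              - np.sum(alpha / x + beta / x**2))
    logprior = ((e - 1) * np.log(beta) - f * beta
                + (g - 1) * np.log(alpha) - h * alpha)
    return loglik + logprior

def posterior_means(x, e=2.0, f=1.0, g=2.0, h=1.0, grid=160):
    # Posterior means by trapezoid-style summation over a rectangular grid;
    # the grid limits are assumed to cover essentially all posterior mass.
    a = np.linspace(0.05, 5.0, grid)      # alpha grid
    b = np.linspace(0.05, 8.0, grid)      # beta grid
    lp = np.array([[log_post(ai, bj, x, e, f, g, h) for bj in b] for ai in a])
    w = np.exp(lp - lp.max())             # stabilise before exponentiating
    w /= w.sum()
    A, B = np.meshgrid(a, b, indexing="ij")
    return float((A * w).sum()), float((B * w).sum())

rng = np.random.default_rng(2)
u = rng.uniform(size=300)
alpha0, beta0 = 1.0, 2.0
t = (-alpha0 + np.sqrt(alpha0**2 - 4 * beta0 * np.log(u))) / (2 * beta0)
x = 1.0 / t                               # MIR(alpha0, beta0) sample
print(posterior_means(x))
```

Grid quadrature scales poorly beyond two parameters, which is exactly why the approximation techniques of the next subsections are of interest.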

Bayes Estimation through Lindley's Technique
Lindley's approximation is used to develop the Bayes estimators of the unknown parameters. Following Lye et al. (1993), a posterior expectation that is a ratio of integrals can be expressed in the form

E[u(β, α)|x] = ∫∫ u(β, α) e^{L(β,α)+ρ(β,α)} dβ dα / ∫∫ e^{L(β,α)+ρ(β,α)} dβ dα,   (9)

where L(β, α) is the log-likelihood and ρ(β, α) is the log of the joint prior. For sufficiently large n, (9) can be approximated as

E[u|x] ≈ u + (1/2) Σᵢ Σⱼ (u_ij + 2 u_i ρ_j) σ_ij + (1/2) Σᵢ Σⱼ Σₖ Σₗ L_ijk σ_ij σ_kl u_l,   (10)

evaluated at the MLEs β̂ and α̂ of β and α, where subscripts denote partial derivatives (u_i = ∂u/∂θᵢ, L_ijk = ∂³L/∂θᵢ∂θⱼ∂θₖ, and similarly for the other L terms) and σ_ij is the (i, j)th element of the inverse of the matrix [−L_ij].
The required derivatives L_ij and L_ijk follow by direct differentiation of the log of (3), and for the gamma priors, ρ_β = (e − 1)/β − f and ρ_α = (g − 1)/α − h. Substituting these quantities into (10) with u = β and u = α gives the approximate Bayes estimators of β and α, respectively.
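Lindley's approximation can also be sketched generically by replacing the closed-form derivative expressions with finite differences of the log-likelihood at the MLE. The routine below is such a sketch (not the paper's closed-form version); it is checked on a conjugate exponential-gamma toy model, where the exact posterior mean is known, rather than on the MIR model itself. For the MIR model one would pass in the MIR log-likelihood and log-prior together with the MLEs:

```python
import numpy as np

def lindley(theta_hat, loglik, logprior, eps=1e-3):
    # Two-term Lindley approximation under SELF for u = each parameter,
    # with all derivatives taken by central finite differences at the MLE.
    th = np.asarray(theta_hat, float)
    p = len(th)
    E = eps * np.eye(p)

    def d2(f, t, i, j):                   # second partial of f at t
        return (f(t + E[i] + E[j]) - f(t + E[i] - E[j])
                - f(t - E[i] + E[j]) + f(t - E[i] - E[j])) / (4 * eps**2)

    def d3(t, i, j, k):                   # third partial of the log-likelihood
        return (d2(loglik, t + E[k], i, j)
                - d2(loglik, t - E[k], i, j)) / (2 * eps)

    H = np.array([[d2(loglik, th, i, j) for j in range(p)] for i in range(p)])
    sigma = np.linalg.inv(-H)             # inverse observed information
    rho = np.array([(logprior(th + E[i]) - logprior(th - E[i])) / (2 * eps)
                    for i in range(p)])
    L3 = np.array([[[d3(th, i, j, k) for k in range(p)]
                    for j in range(p)] for i in range(p)])
    est = th.copy()
    for l in range(p):
        # theta_l + sum_i rho_i sigma_il + (1/2) sum_ijk L_ijk sigma_ij sigma_kl
        est[l] += rho @ sigma[:, l] + 0.5 * np.einsum(
            "ijk,ij,k->", L3, sigma, sigma[:, l])
    return est

# Sanity check on Exp(lam) data with a Gamma(a, b) prior on lam, where the
# exact posterior mean is (n + a)/(S + b).
rng = np.random.default_rng(3)
data = rng.exponential(scale=0.5, size=50)   # true lam = 2
n, S, a, b = len(data), data.sum(), 2.0, 1.0
loglik = lambda t: n * np.log(t[0]) - t[0] * S
logprior = lambda t: (a - 1) * np.log(t[0]) - b * t[0]
approx = lindley([n / S], loglik, logprior)[0]
exact = (n + a) / (S + b)
print(approx, exact)
```

On the toy model the approximation agrees with the exact posterior mean up to terms of order n⁻², which is the accuracy Lindley's expansion promises.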

Bayes Estimation through Tierney and Kadane's (T-K) Approximation
The approximation technique above is accurate enough for ratios of integrals, but it requires third partial derivatives of the log-likelihood, which can become a problem.
In the case of m parameters there are m(m + 1)(m + 2)/6 such derivatives, so the approximation becomes quite complicated; in that case the T-K approximation, an alternative to Lindley's approximation, can be used. According to Tierney and Kadane (1986), any posterior expectation of the form

E[u(β, α)|x] = ∫∫ e^{n L*(β,α)} dβ dα / ∫∫ e^{n L⁰(β,α)} dβ dα,

where n L⁰(β, α) is the log of the un-normalized posterior (6) and n L*(β, α) = n L⁰(β, α) + log u(β, α), can be approximated as

E[u(β, α)|x] ≈ (|Σ*| / |Σ⁰|)^{1/2} exp{n[L*(β*, α*) − L⁰(β⁰, α⁰)]},

where (β*, α*) and (β⁰, α⁰) maximize L*(β, α) and L⁰(β, α) respectively, and Σ* and Σ⁰ are the inverses of the negatives of the matrices of second derivatives of L*(β, α) and L⁰(β, α) at their maxima. Taking the log of (6) gives the function L⁰(β, α), and the Bayes estimators of β and α under the squared error loss function (SELF) are then obtained from the above approximation with u(β, α) = β and u(β, α) = α, respectively.
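The T-K approximation can be sketched numerically: maximize the two exponents with a general-purpose optimiser and form the Hessians by finite differences. As with the Lindley sketch, the routine below is verified on a conjugate exponential-gamma toy model with a known posterior mean; applying it to the MIR model only requires passing in the MIR log posterior:

```python
import numpy as np
from scipy.optimize import minimize

def num_hess(f, t, eps=1e-4):
    # Hessian of f at t by central finite differences.
    p = len(t)
    E = eps * np.eye(p)
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            H[i, j] = (f(t + E[i] + E[j]) - f(t + E[i] - E[j])
                       - f(t - E[i] + E[j]) + f(t - E[i] - E[j])) / (4 * eps**2)
    return H

def tierney_kadane(log_post, n, j, x0):
    # Posterior mean of the positive parameter theta_j via the T-K ratio
    # of two Laplace approximations.
    L0 = lambda t: log_post(t) / n                 # (1/n) log posterior kernel
    Ls = lambda t: L0(t) + np.log(t[j]) / n        # adds (1/n) log u, u = theta_j
    r0 = minimize(lambda t: -L0(t), x0, method="Nelder-Mead")
    rs = minimize(lambda t: -Ls(t), x0, method="Nelder-Mead")
    S0 = np.linalg.inv(-num_hess(L0, r0.x))
    Ss = np.linalg.inv(-num_hess(Ls, rs.x))
    return (np.sqrt(np.linalg.det(Ss) / np.linalg.det(S0))
            * np.exp(n * (Ls(rs.x) - L0(r0.x))))

# Sanity check: Exp(lam) data with Gamma(a, b) prior, posterior mean (n+a)/(S+b).
rng = np.random.default_rng(4)
data = rng.exponential(scale=0.5, size=50)         # true lam = 2
n, S, a, b = len(data), data.sum(), 2.0, 1.0
log_post = lambda t: (-np.inf if t[0] <= 0
                      else (n + a - 1) * np.log(t[0]) - (S + b) * t[0])
approx = tierney_kadane(log_post, n, j=0, x0=np.array([1.0]))
exact = (n + a) / (S + b)
print(approx, exact)
```

Only first and second derivatives (here, numerical ones) are needed, which is the practical advantage of T-K over Lindley's expansion.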

Bayes Estimation through Markov Chain Monte Carlo (MCMC) Method
In this section we obtain the Bayes estimators through the Markov Chain Monte Carlo (MCMC) method. This technique involves two algorithms, the Gibbs sampler and Metropolis-Hastings, which are used to generate samples from the posterior density, from which the Bayes estimators are then computed. The Gibbs sampler is applied when the marginal densities of the parameters do not exist in explicit form but the full conditional densities, given all the other parameters, have convenient forms. When generating samples from the full conditional densities is itself not easily manageable, we use the Metropolis-Hastings (M-H) algorithm: a Metropolis step generates samples from each full conditional density. More details about MCMC techniques are given by Gelfand and Smith (1990) and Upadhyay and Gupta (2010). Using this approach, we generate samples from the posterior density (6), assuming that the parameters β and α have independent gamma priors with hyperparameters e, f, g and h. The full conditional posterior densities of β and α can be written as

h(β|α, x) ∝ β^(e−1) exp{−β(f + Σᵢ₌₁ⁿ 1/xᵢ²)} ∏ᵢ₌₁ⁿ (α/xᵢ² + 2β/xᵢ³),
h(α|β, x) ∝ α^(g−1) exp{−α(h + Σᵢ₌₁ⁿ 1/xᵢ)} ∏ᵢ₌₁ⁿ (α/xᵢ² + 2β/xᵢ³).

The Gibbs algorithm consists of the following steps: • Start with an initial value θ₀ = (β₀, α₀). • Use the Metropolis-Hastings algorithm to generate samples from the posterior density for β and α. • Repeat the above two steps M times to obtain posterior samples. • After obtaining the posterior samples, the Bayes estimators of β and α under the squared error loss function are

β̃ = (1/(M − M₀)) Σₘ₌M₀₊₁^M βₘ and α̃ = (1/(M − M₀)) Σₘ₌M₀₊₁^M αₘ,

where M₀ is the Markov chain burn-in period.
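The steps above can be sketched as a Metropolis-within-Gibbs sampler: each parameter is updated in turn by a random-walk Metropolis step on the log scale (the log(prop/cur) term is the Jacobian correction for that reparameterization). The density form, hyperparameter values and tuning constants below are illustrative assumptions:

```python
import numpy as np

def log_post(alpha, beta, x, e=2.0, f=1.0, g=2.0, h=1.0):
    # Un-normalised log posterior: MIR log-likelihood + gamma log-priors.
    if alpha <= 0 or beta <= 0:
        return -np.inf
    return (np.sum(np.log(alpha / x**2 + 2 * beta / x**3))
            - np.sum(alpha / x + beta / x**2)
            + (g - 1) * np.log(alpha) - h * alpha
            + (e - 1) * np.log(beta) - f * beta)

def gibbs_sampler(x, M=6000, burn=1000, step=0.25, seed=0):
    rng = np.random.default_rng(seed)
    alpha, beta = 1.0, 1.0                      # initial value (step 1)
    draws = np.empty((M, 2))
    for m in range(M):
        # Metropolis step for alpha given beta (random walk on log alpha)
        prop = alpha * np.exp(step * rng.standard_normal())
        la = (log_post(prop, beta, x) - log_post(alpha, beta, x)
              + np.log(prop) - np.log(alpha))
        if np.log(rng.uniform()) < la:
            alpha = prop
        # Metropolis step for beta given alpha
        prop = beta * np.exp(step * rng.standard_normal())
        la = (log_post(alpha, prop, x) - log_post(alpha, beta, x)
              + np.log(prop) - np.log(beta))
        if np.log(rng.uniform()) < la:
            beta = prop
        draws[m] = alpha, beta
    return draws[burn:]                         # drop the burn-in period M_0

rng = np.random.default_rng(5)
u = rng.uniform(size=300)
a0, b0 = 1.0, 2.0
t = (-a0 + np.sqrt(a0**2 - 4 * b0 * np.log(u))) / (2 * b0)
x = 1.0 / t                                     # MIR(1, 2) sample
post = gibbs_sampler(x)
print(post.mean(axis=0))                        # Bayes estimates under SELF
```

Averaging the retained draws gives the posterior means, i.e. the Bayes estimates under SELF.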

Comparison of the proposed estimators
In this section of the paper we carry out a simulation analysis to compare the performance of the different estimation methods described in the sections above. We compare the estimates on the basis of bias and mean square error, computed over N replications as

Bias(θ̂) = (1/N) Σᵢ₌₁^N (θ̂ᵢ − θ) and MSE(θ̂) = (1/N) Σᵢ₌₁^N (θ̂ᵢ − θ)².
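A minimal sketch of such a Monte Carlo comparison, here for the maximum likelihood estimator only (the replication count, sample size and true parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x):
    # Negative MIR log-likelihood, as in the estimation section.
    alpha, beta = params
    if alpha <= 0 or beta <= 0:
        return np.inf
    return -(np.sum(np.log(alpha / x**2 + 2 * beta / x**3))
             - np.sum(alpha / x + beta / x**2))

def mir_sample(n, alpha, beta, rng):
    # Inverse-CDF sampling from F(x) = exp(-(alpha/x + beta/x^2)).
    u = rng.uniform(size=n)
    t = (-alpha + np.sqrt(alpha**2 - 4 * beta * np.log(u))) / (2 * beta)
    return 1.0 / t

rng = np.random.default_rng(6)
alpha0, beta0, n, N = 1.0, 2.0, 100, 200
est = np.empty((N, 2))
for r in range(N):
    x = mir_sample(n, alpha0, beta0, rng)
    est[r] = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x,),
                      method="Nelder-Mead").x
truth = np.array([alpha0, beta0])
bias = est.mean(axis=0) - truth                 # (1/N) sum (theta_hat - theta)
mse = ((est - truth)**2).mean(axis=0)           # (1/N) sum (theta_hat - theta)^2
print(bias, mse)
```

The same loop, with the MLE step replaced by each Bayes procedure, yields the bias and MSE columns of the comparison tables.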

Conclusion
In this paper, both classical and Bayes estimators of the parameters of the MIR distribution are discussed. The Bayes estimators are obtained using different approximation techniques and compared with the ML estimates on the basis of bias and mean square error. In all cases, the mean square error of every estimator decreases as the sample size increases. Bayes estimators are developed using informative as well as non-informative priors. It can be observed from table 1 that, with the informative prior, the MSEs of the Bayes estimates obtained by Lindley's approximation are smaller than the MSEs of the maximum likelihood estimates for both parameters. The MSEs of the Bayes estimates using the T-K and MCMC approximations are smaller than the MSEs of the maximum likelihood estimates for the parameter α at all sample sizes, and the MSE of the MCMC Bayes estimate of β is smaller than that of the ML estimate when the sample size is large. From table 3 it is observed that, with the non-informative prior and small sample sizes, only the Bayes estimators of α using the T-K approximation and the MCMC method have smaller MSE than the maximum likelihood estimator; for large sample sizes, all three Bayes methods give smaller MSEs than maximum likelihood. From these two tables we also observe that the MSEs of all techniques decrease with increasing sample size. However, when the parameter values are changed, Lindley's technique has the smallest bias and MSE compared with the other Bayes estimates and with the ML estimates; these results are shown in tables 2 and 4. For both real data sets it is verified that the MIR distribution provides a better fit than the IR and IE distributions, since it has the smallest AIC, BIC and K-S statistics, and the large p-values show that the MIR distribution fits both data sets well; the results are given in tables 5 and 7.
Furthermore, we also calculated the maximum likelihood estimates as well as the Bayes estimates using the different approximation techniques, with the Bayes estimates obtained under the assumption of non-informative priors; the results are presented in tables 6 and 8. The trace plots in figures 1 and 2 show that the MCMC samples are well mixed.