Bayesian and Classical Estimation for the One Parameter Double Lindley Model

The main motivation of this paper is to show how different frequentist estimators of the new distribution perform for different sample sizes and different parameter values, and to provide a guideline for choosing the best estimation method for the new model. The unknown parameter of the new distribution is estimated using the maximum likelihood, ordinary least squares, weighted least squares, Cramer-von Mises and Bayesian methods. The resulting estimators are compared using Markov chain Monte Carlo simulations, and we observe that the Bayesian estimators are more efficient than the other estimators.


Introduction
The probability density function (PDF) and the cumulative distribution function (CDF) of the Lindley (Li) distribution (see Lindley (1958)) are given by

f_Li(θ)(x) = (θ² / (1 + θ)) (1 + x) e^(−θx) and F_Li(θ)(x) = 1 − ((1 + θ + θx) / (1 + θ)) e^(−θx), for x > 0 and θ > 0,

respectively. The shape parameter θ is a positive real number, and the density can be either unimodal or monotone (consistently) decreasing. The Li model has a thin tail because its density decreases exponentially for large values of x. The Li distribution is one way to describe the lifetime of a process or device, and it can be used in a wide variety of fields such as engineering, biology and medicine; see Ghitany et al. (2008) for a detailed study of its properties and applications. To this end, we use (1), (2) and (3) to obtain the one-parameter DLi CDF and PDF, F_DLi(θ)(x) and f_DLi(θ)(x), given (for x > 0) in (4) and (5), respectively. The PDF in (5) can be expressed as the infinite linear combination in (7), f_DLi(θ)(x) = Σ_{i,j≥0} ω_{i,j} h_{δ_{i,j},θ}(x), where the ω_{i,j} are constant coefficients and h_{δ_{i,j},θ}(x) denotes the PDF of the Exp-Li model with positive parameters δ_{i,j} and θ. The CDF of X can then be obtained by integrating (7) term by term, with H_{δ_{i,j},θ}(x) the corresponding Exp-Li CDF. Figure 1 (left panel) displays plots of the DLi density for different values of θ; these plots show that the new density can be unimodal with various flexible shapes. The HRF of the DLi distribution (right panel) can be increasing or J-shaped. Many useful unimodal real-life data sets that can be modeled in this way are found in Ibrahim (2019, 2020a,b), Goual et al. (2019) and Ibrahim et al. (2020).
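As a quick numerical sanity check of the baseline model above, the following sketch implements the standard Lindley PDF and CDF and verifies that integrating the density recovers the CDF. The function names are illustrative, not part of any package.

```python
import math

def lindley_pdf(x, theta):
    """Lindley density f(x) = theta^2/(1+theta) * (1+x) * exp(-theta*x), x > 0."""
    return theta**2 / (1 + theta) * (1 + x) * math.exp(-theta * x)

def lindley_cdf(x, theta):
    """Lindley CDF F(x) = 1 - (1+theta+theta*x)/(1+theta) * exp(-theta*x)."""
    return 1 - (1 + theta + theta * x) / (1 + theta) * math.exp(-theta * x)

# Midpoint-rule integration of the density should match the CDF.
theta, x_max = 1.5, 2.0
n = 100000
h = x_max / n
integral = sum(lindley_pdf((k + 0.5) * h, theta) for k in range(n)) * h
print(abs(integral - lindley_cdf(x_max, theta)) < 1e-6)  # True
```

The same pattern applies to the DLi PDF and CDF of (4) and (5) once they are coded.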
The survival function corresponding to (4) is given by S_DLi(θ)(x) = 1 − F_DLi(θ)(x).
The hazard rate function (HRF) of X then becomes h_DLi(θ)(x) = f_DLi(θ)(x) / [1 − F_DLi(θ)(x)].
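For the base Lindley model of the Introduction, the ratio f(x)/[1 − F(x)] simplifies algebraically to θ²(1+x)/(1+θ+θx), which rises from θ²/(1+θ) at the origin toward θ; a minimal sketch (illustrative function name, not the DLi hazard itself):

```python
def lindley_hrf(x, theta):
    """Lindley hazard h(x) = f(x)/(1-F(x)), which simplifies to
    theta^2 * (1+x) / (1+theta+theta*x): increasing, bounded by theta."""
    return theta**2 * (1 + x) / (1 + theta + theta * x)

# The hazard starts at theta^2/(1+theta) and approaches theta for large x.
theta = 2.0
print(round(lindley_hrf(0.0, theta), 4))    # 1.3333  (= 4/3)
print(round(lindley_hrf(100.0, theta), 4))  # 1.9901  (close to theta = 2)
```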

Order statistics and their moments
The PDF of the i-th order statistic, say X_{i:n}, can be expressed as

f_{i:n}(x) = (f(x) / B(i, n − i + 1)) Σ_{j=0}^{n−i} (−1)^j C(n−i, j) F(x)^{i+j−1},

where B(·,·) is the beta function and C(·,·) denotes the binomial coefficient. Substituting (5) and (6) into Equation (11), we obtain the order-statistic density of the DLi model. The q-th moment of X_{i:n}, E(X_{i:n}^q), then follows as in Equation (12). Based upon the moments in Equation (12), we can derive explicit expressions for the L-moments of X as infinite weighted linear combinations of the means of suitable DLi order statistics. They are linear functions of expected order statistics, defined by

λ_r = (1/r) Σ_{k=0}^{r−1} (−1)^k C(r−1, k) E(X_{r−k:r}), r ≥ 1.
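The order-statistic density above can be checked numerically for any parent model. The sketch below uses the base Lindley distribution from the Introduction as a stand-in parent (illustrative names, assumed standard Lindley forms) and verifies that f_{i:n} integrates to one.

```python
import math

def lindley_pdf(x, t):
    """Lindley density theta^2/(1+theta) * (1+x) * exp(-theta*x)."""
    return t * t / (1 + t) * (1 + x) * math.exp(-t * x)

def lindley_cdf(x, t):
    """Lindley CDF 1 - (1+theta+theta*x)/(1+theta) * exp(-theta*x)."""
    return 1 - (1 + t + t * x) / (1 + t) * math.exp(-t * x)

def order_stat_pdf(x, i, n, t):
    """Density of the i-th order statistic of an n-sample:
    f_{i:n}(x) = f(x) F(x)^(i-1) (1-F(x))^(n-i) / B(i, n-i+1)."""
    B = math.gamma(i) * math.gamma(n - i + 1) / math.gamma(n + 1)
    F = lindley_cdf(x, t)
    return lindley_pdf(x, t) * F**(i - 1) * (1 - F)**(n - i) / B

# The order-statistic density must integrate to 1 (midpoint rule on [0, 40]).
t, i, n = 1.0, 3, 5
m, xmax = 200000, 40.0
h = xmax / m
total = sum(order_stat_pdf((k + 0.5) * h, i, n, t) for k in range(m)) * h
print(abs(total - 1.0) < 1e-4)  # True
```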

Moment of Residual and Reversed Residual Life
The q-th moment of the residual life, say m_q(t) = E[(X − t)^q | X > t], t > 0 and q = 1, 2, ..., is given by

m_q(t) = (1 / S(t)) ∫_t^∞ (x − t)^q f(x) dx,

where S(t) = 1 − F(t) is the survival function. The mean residual life (MRL), or the life expectation at age t, of X can be obtained by setting q = 1 in the last equation; it represents the expected additional life length of a unit that is alive at age t. Analogously, the q-th moment of the reversed residual life, say M_q(t) = E[(t − X)^q | X ≤ t], t > 0 and q = 1, 2, ..., becomes

M_q(t) = (1 / F(t)) ∫_0^t (t − x)^q f(x) dx,

and the mean reversed residual life (mean inactivity time) follows by setting q = 1.
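The MRL definition can be evaluated numerically for any survival function. As an illustration with the base Lindley model (for which the MRL happens to have the known closed form (2 + θ + θt) / (θ(1 + θ + θt))), the sketch below compares a midpoint-rule integral of the survival function against that closed form; function names are hypothetical.

```python
import math

def lindley_sf(x, t):
    """Lindley survival S(x) = (1+theta+theta*x) * exp(-theta*x) / (1+theta)."""
    return (1 + t + t * x) * math.exp(-t * x) / (1 + t)

def mrl_numeric(age, t, xmax=60.0, m=200000):
    """Mean residual life m(age) = int_age^inf S(x) dx / S(age), midpoint rule."""
    h = (xmax - age) / m
    integral = sum(lindley_sf(age + (k + 0.5) * h, t) for k in range(m)) * h
    return integral / lindley_sf(age, t)

# Closed form for the Lindley MRL: (2 + t + t*age) / (t * (1 + t + t*age)).
t, age = 1.5, 2.0
closed = (2 + t + t * age) / (t * (1 + t + t * age))
print(abs(mrl_numeric(age, t) - closed) < 1e-4)  # True
```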

BivDLi type via modified FGM Copula
A bivariate DLi (BivDLi) type distribution can be constructed via the modified version of the bivariate Farlie-Gumbel-Morgenstern (FGM) copula of Rodriguez-Lallena and Ubeda-Flores (2004), defined as

C_λ(u, v) = uv + λ φ(u) ψ(v),

where φ and ψ are absolutely continuous functions on [0, 1] with φ(0) = φ(1) = ψ(0) = ψ(1) = 0, and the parameter λ is restricted so that C_λ is a valid copula. Setting the marginals u = F_DLi(θ1)(x) and v = F_DLi(θ2)(y) then yields the BivDLi joint CDF.
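The defining properties of a copula (uniform margins and non-negative rectangle volumes) are easy to verify numerically. The sketch below checks them for the classical FGM copula, which is the special member of the Rodriguez-Lallena and Ubeda-Flores family with φ(u) = u(1 − u) and ψ(v) = v(1 − v); the function name is illustrative.

```python
def fgm_copula(u, v, lam=0.5):
    """Classical FGM copula C(u,v) = u*v*(1 + lam*(1-u)*(1-v)), valid for |lam| <= 1.
    Special case of C(u,v) = u*v + lam*phi(u)*psi(v) with phi(u)=u(1-u), psi(v)=v(1-v)."""
    return u * v * (1 + lam * (1 - u) * (1 - v))

# Boundary conditions: C(u,1) = u, C(1,v) = v, C(u,0) = C(0,v) = 0.
print(abs(fgm_copula(0.3, 1.0) - 0.3) < 1e-12)  # True
print(abs(fgm_copula(0.0, 0.7)) < 1e-12)        # True

# 2-increasing check: every rectangle volume on a grid is non-negative.
g = [k / 20 for k in range(21)]
ok = True
for i in range(20):
    for j in range(20):
        vol = (fgm_copula(g[i+1], g[j+1]) - fgm_copula(g[i+1], g[j])
               - fgm_copula(g[i], g[j+1]) + fgm_copula(g[i], g[j]))
        ok = ok and vol >= -1e-12
print(ok)  # True
```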

Classical estimation

Maximum likelihood method
Let x_1, ..., x_n be a random sample from the DLi distribution with parameter θ. Then the log-likelihood function, say ℓ = ℓ(θ), is given in Equation (13). Equation (13) can be maximized either directly, using R (the optim function), SAS (PROC NLMIXED) or Ox (the MaxBFGS sub-routine), or by solving the nonlinear likelihood equation obtained by differentiating (13) with respect to θ. Note that the ML estimate of θ cannot be obtained analytically, so numerical iteration techniques, such as the Newton-Raphson algorithm, are adopted to solve the score equation for which (13) is maximized.
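Although the DLi likelihood in (13) must be maximized numerically, the mechanics of likelihood-based estimation can be sketched with the base Lindley model, whose likelihood equation happens to admit the explicit root θ̂ = (−(m̄ − 1) + sqrt((m̄ − 1)² + 8m̄)) / (2m̄) with m̄ the sample mean. All names below are illustrative; the sampler exploits the Lindley mixture representation (Exp(θ) with probability θ/(1+θ), Gamma(2, θ) otherwise).

```python
import math, random

def loglik(theta, xs):
    """Lindley log-likelihood: 2n log t - n log(1+t) + sum log(1+x) - t sum x."""
    n = len(xs)
    return (2 * n * math.log(theta) - n * math.log(1 + theta)
            + sum(math.log(1 + x) for x in xs) - theta * sum(xs))

def mle_closed_form(xs):
    """Explicit root of the Lindley likelihood equation (m = sample mean)."""
    m = sum(xs) / len(xs)
    return (-(m - 1) + math.sqrt((m - 1)**2 + 8 * m)) / (2 * m)

def draw(t):
    """Lindley variate: Exp(t) w.p. t/(1+t), else Gamma(2, t) (sum of two Exp(t))."""
    if random.random() < t / (1 + t):
        return random.expovariate(t)
    return random.expovariate(t) + random.expovariate(t)

random.seed(1)
theta_true = 2.0
xs = [draw(theta_true) for _ in range(5000)]
hat = mle_closed_form(xs)
print(abs(hat - theta_true) < 0.1)                 # estimate near the truth
print(loglik(hat, xs) >= loglik(hat + 0.05, xs))   # and at a likelihood maximum
```

For the DLi log-likelihood itself, the same structure applies with a numerical optimizer in place of the closed-form root.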

Method of ordinary least square and weighted least square estimation
The OLS and WLS methods were first proposed for estimating the parameters of the Beta distribution (Swain et al. (1988)). They are based on minimizing the sum of squared differences between the theoretical cumulative distribution function and the empirical distribution function. Let F_DLi(θ)(x_{i:n}) denote the CDF of the DLi model and let x_{1:n} < x_{2:n} < ... < x_{n:n} be the ordered random sample. The OLS estimator (OLSE) of θ is obtained by minimizing

V(θ) = Σ_{i=1}^n [F_DLi(θ)(x_{i:n}) − i/(n+1)]².

Using (5) and (14), we obtain the corresponding nonlinear equation in θ. The WLS estimator (WLSE) is obtained analogously by minimizing the weighted sum W(θ) = Σ_{i=1}^n w_i [F_DLi(θ)(x_{i:n}) − i/(n+1)]², with weights w_i = (n+1)²(n+2) / [i(n − i + 1)].
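The OLS criterion can be minimized with a simple one-dimensional search. The sketch below does this for the base Lindley CDF as a stand-in for the DLi CDF (illustrative names; golden-section search assumes the objective is unimodal in θ on the bracketing interval).

```python
import math, random

def lindley_cdf(x, t):
    """Lindley CDF 1 - (1+t+t*x)/(1+t) * exp(-t*x)."""
    return 1 - (1 + t + t * x) / (1 + t) * math.exp(-t * x)

def ols_objective(theta, xs_sorted):
    """Sum of squared gaps between the model CDF at the order statistics
    and the plotting positions i/(n+1)."""
    n = len(xs_sorted)
    return sum((lindley_cdf(x, theta) - i / (n + 1))**2
               for i, x in enumerate(xs_sorted, start=1))

def ols_estimate(xs, lo=0.05, hi=10.0, iters=60):
    """Golden-section search for the minimizing theta on [lo, hi]."""
    xs_sorted = sorted(xs)
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if ols_objective(c, xs_sorted) < ols_objective(d, xs_sorted):
            b = d
        else:
            a = c
    return (a + b) / 2

random.seed(7)
t_true = 1.5
xs = [(random.expovariate(t_true) if random.random() < t_true / (1 + t_true)
       else random.expovariate(t_true) + random.expovariate(t_true))
      for _ in range(2000)]
print(abs(ols_estimate(xs) - t_true) < 0.2)
```

The WLS variant only changes the objective by inserting the weights w_i inside the sum.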

Method of Cramer-Von-Mises estimation
The CVME of the parameter is based on the theory of minimum distance estimation (MDE). It was first explored by MacDonald (1971), who argued that the bias of this estimator is smaller than that of the other minimum distance estimators. The CVME of θ is obtained by minimizing, with respect to θ,

C(θ) = 1/(12n) + Σ_{i=1}^n [F_DLi(θ)(x_{i:n}) − (2i − 1)/(2n)]².

Equivalently, the CVME is obtained by solving the non-linear equation dC(θ)/dθ = 0.
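The Cramer-von Mises criterion differs from the OLS criterion only in its plotting positions (2i − 1)/(2n) and the additive constant 1/(12n). A self-contained sketch, again using the base Lindley CDF in place of the DLi CDF (illustrative names, unimodality of the objective assumed):

```python
import math, random

def lindley_cdf(x, t):
    """Lindley CDF 1 - (1+t+t*x)/(1+t) * exp(-t*x)."""
    return 1 - (1 + t + t * x) / (1 + t) * math.exp(-t * x)

def cvm_objective(theta, xs_sorted):
    """Cramer-von Mises distance: 1/(12n) + sum_i (F(x_(i)) - (2i-1)/(2n))^2."""
    n = len(xs_sorted)
    return 1 / (12 * n) + sum((lindley_cdf(x, theta) - (2 * i - 1) / (2 * n))**2
                              for i, x in enumerate(xs_sorted, start=1))

def cvm_estimate(xs, lo=0.05, hi=10.0, iters=60):
    """Golden-section search for the minimizing theta on [lo, hi]."""
    xs_sorted = sorted(xs)
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if cvm_objective(c, xs_sorted) < cvm_objective(d, xs_sorted):
            b = d
        else:
            a = c
    return (a + b) / 2

random.seed(5)
t_true = 2.0
xs = [(random.expovariate(t_true) if random.random() < t_true / (1 + t_true)
       else random.expovariate(t_true) + random.expovariate(t_true))
      for _ in range(2000)]
print(abs(cvm_estimate(xs) - t_true) < 0.2)
```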

Bayesian estimation
In this section, we use Bayesian procedures to construct estimators of the unknown parameter of the DLi distribution. There are many situations in which the maximum likelihood estimator does not converge, especially in higher-dimensional models; in such cases, Bayesian methods are sought. At first sight, Bayesian methods seem very complex, as the estimators involve intractable integrals. However, modern MCMC techniques make it possible to apply Bayesian methods even in higher-dimensional models. Under Bayesian estimation, we combine the likelihood with prior knowledge to explore the posterior distribution of the parameter. Here we assume a gamma prior (GaP) for the parameter of the form π(θ) ∼ Ga(a_1, a_2), where Ga(a_1, a_2) stands for the gamma distribution with shape parameter a_1 and scale parameter a_2. The posterior in Equation (15) cannot be evaluated in closed form, so numerical approximation techniques are needed. Therefore, we propose the use of MCMC techniques, namely the Gibbs sampler and the Metropolis-Hastings (MH) algorithm (see Hastings (1970)). Since the conditional posterior of the parameter cannot be obtained in any standard form, we used a hybrid MCMC strategy for drawing samples from the posterior. The simulation algorithm we followed is: 1) provide an initial value, say θ^(0); then, at the j-th stage, 2) generate θ^(j) using the MH algorithm; 3) repeat step 2 until the desired number of posterior draws is obtained.
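The steps above can be sketched as a random-walk Metropolis-Hastings sampler. The code below is a minimal illustration, not the paper's implementation: it targets the posterior of the base Lindley model under a Gamma(a_1, a_2) prior written in the rate parameterization (density proportional to θ^(a_1−1) e^(−a_2 θ)); all names and tuning constants are assumptions.

```python
import math, random

def log_post(theta, n, sx, a1=2.0, a2=1.0):
    """Log posterior up to an additive constant for the base Lindley model.
    The sum(log(1+x_i)) likelihood term is free of theta and dropped;
    the prior is Gamma with shape a1 and rate a2."""
    if theta <= 0:
        return float("-inf")
    return (2 * n * math.log(theta) - n * math.log(1 + theta) - theta * sx
            + (a1 - 1) * math.log(theta) - a2 * theta)

def mh_sample(xs, n_iter=6000, burn=1000, step=0.1, seed=3):
    """Random-walk Metropolis-Hastings on log(theta); returns draws after burn-in."""
    rng = random.Random(seed)
    n, sx = len(xs), sum(xs)
    theta, draws = 1.0, []
    for it in range(n_iter):
        prop = theta * math.exp(step * rng.gauss(0.0, 1.0))
        # acceptance ratio includes the Jacobian of the log-scale proposal
        log_a = (log_post(prop, n, sx) - log_post(theta, n, sx)
                 + math.log(prop) - math.log(theta))
        if math.log(rng.random()) < log_a:
            theta = prop
        if it >= burn:
            draws.append(theta)
    return draws

random.seed(11)
t_true = 2.0
xs = [(random.expovariate(t_true) if random.random() < t_true / (1 + t_true)
       else random.expovariate(t_true) + random.expovariate(t_true))
      for _ in range(1000)]
draws = mh_sample(xs)
post_mean = sum(draws) / len(draws)
print(abs(post_mean - t_true) < 0.2)
```

The posterior mean of the retained draws serves as the Bayesian point estimate under squared-error loss.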

Conclusions
We studied how different frequentist estimators of the new distribution perform for different sample sizes and different parameter values, in order to provide a guideline for choosing the best estimation method for the new model. The unknown parameter was estimated using the maximum likelihood, ordinary least squares, weighted least squares, Cramer-von Mises and Bayesian methods, and the resulting estimators were compared using Markov chain Monte Carlo simulations. Based on the simulation results, we observe that all the estimators are consistent: the mean squared errors decrease and approach zero as the sample size increases. Moreover, the mean squared errors of the Bayesian estimators are smaller than those of all the other estimators for every sample size considered.