Bayesian Life Analysis of the Generalized Chen’s Population Under Progressive Censoring

Chen’s model with a bathtub-shaped hazard rate provides an appropriate framework for modeling the failure behavior of various industrial products and clinical cases. This article deals with the problem of estimating the model parameters, reliability function and hazard function of a three-parameter Chen distribution based on a progressively Type-II censored sample. Based on the normal approximation to the asymptotic distribution of the maximum likelihood estimates and of the log-transformed maximum likelihood estimates, approximate confidence intervals for the unknown parameters, and for any function of them, are constructed. Using independent gamma conjugate priors, the Bayes estimators of the unknown parameters and reliability characteristics are derived under several variants of the squared-error loss function. Because the Bayes estimators take a complex form, a Metropolis-Hastings sampling procedure is used to compute the Bayes estimates and to construct the corresponding credible intervals. To assess the performance of the proposed estimators, numerical results from a Monte Carlo simulation study are reported. To determine the optimum censoring scheme among different competing censoring plans, several optimality criteria are considered. A practical example using a real-life data set, representing the survival times of head and neck cancer patients, is discussed to demonstrate the applicability of the proposed methods to a real phenomenon.

Putting α = 1 in (1), the CD is obtained as a special case. Chaubey and Zhang (2015) discussed the behavior of the density shape of the GCD when λ = 1. The PDF (1) is a decreasing density when (α < 1, β < 1), a unimodal density when (α > 1, β > 1), and either unimodal or decreasing when (α < 1, β > 1) or (α > 1, β < 1). Further, the HRF (4) of the GCD is increasing when (α > 1, β > 1), bathtub-shaped when (α < 1, β < 1), and either increasing or bathtub-shaped when (α < 1, β > 1) or (α > 1, β < 1). Depending on the ranges of the parameters α, β and λ, different shapes of the PDFs and HRFs of the GCD are shown in Figure 1. It shows that the density of the GCD is right-skewed. The HRF plots allow for increasing, decreasing and bathtub-shaped hazard rates. Moreover, the different shapes of the PDFs and HRFs show that this distribution is quite flexible for modeling lifetime data. Estimating the unknown parameters and reliability characteristics of the GCD under uncensored data has been discussed by Dey et al. (2017). The progressive Type-II censoring mechanism is depicted in Figure 2. Thus, for a given PCS $(n, m, R_1, \ldots, R_m)$, one observes the failure times in an ordered manner as $x^{R}_{1:m:n} < x^{R}_{2:m:n} < \cdots < x^{R}_{m:m:n}$. In this case, the likelihood function of PCS-TII is defined as
$$L(\Theta) = C \prod_{i=1}^{m} f(x^{R}_{i:m:n})\,\left[1 - F(x^{R}_{i:m:n})\right]^{R_i}, \quad (5)$$
where $C = n(n - R_1 - 1) \cdots \left(n - \sum_{i=1}^{m-1}(R_i + 1)\right)$ and Θ is the parameter vector. The excellent book by Balakrishnan and Cramer (2014) is recommended for further details.
Several researchers have done extensive work on statistical inference for the unknown parameters and/or the reliability characteristics of the two-parameter Chen lifetime model; for example, see Wu (2008), Sarhan et al. (2012), Rastogi et al. (2012), Ahmed (2014), Sarhan and Apaloo (2015) and Seo et al. (2017). Although quite a bit of work has been done on the progressively censored Chen distribution, to the best of our knowledge no work has addressed the generalized Chen model under progressively censored data.
In this paper, our main purpose is to derive the maximum likelihood estimators (MLEs) and Bayes estimators (BEs), with associated confidence intervals (CIs), of the three unknown parameters, as well as of some reliability lifetime parameters such as the RF and HRF, of the GCD under PCS-TII. Independent conjugate gamma priors on the unknown parameters are considered to develop the BEs relative to the squared-error loss (SEL), weighted squared-error loss (WSEL), squared-log-error loss (SLEL) and modified (quadratic) squared-error loss (MSEL) functions. Using the observed Fisher information matrix, two different scenarios are provided for constructing approximate confidence intervals (ACIs) for any function of the unknown parameters. A Markov chain Monte Carlo (MCMC) approximation method is implemented to generate samples from the joint posterior density in order to approximate the BEs and to obtain the Bayes credible intervals (BCIs) and highest posterior density (HPD) credible intervals. Various optimality criteria are considered to determine the optimal progressive sampling plan. Extensive numerical comparisons are made between the classical and Bayesian estimates. Finally, one real-life data set is analyzed to illustrate the proposed estimators. The rest of the article is organized as follows: maximum likelihood and Bayesian inferential procedures for the unknown parameters and reliability characteristics are discussed in Sections 2 and 3, respectively. The ACIs for the parameters, reliability and hazard rate functions are given in Section 4. Monte Carlo simulation results are presented in Section 5. In Section 6, optimal censoring plans are investigated. Section 7 deals with the analysis of a real data set and investigates an optimal censoring scheme. Finally, some concluding remarks are provided in Section 8.

Maximum likelihood estimation
In this section, the MLEs of the model parameters α, β and λ, as well as of the reliability characteristics R(t) and h(t), are obtained based on progressively Type-II censored data. Suppose that n independent units taken from a population are placed on a life test, with the corresponding lifetimes being identically distributed with PDF and CDF as defined in (1) and (2), respectively. For convenience, we write $x_i$ instead of $x^{R}_{i:m:n}$, i = 1, 2, . . . , m, henceforward. Let $x_i$, i = 1, 2, . . . , m (1 ≤ m < n), be a Type-II progressively censored sample obtained from GCD(α, β, λ) with the pre-fixed censoring scheme $R_i$, i = 1, 2, . . . , m. Substituting (1) and (2) into (5), the likelihood function (5) can be written up to proportionality as in (7), where $\Theta = (\alpha, \beta, \lambda)^{T}$ is the parameter vector and $\psi(x_i; \beta, \lambda) = \lambda(1 - \exp(x_i^{\beta}))$, i = 1, 2, . . . , m, using the binomial series expansion with $Q_\tau = (-1)^{\tau}\binom{R_i}{\tau}$, i = 1, 2, . . . , m. The corresponding log-likelihood function $\ell(\cdot) \propto \log L(\cdot)$ of the unknown parameters α, β and λ can be obtained from (7) as in (8). Differentiating (8) with respect to α, β and λ yields the likelihood equations (9), where $\psi_{\theta}(\cdot)$ denotes the first partial derivative of $\psi(\cdot)$ with respect to the parameter θ. From the expressions in (9), it is clear that we have a system of three nonlinear equations that must be solved simultaneously to obtain the MLEs $\hat{\alpha}$, $\hat{\beta}$ and $\hat{\lambda}$ of α, β and λ, respectively. Consequently, a closed-form solution for $\hat{\alpha}$, $\hat{\beta}$ and $\hat{\lambda}$ does not exist and cannot be computed analytically. Therefore, for any given data set, an iterative procedure such as the Newton-Raphson method can be used to solve these equations and obtain the desired MLE of any function of the unknown model parameters.
Furthermore, once the estimates $\hat{\alpha}$, $\hat{\beta}$ and $\hat{\lambda}$ are obtained, the MLEs $\hat{R}(t)$ and $\hat{h}(t)$ of R(t) and h(t) in (3) and (4), for a given mission time t, follow directly from the invariance property of the MLE by replacing α, β and λ with $\hat{\alpha}$, $\hat{\beta}$ and $\hat{\lambda}$, respectively.
Remark: Several existing results may be obtained as special cases of our results. For example: • Setting α = 1, we extend the results of Wu (2008), Rastogi et al. (2012), Ahmed (2014), Sarhan and Apaloo (2015) and Seo et al. (2017) to the case of progressive Type-II censoring.

Bayesian estimation
In this section, the BEs of α, β, λ, R(t) and h(t) will be developed under PCS-TII data against the SEL, WSEL, SLEL and MSEL functions.

Loss functions
In the Bayesian paradigm of estimation, a suitable loss function has to be chosen to achieve the best estimate; the loss incurred by taking an estimator $\tilde{\delta}$ of the unknown parameter δ can thereby be measured. However, there is no definitive rule for determining which loss function is best to use. In practice, authors usually adopt symmetric loss functions for convenience, although it is not appropriate to adopt a symmetric loss function indiscriminately when the losses are actually asymmetric. A loss function is called symmetric if it assigns the same loss to a positive error as to a negative error of the same magnitude. The most common loss function for estimation problems, and the mathematically easiest to work with, is the SEL function (denoted by $l_S(\delta, \tilde{\delta})$), see Martz and Waller (1982), defined as
$$l_S(\delta, \tilde{\delta}) = (\tilde{\delta} - \delta)^2. \quad (10)$$
Using (10), the BE $\tilde{\delta}_S$ of any function $\delta(\alpha, \beta, \lambda)$ of the unknown parameters is the posterior mean, $\tilde{\delta}_S = E[\delta \mid \mathbf{x}]$. Brown (1968) proposed a loss function for parameter estimation called the SLEL function (denoted by $l_L(\delta, \tilde{\delta})$), defined as
$$l_L(\delta, \tilde{\delta}) = (\log \tilde{\delta} - \log \delta)^2. \quad (11)$$
The SLEL function is a balanced loss function in the sense that $l_L(\delta, \tilde{\delta}) \to \infty$ as $\tilde{\delta} \to 0$ or $\infty$. Further, the SLEL function is convex for $\tilde{\delta}/\delta \le e$ and concave otherwise, and its posterior risk has a unique minimum with respect to $\tilde{\delta}$. Using (11), the BE $\tilde{\delta}_L$ of any function $\delta(\alpha, \beta, \lambda)$ of the unknown parameters is $\tilde{\delta}_L = \exp\left(E[\log \delta \mid \mathbf{x}]\right)$. It should be noted that a symmetric loss function assigns equal losses to overestimation and underestimation and is often used because it does not lead to extensive numerical computation. Rodrigues and Zellner (1994) introduced the WSEL function and used it to estimate the exponential mean time to failure.
The WSEL function (denoted by $l_W(\delta, \tilde{\delta})$) is defined as
$$l_W(\delta, \tilde{\delta}) = \frac{(\tilde{\delta} - \delta)^2}{\delta}. \quad (12)$$
Using (12), the BE $\tilde{\delta}_W$ of any function $\delta(\alpha, \beta, \lambda)$ of the unknown parameters is $\tilde{\delta}_W = \left(E[\delta^{-1} \mid \mathbf{x}]\right)^{-1}$. The exponentially weighted minimum expected loss function (13) reduces, on setting c = 0, to the minimum expected loss function (denoted by $l_M(\delta, \tilde{\delta})$), first considered by Tummala and Sathe (1978) and defined as
$$l_M(\delta, \tilde{\delta}) = \frac{(\tilde{\delta} - \delta)^2}{\delta^2}. \quad (14)$$
The loss function (14) has been referred to as MSEL in many works in the literature. Using (14), the BE $\tilde{\delta}_M$ of any function $\delta(\alpha, \beta, \lambda)$ of the unknown parameters is
$$\tilde{\delta}_M = \frac{E[\delta^{-1} \mid \mathbf{x}]}{E[\delta^{-2} \mid \mathbf{x}]},$$
provided the above expectations exist and are finite.
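Once posterior draws are available, all four Bayes estimators reduce to simple sample averages. The Python sketch below assumes the standard closed forms (posterior mean under SEL, the exponential of the posterior mean of the log under SLEL, and ratios of inverse posterior moments under WSEL and MSEL); the gamma sample standing in for a posterior is purely illustrative.

```python
import numpy as np

def bayes_estimates(draws):
    """Approximate Bayes estimators of a positive quantity delta from
    posterior draws under four loss functions (standard forms assumed):
      SEL : E[delta]
      SLEL: exp(E[log delta])
      WSEL: 1 / E[1/delta]
      MSEL: E[1/delta] / E[1/delta**2]
    """
    d = np.asarray(draws, dtype=float)
    return {
        "SEL":  d.mean(),
        "SLEL": np.exp(np.mean(np.log(d))),
        "WSEL": 1.0 / np.mean(1.0 / d),
        "MSEL": np.mean(1.0 / d) / np.mean(1.0 / d**2),
    }

rng = np.random.default_rng(1)
draws = rng.gamma(shape=5.0, scale=0.1, size=10_000)  # toy "posterior" sample
est = bayes_estimates(draws)
```

Note that for any positive sample the estimates are ordered MSEL ≤ WSEL ≤ SLEL ≤ SEL (by Cauchy-Schwarz and the harmonic-geometric-arithmetic mean inequalities), which gives a quick sanity check on an implementation.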

Bayes procedure
Here, we obtain the BEs of the unknown parameters α, β and λ, as well as of the survival characteristics R(t) and h(t), against the SEL, WSEL, SLEL and MSEL functions. We assume that α, β and λ are stochastically independent and follow conjugate gamma priors $G_\alpha(a_1, b_1)$, $G_\beta(a_2, b_2)$ and $G_\lambda(a_3, b_3)$, respectively. Hence, the joint prior density of α, β and λ, up to proportionality, becomes (15), where all the hyper-parameters $a_i$ and $b_i$, i = 1, 2, 3, are assumed to be known and non-negative. Combining (7) and (15), the joint posterior distribution of α, β and λ can be written as (16), where κ is the normalizing constant. From (16), it can be seen that the ratio of multiple integrals involved cannot be obtained in closed mathematical form, owing to the complex form of the likelihood function (7). Thus, the BEs cannot be obtained in closed form but can be calculated numerically. For this reason, we propose to use the MCMC integration method to generate samples from the joint posterior density (16) and use them to compute the BEs of α, β, λ, R(t) and h(t), as well as to construct the associated BCIs and HPD credible intervals. To implement the MCMC methodology, the full conditional posterior distributions of α, β and λ, obtained from (16) up to proportionality, are given by (17), (18) and (19), respectively. Since these conditional posteriors cannot be reduced analytically to well-known distributions, the Metropolis-Hastings (M-H) sampler is required for the implementation of the MCMC methodology. The M-H technique is the most commonly used MCMC technique for generating samples from a posterior distribution.
Using a normal distribution as the proposal distribution for the conditional posteriors, the M-H algorithm can be used to generate random samples from (17), (18) and (19) and, in turn, to obtain the BEs and the corresponding BCIs and HPD credible intervals. To carry out the M-H algorithm, the following steps are performed: Step 1: Start with initial guesses $\alpha^{(0)}$, $\beta^{(0)}$ and $\lambda^{(0)}$.
Step 6: Repeat Steps 2-5 N times to obtain N draws of α, β, λ, R(t) and h(t), say $\varphi^{(j)}$. The first M simulated draws may be biased by the initial values and are therefore discarded at the beginning of the analysis (the burn-in period) to guarantee convergence and remove the effect of the initial-value selection. Hence, the selected samples $\varphi^{(j)}$, j = M + 1, . . . , N, for sufficiently large N, can be used to develop the Bayesian inferences. Now, using the generated MCMC samples $\varphi^{(j)}$, j = M + 1, . . . , N, the approximate BEs, with corresponding mean squared errors (MSEs), of a parametric function ϕ(α, β, λ), or of any function of the parameters such as R(t) and h(t), under the SEL (10), SLEL (11), WSEL (12) and MSEL (14) functions are obtained by averaging the appropriate transformations of the retained draws, where M is the burn-in size. Alternatively, one can use the posterior risk instead of the MSE.
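Steps 1-6 can be sketched as a component-wise random-walk M-H sampler with normal proposals. Since the full conditionals (17)-(19) require the complete likelihood, the target below is a toy product of Gamma(2, 1) densities standing in for them; the proposal standard deviations, iteration counts and seed are illustrative only.

```python
import numpy as np

def mh_sampler(logpost, theta0, prop_sd, n_iter, burn, rng):
    """Component-wise random-walk Metropolis-Hastings with normal
    proposals; the first `burn` draws are discarded as burn-in."""
    theta = np.array(theta0, dtype=float)
    lp = logpost(theta)
    chain = np.empty((n_iter, theta.size))
    for j in range(n_iter):
        for k in range(theta.size):          # Steps 2-4: update each parameter
            cand = theta.copy()
            cand[k] += rng.normal(0.0, prop_sd[k])
            lp_cand = logpost(cand)
            if np.log(rng.uniform()) < lp_cand - lp:   # accept/reject
                theta, lp = cand, lp_cand
        chain[j] = theta                     # Step 5: record the draw
    return chain[burn:]                      # Step 6 output after burn-in

# toy target: three independent Gamma(2, 1) "conditional posteriors"
def logpost(t):
    if np.any(t <= 0):
        return -np.inf
    return float(np.sum(np.log(t) - t))      # Gamma(2,1) log-density + const

rng = np.random.default_rng(7)
chain = mh_sampler(logpost, [1.0, 1.0, 1.0], [0.8, 0.8, 0.8],
                   n_iter=6000, burn=1000, rng=rng)
```

The retained draws in `chain` would then feed the loss-function averages of the previous subsection and the credible intervals of Section 4.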

Confidence intervals
In this section, we propose to use the asymptotic normality of the MLEs and of the log-transformed MLEs to construct ACIs for the unknown parameters α, β, λ, R(t) and h(t). The delta method is also used to approximate the variance of a function of the unknown parameters. Further, the MCMC simulated draws of α, β, λ, R(t) and h(t) are used to construct the associated BCIs and HPD credible intervals.

Asymptotic confidence intervals
To construct the 100(1 − γ)% two-sided ACIs for Θ = (α, β, λ), the asymptotic normality of the MLEs, with variances estimated from the inverse of the observed Fisher information matrix, is used. The Fisher information matrix involves the expectations of the second partial derivatives of (8) with respect to α, β and λ, as in (20). Since the exact mathematical expressions for the expectations in (20) are very difficult to obtain analytically, by dropping the expectation operator E and replacing Θ by the MLE $\hat{\Theta}$, see Lawless (2003), the approximate variance-covariance (V-C) matrix $I^{-1}(\hat{\Theta})$ of the MLEs is given by (21). Since the log-likelihood function (8) is nonlinear, the approximate V-C matrix (21) must be obtained by numerical techniques. The elements $L_{ij}$, i, j = 1, 2, 3, of (21) are derived and reported in the Appendix. It is well known that, under some mild regularity conditions, the MLEs $\hat{\alpha}$, $\hat{\beta}$ and $\hat{\lambda}$ are approximately distributed as multivariate normal with mean Θ and V-C matrix $I^{-1}(\Theta)$, i.e., $\hat{\Theta} \sim N(\Theta, I^{-1}(\hat{\Theta}))$, see Lawless (2003).
Now, to construct the two-sided ACIs of the survival characteristics R(t) and h(t), we first need their variances. The delta method, a general approach for approximating the variance of a function of unknown parameters, is used; see Greene (2012). The variance estimates $\hat{V}_{\hat{R}(t)}$ and $\hat{V}_{\hat{h}(t)}$ of R(t) and h(t) are approximated by the delta method as
$$\hat{V}_{\hat{R}(t)} \simeq \left[\nabla \hat{R}(t)\, I^{-1}(\hat{\Theta})\, (\nabla \hat{R}(t))^{T}\right] \quad \text{and} \quad \hat{V}_{\hat{h}(t)} \simeq \left[\nabla \hat{h}(t)\, I^{-1}(\hat{\Theta})\, (\nabla \hat{h}(t))^{T}\right],$$
where $\nabla \hat{R}(t)$ and $\nabla \hat{h}(t)$ are, respectively, the gradients (vectors of first partial derivatives) of R(t) and h(t) with respect to α, β and λ, evaluated at $\hat{\alpha}$, $\hat{\beta}$ and $\hat{\lambda}$, and T denotes the transpose operator. Thus, both $\hat{R}(t)$ and $\hat{h}(t)$ are asymptotically normal. It follows that the 100(1 − γ)% two-sided ACIs of R(t) and h(t), for a given time t, can be constructed from the normal approximation (NA) of the MLEs $\hat{R}(t)$ and $\hat{h}(t)$ as
$$\hat{R}(t) \mp z_{\gamma/2}\sqrt{\hat{V}_{\hat{R}(t)}} \quad \text{and} \quad \hat{h}(t) \mp z_{\gamma/2}\sqrt{\hat{V}_{\hat{h}(t)}},$$
where $z_{\gamma/2}$ is the upper (γ/2)-th percentile of the standard normal distribution.
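The delta-method variance can be computed without symbolic gradients by differencing the function numerically at the MLE. A minimal sketch, checked against a linear toy function whose exact variance is known (the function and V-C matrix below are illustrative, not the paper's):

```python
import numpy as np

def delta_var(g, theta_hat, vcov, eps=1e-6):
    """Delta-method variance of g(theta): grad(g)^T V grad(g),
    with the gradient approximated by central differences at the MLE."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    grad = np.empty_like(theta_hat)
    for k in range(theta_hat.size):
        e = np.zeros_like(theta_hat)
        e[k] = eps
        grad[k] = (g(theta_hat + e) - g(theta_hat - e)) / (2.0 * eps)
    return float(grad @ vcov @ grad)

# toy check: g(theta) = 2*theta1 + theta2 with a diagonal V-C matrix,
# exact variance = 2^2 * 0.04 + 1^2 * 0.01 = 0.17
vcov = np.diag([0.04, 0.01])
v = delta_var(lambda t: 2.0 * t[0] + t[1], [1.0, 2.0], vcov)
```

In practice `g` would be R(t) or h(t) as a function of (α, β, λ), and `vcov` the matrix $I^{-1}(\hat{\Theta})$ of (21).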
Nevertheless, the main drawback of the standard ACI (using the NA) is that it sometimes yields a negative lower bound for a parameter that takes only positive values. In that case, the negative lower bound may be replaced by zero.
To overcome this inadequate performance of the normal approximation, Meeker and Escobar (2014) advocated the use of a log-transformed MLE to construct ACIs for unknown parameters that take positive values, and showed that this procedure has better coverage probability. Recently, the normal approximation of the log-transformed MLE (NL) for constructing ACIs has been considered by several authors in the literature; for example, see Krishnamoorthy and Lin (2010), Ahmed (2014) and Lee and Cho (2017). Hence, using the NL approach, the 100(1 − γ)% two-sided ACI for any positive function $\hat{\varphi}_i$ of α, β, λ, R(t) and h(t) is given by
$$\hat{\varphi}_i \exp\left(\mp z_{\gamma/2} \frac{\sqrt{\hat{V}_i}}{\hat{\varphi}_i}\right), \quad i = 1, 2, 3, 4, 5,$$
where $\hat{V}_i$ is the estimated variance of $\hat{\varphi}_i$ (so that $\hat{V}_i/\hat{\varphi}_i^2$ estimates the variance of $\log \hat{\varphi}_i$).
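The difference between the two interval constructions is easy to see numerically: the NA interval can dip below zero, while the NL interval is always positive and is symmetric on the log scale. A sketch with an illustrative estimate and variance:

```python
import math

def aci_na(phi_hat, var_hat, z=1.959964):
    """Standard normal-approximation (NA) interval; can go negative."""
    half = z * math.sqrt(var_hat)
    return phi_hat - half, phi_hat + half

def aci_nl(phi_hat, var_hat, z=1.959964):
    """Log-transformed (NL) interval for a positive quantity:
    [phi/W, phi*W] with W = exp(z*sqrt(var)/phi); always positive."""
    w = math.exp(z * math.sqrt(var_hat) / phi_hat)
    return phi_hat / w, phi_hat * w

lo_na, hi_na = aci_na(0.3, 0.05)   # illustrative MLE 0.3, variance 0.05
lo_nl, hi_nl = aci_nl(0.3, 0.05)
```

Note that the NL bounds satisfy lo × hi = φ̂², reflecting the symmetry of the interval on the log scale.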

Credible confidence intervals
To construct the two-sided BCIs of the model parameters α, β and λ, as well as of the survival characteristics R(t) and h(t), order the simulated MCMC samples $\varphi^{(j)}$ after burn-in as $\varphi_{(1)} \le \varphi_{(2)} \le \cdots \le \varphi_{(N-M)}$. Hence, the 100(1 − γ)% two-sided BCI of ϕ is given by $\left(\varphi_{([(\gamma/2)(N-M)])}, \varphi_{([(1-\gamma/2)(N-M)])}\right)$. Bayesian credible intervals are not unique for a given posterior density function. For a unimodal distribution, one can choose the shortest interval (containing the highest posterior density values), referred to as the HPD credible interval. Since the GCD is unimodal, see Dey et al. (2017), HPD credible intervals for the unknown parameters and reliability characteristics can be constructed accordingly. To construct HPD credible interval estimates, we propose to use the method of Chen and Shao (1999). According to this method, the 100(1 − γ)% two-sided HPD credible interval for each unknown parameter in ϕ is given by $\left(\varphi_{(j^{*})}, \varphi_{(j^{*} + [(1-\gamma)(N-M)])}\right)$, where $j^{*}$ is chosen such that
$$\varphi_{(j^{*} + [(1-\gamma)(N-M)])} - \varphi_{(j^{*})} = \min_{1 \le j \le (N-M) - [(1-\gamma)(N-M)]} \left(\varphi_{(j + [(1-\gamma)(N-M)])} - \varphi_{(j)}\right).$$
Here [x] denotes the largest integer less than or equal to x.
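Both interval types above can be sketched directly from posterior draws. The code below implements the Chen-Shao search for the shortest interval alongside the percentile BCI; the right-skewed gamma sample is a stand-in for a posterior, chosen so the two intervals visibly differ.

```python
import numpy as np

def hpd_interval(draws, gamma=0.05):
    """Chen-Shao HPD interval: among all intervals of consecutive sorted
    draws containing (1 - gamma) of the sample, pick the shortest."""
    s = np.sort(np.asarray(draws, dtype=float))
    n = s.size
    k = int(np.floor((1.0 - gamma) * n))     # [.] = floor
    widths = s[k:] - s[:n - k]               # widths of all candidate intervals
    j = int(np.argmin(widths))               # j* minimizing the width
    return s[j], s[j + k]

def equal_tail_bci(draws, gamma=0.05):
    """Percentile-based two-sided BCI."""
    d = np.asarray(draws, dtype=float)
    return np.quantile(d, gamma / 2), np.quantile(d, 1 - gamma / 2)

rng = np.random.default_rng(3)
draws = rng.gamma(2.0, 1.0, size=20_000)     # right-skewed toy "posterior"
hpd = hpd_interval(draws)
bci = equal_tail_bci(draws)
```

For a skewed posterior such as this one, the HPD interval is noticeably shorter than the equal-tail BCI, which is exactly why the paper reports both.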

Monte Carlo Simulation
To examine the performance of the estimators developed in the previous sections, Monte Carlo simulations are performed. Using the algorithm described in Balakrishnan and Sandhu (1995), based on different combinations of n, m and R, we simulate 5,000 PCS-TII samples from the three-parameter GCD with true parameter values (α, β, λ) = (0.5, 0.1, 0.1). The corresponding true values of the survival parameters R(t) and h(t) at mission time t = 0.1 are R(0.1) = 0.6620241 and h(0.1) = 0.3479404. Three choices of total sample size n are used: n = 30 (small), 50 (moderate) and 80 (large). The test is terminated when the number of failed subjects reaches a certain value m, where the percentage of failure information (m/n)100% is taken as 30, 60 and 90%. Also, for each n and m, various censoring schemes (CSs) are considered. For each case, the average maximum likelihood and MCMC Bayes estimates, with associated MSEs, of the unknown model parameters α, β and λ, as well as of the survival characteristics R(t) and h(t), are computed. Further, the average confidence lengths (ACLs) of the different classical and Bayesian credible intervals, with their coverage percentages (CPs), are also calculated. In the Bayesian inference procedure, the choice of the hyper-parameter values is the main issue. If the improper gamma prior is used, i.e., $a_i = b_i = 0$, i = 1, 2, 3, then the joint posterior distribution (16) of α, β and λ reduces to being proportional to the likelihood function (7). Therefore, if one has no prior information on the unknown parameters of interest, it is better to use the MLEs rather than the BEs, because the BEs are computationally more expensive. Here, we use two informative priors for α, β and λ: prior (1): $(a_1, a_2, a_3) = (0.5, 0.1, 0.1)$ and $b_i = 1$, i = 1, 2, 3; prior (2): $(a_1, a_2, a_3) = (5, 1, 1)$ and $b_i = 10$, i = 1, 2, 3.
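The PCS-TII samples can be drawn with the Balakrishnan-Sandhu (1995) algorithm: progressively censored uniform order statistics are generated first and then pushed through the GCD quantile function. The quantile below inverts the exponentiated-Chen CDF form assumed for (2); the censoring scheme is illustrative.

```python
import numpy as np

def progressive_uniform(R, rng):
    """Balakrishnan-Sandhu algorithm: progressively Type-II censored
    order statistics from U(0,1) under censoring scheme R."""
    R = np.asarray(R)
    m = R.size
    W = rng.uniform(size=m)
    # V_i = W_i^(1/(i + R_m + ... + R_{m-i+1})), i = 1..m
    gam = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = W ** (1.0 / gam)
    # U_i = 1 - V_m * V_{m-1} * ... * V_{m-i+1}, increasing in i
    return 1.0 - np.cumprod(V[::-1])

def gcd_quantile(u, a, b, lam):
    # inverse of F(x) = [1 - exp(lam*(1 - exp(x**b)))]**a  (assumed form)
    u = np.asarray(u, dtype=float)
    return (np.log1p(-np.log1p(-u**(1.0 / a)) / lam)) ** (1.0 / b)

rng = np.random.default_rng(11)
R = np.array([2, 0, 1, 0, 2])                # n = m + sum(R) = 5 + 5 = 10
U = progressive_uniform(R, rng)
X = gcd_quantile(U, 0.5, 0.1, 0.1)           # PCS-TII sample from GCD
```

A convenient check is to push the generated sample back through the CDF and recover the uniform order statistics.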
The values of the hyper-parameters are chosen so that the prior mean equals the expected value of the corresponding parameter, see Kundu (2008). Using the M-H sampler algorithm described in Section 3, 12,000 MCMC samples are generated and the first 2,000 values are discarded as burn-in. Hence, based on 10,000 MCMC samples, the average Bayes MCMC estimates and the 95% two-sided BCIs and HPD credible intervals are computed. The average estimates, with their MSEs, of the classical and Bayesian estimates of any parametric function of α, β and λ, say ϕ(α, β, λ), are calculated as
$$\bar{\hat{\theta}}_k = \frac{1}{S}\sum_{s=1}^{S}\hat{\theta}_k^{(s)} \quad \text{and} \quad \mathrm{MSE}(\hat{\theta}_k) = \frac{1}{S}\sum_{s=1}^{S}\left(\hat{\theta}_k^{(s)} - \theta_k\right)^2,$$
where S is the number of generated data sets, $\theta_1 = \alpha$, $\theta_2 = \beta$, $\theta_3 = \lambda$, $\theta_4 = R(t)$ and $\theta_5 = h(t)$. The average point estimates with their associated MSEs are reported in Tables 1-5. Moreover, the ACLs with their associated CPs of the 95% ACIs (NA and NL), BCIs and HPD credible intervals are listed in Tables 6-10. All necessary computational algorithms are coded in the R statistical programming language using the 'maxLik' package of Henningsen and Toomet (2011). The computations of the MCMC BEs were performed using the 'coda' package of Plummer et al. (2006). Recently, these packages were also utilized by Elshahhat and Nassar (2021) and Elshahhat and Abu El Azm (2022).
From Tables 1-10, it can be seen that the MLEs and BEs of the unknown parameters and reliability characteristics of the GCD perform very well in terms of MSE. As expected, the MSEs of all estimates decrease as n and m increase. It is also observed that, as the failure proportion m/n increases, the point estimates become even better. The approximate MCMC Bayes estimates based on the gamma informative priors outperform the MLEs in terms of MSE, since they incorporate prior information. In most cases, the Bayes MCMC estimates perform better under the SEL function than under the SLEL, WSEL and MSEL functions on the basis of minimum MSE; thus, the SEL function is the most appropriate loss function here. Because the variance of prior (2) is smaller than that of prior (1), the BEs based on prior (2) perform better than those based on prior (1) in terms of MSE for all estimates. The ACLs of the approximate and credible intervals narrow, and the corresponding CPs increase, as the effective sample size increases, as expected. The BCI estimates of the unknown parameters and reliability characteristics are better than the ACI estimates in terms of ACL and CP, because they incorporate prior information. The ACLs and CPs of the BCIs and HPD credible intervals are very close to each other. Further, the CPs of the ACIs are mostly below the specified nominal level, while those of the BCIs and HPD credible intervals are close to (or above) the nominal level. Moreover, the ACLs of the ACIs using the NL approach are shorter than those using the NA approach for any unknown parametric function. Furthermore, the ACLs of the credible intervals are narrower under prior (2) than under prior (1), since prior (2) has the smaller variance. In most cases, the HPD credible intervals are the best among the proposed approaches.
Comparing CS-I and CS-III, it is clear that the MSEs and posterior risks associated with the MLEs and BEs, respectively, of the unknown parameters and reliability characteristics are greater for CS-III than for CS-I, because the expected duration of the experiment under CS-I is greater than under CS-III. Thus, the data obtained under CS-I would be expected to provide more information about the unknown parameters and reliability characteristics than the data obtained under CS-III. Overall, we recommend the Bayesian point and interval estimation of the unknown parameters and reliability characteristics of the GCD using the M-H algorithm.

Optimal censoring scheme
In practice, it is desirable to choose an 'optimal' censoring scheme from the class of all possible schemes so that it provides the maximum information about the unknown parameters under consideration. In the literature, the problem of comparing different censoring schemes in order to choose the optimal one has received considerable attention from various authors. The variance optimality criterion is popular in the case of single-parameter distributions; for the multi-parameter case, the trace and determinant optimality criteria, which minimize the trace and the determinant of the V-C matrix of the estimators, are used instead. Thus, Criterion-I: minimize $\det(I^{-1}(\hat{\Theta}))$ and Criterion-II: minimize $\mathrm{trace}(I^{-1}(\hat{\Theta}))$ are considered as commonly used information criteria to determine the optimum PCS-TII plan for given n, m and $R_i$, i = 1, 2, . . . , m. Under Criteria I and II, the goal is to minimize the determinant and the trace, respectively, of the V-C matrix $I^{-1}(\hat{\Theta})$ of the MLEs $\hat{\Theta} = (\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_k)^{T}$. Obviously, the optimal censoring scheme is the one with the smallest value of Criterion I or II.
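Given the estimated V-C matrix of each candidate scheme, both criteria are one-line computations. A sketch with two hypothetical diagonal V-C matrices standing in for competing schemes:

```python
import numpy as np

def optimality(vcovs):
    """Criterion-I (determinant) and Criterion-II (trace) of the estimated
    V-C matrices; the scheme minimizing each criterion is optimal."""
    det = {name: float(np.linalg.det(V)) for name, V in vcovs.items()}
    tr  = {name: float(np.trace(V))      for name, V in vcovs.items()}
    return min(det, key=det.get), min(tr, key=tr.get)

# toy V-C matrices for two hypothetical censoring schemes
vcovs = {
    "CS-I":   np.diag([0.02, 0.01, 0.03]),
    "CS-III": np.diag([0.05, 0.02, 0.04]),
}
best = optimality(vcovs)
```

Note that the two criteria need not select the same scheme in general; here the same scheme happens to dominate under both.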

Head-Neck cancer data analysis
In this section, we consider a real-life data set to demonstrate the applicability of the methodologies proposed in the previous sections. This data set, reported by Efron (1988), represents the survival times (in days) of a group of 44 patients suffering from head and neck cancer (HNC) and treated with a combination of radiotherapy and chemotherapy (RT+CT). The ordered survival times are: 12.2, 23.56, 23.74, 25.87, 31.98, 37, 41.35, 47.38, 55.46, 58.36, 63.47, 68.46, 74.47, 78.26, 81.43, 84, 92, 94, 110, 112, 119, 127, 130, 133, 140, 146, 155, 159, 173, 179, 194, 195, 209, 249, 281, 319, 339, 432, 469, 519, 633, 725, 817, 1776. Recently, this data set was also analyzed by several authors; e.g., see Sharma et al. (2015), Sharma (2018) and Vishwakarma et al. (2018). First, we fit the GCD to the complete data set along with ten popular lifetime distributions as competitors, namely: the CD, Gompertz distribution (GD), Hjorth distribution (HD), Weibull distribution (WD), generalized Pareto distribution (GPD), exponentiated Pareto distribution (EPD), generalized exponential distribution (GED), generalized Rayleigh distribution (GRD), generalized half-logistic distribution (GHLD) and Nadarajah-Haghighi distribution (NHD), whose respective PDFs (for x > 0 and α, β, λ > 0) are given in Table 11. Table 12 shows that the GCD has the smallest values of the goodness-of-fit criteria as well as the highest p-value; hence we conclude that it is the best model among the fitted models. This result also indicates that the GCD is a suitable model for the HNC real-life data set. In addition, we draw quantile-quantile (Q-Q) plots for all competing models as a graphical goodness-of-fit check; these are shown in Figure 3. A Q-Q plot depicts the points $\{F^{-1}((i - 0.5)/n; \hat{\Theta}), x_{(i)}\}$, i = 1, 2, . . . , n. Figure 3 indicates that the GCD provides a better fit than the others.
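The Q-Q plot points just described are straightforward to generate for any fitted model. A minimal sketch, checked here against the exponential distribution whose quantile function is known in closed form (the data vector is illustrative, not the HNC data):

```python
import numpy as np

def qq_points(data, quantile_fn):
    """Points {F^{-1}((i - 0.5)/n; theta_hat), x_(i)} for a Q-Q plot:
    theoretical quantiles of the fitted model vs sorted data."""
    emp = np.sort(np.asarray(data, dtype=float))
    n = emp.size
    p = (np.arange(1, n + 1) - 0.5) / n
    return quantile_fn(p), emp

# toy check with the exponential model: F^{-1}(p) = -log(1 - p)
theo, emp = qq_points([0.3, 1.2, 0.1, 2.5, 0.8],
                      lambda p: -np.log1p(-p))
```

For the GCD fit one would pass the GCD quantile function evaluated at the estimated parameters in place of the exponential quantile.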
For further illustration of the fit, Figure 4 provides two plots computed at the estimated model parameters of each distribution: plot (a) shows the fitted and empirical CDFs, and plot (b) shows the histogram of the real HNC data set together with the fitted PDFs. Because no prior information about the model parameters is available, the BEs are developed using non-informative priors. Using the MCMC algorithm described in Subsection 3.2, we generate N = 20,000 MCMC samples and discard the first M = 5,000 iterations. Moreover, some important characteristics, namely the mean, median, mode, standard deviation (SD), standard error (SE) and skewness (Sk.), of the MCMC posterior distributions of the unknown parameters and reliability characteristics after burn-in are computed and provided in Table 13. To evaluate the convergence of the 15,000 MCMC outputs, trace plots of the posterior distributions are given in Figure 5. In each MCMC trace plot, the sample mean is displayed as the middle horizontal line, while the lower and upper bounds of the 95% BCIs and HPD credible intervals are displayed as dotted (· · · ) and dashed (---) horizontal lines, respectively. Figure 5 indicates that the MCMC procedure converges very well and shows that the bounds of the 95% BCIs and HPD credible intervals are very close to each other. Moreover, the marginal posterior density estimates of α, β, λ, R(t) and h(t), with their histograms based on the 15,000 chain values using a Gaussian kernel, are presented in Figure 6. In each histogram plot, the sample mean of the corresponding unknown parameter is displayed as a vertical dash-dotted line. It is evident from these estimates that the generated posteriors of the unknown model parameters α, β and λ are fairly symmetric, while the generated posteriors of the survival characteristics R(t) and h(t) are somewhat negatively and positively skewed, respectively.
Using the complete HNC real data set, some PCS-TII samples with m = 14 using five different sampling schemes are generated and reported in Table 14. For brevity, the censoring scheme R = (2, 0, 0, 0, 2) is denoted by R = (2, 0 * 3, 2).
To run the MCMC sampler algorithm, the initial values of the unknown parameters were taken to be their MLEs. Using the data sets of Table 14, the MLEs and BEs of the unknown parameters α, β and λ, as well as of the survival characteristics R(t) and h(t) at mission time t = 10, are computed and listed in Table 15. Also, the 95% two-sided ACIs (NA and NL), BCIs and HPD credible intervals are calculated and reported in Table 16. Again, using different censoring schemes and different effective sample sizes, m = 14, 24 and 34, from the complete HNC data set, the values of Criteria I and II are calculated and presented in Table 17, in which the best progressive censoring scheme corresponds to the bold values. From Table 17, for both Criteria I and II, the optimum censoring schemes are R = (10*3, 0*11), R = (5*4, 0*20) and R = (5*2, 0*32) for m = 14, 24 and 34, respectively, compared with the other competing censoring schemes.

Concluding remarks
In this paper, we have considered both point and interval estimation of the unknown model parameters, reliability function and hazard function of the generalized Chen distribution via maximum likelihood and Bayesian estimation methods when data are collected under Type-II progressive censoring. Since the classical estimators cannot be obtained in closed form, we suggested using the Newton-Raphson iterative procedure, via the maxLik package, to compute the maximum likelihood estimates and the associated asymptotic intervals. Similarly, the Bayes estimators based on various types of loss functions cannot be expressed explicitly; thus, the Metropolis-Hastings algorithm has been used to approximate the Bayes estimates and to construct the associated credible intervals. To demonstrate the use of the proposed estimators, a simulation study has been conducted. The simulation results show that the Bayesian method performs better than its maximum likelihood competitor. To show the practical utility of the proposed methods in a real medical setting, and to choose the optimal sampling scheme, we have also analyzed the survival times of head and neck cancer patients receiving a combination of radiotherapy and chemotherapy. Several results in the literature are extended and may be obtained as special cases of ours. This work is mainly concerned with the analysis of progressively censored data from the proposed model; the same inferential methods can be extended to other distributions and censoring schemes in future work. We hope that the results and methodology discussed in this paper will be beneficial to data analysts and reliability practitioners.