The Kumaraswamy-Laplace Distribution

A generalized Laplace distribution based on the Kumaraswamy distribution is introduced. Several structural properties of the new distribution are derived, including the moments and the moment generating function. We discuss maximum likelihood estimation of the model parameters and obtain the observed and expected information matrices. A real data set is used to compare the new model with widely known distributions.


Introduction
For many years it has been known that raising a cdf G to a positive power produces another cdf, G^{\alpha}, which is richer and more flexible for data modeling. Mudholkar and Srivastava (1993) successfully applied the exponentiated Weibull (EW) distribution to analyze bathtub failure data. Gupta and Kundu (2001) concentrated on the study of the exponentiated exponential distribution and found that, at least in certain circumstances, it describes data better than the Weibull or gamma distributions. Nassar and Eissa (2003, 2004) studied some properties and Bayesian estimates of the EW distribution. Nadarajah (2004) applied the exponentiated Gumbel distribution to climate data with improved results. Later, the beta distribution, one of the most basic distributions supported on the finite range (0, 1), was used widely in both practical and theoretical generalizations in statistics (see Nadarajah, Nassar and Elmasry (2012) and Nassar and Nada (2011)). An alternative to the beta distribution, which is easier to work with, is the Kumaraswamy distribution proposed by Kumaraswamy (1980). This distribution has a simple form; its probability density function (pdf) and cumulative distribution function (cdf) are given, respectively, by

f(x) = a b x^{a-1} (1 - x^a)^{b-1}, \quad 0 < x < 1, (1)

F(x) = 1 - (1 - x^a)^b, \quad 0 < x < 1, (2)

where a > 0 and b > 0 are the shape parameters. The Kumaraswamy distribution is like the beta distribution in many ways; for example, Kumaraswamy densities are also unimodal, uniantimodal, increasing, decreasing or constant depending, in the same way as the beta distribution, on the values of its parameters. In addition, one can easily show that the Kumaraswamy distribution has the same basic shape properties as the beta distribution. However, because the cdf of the Kumaraswamy distribution has a simple closed form, it has received much attention in simulating hydrological data and in related areas.
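The closed-form cdf (2) inverts analytically, so Kumaraswamy variates can be simulated by inverse-transform sampling without special functions, unlike beta variates. A minimal Python sketch (the function names are ours, for illustration only):

```python
import numpy as np

def kuma_pdf(x, a, b):
    """Kumaraswamy pdf, Equation (1): a*b*x^(a-1)*(1-x^a)^(b-1) on (0, 1)."""
    return a * b * x**(a - 1) * (1 - x**a)**(b - 1)

def kuma_cdf(x, a, b):
    """Kumaraswamy cdf, Equation (2): 1 - (1 - x^a)^b."""
    return 1 - (1 - x**a)**b

def kuma_quantile(u, a, b):
    """Closed-form inverse cdf -- the feature that makes the Kumaraswamy
    distribution more convenient than the beta for simulation."""
    return (1 - (1 - u)**(1.0 / b))**(1.0 / a)

# inverse-transform sampling: push uniforms through the quantile function
rng = np.random.default_rng(0)
samples = kuma_quantile(rng.uniform(size=50_000), a=2.0, b=3.0)
```

The sample mean should approach the known moment b B(1 + 1/a, b); for a = 2, b = 3 this is about 0.457.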
For more detailed properties of the Kumaraswamy distribution, see Jones (2008). In this article we propose a new model, the so-called Kumaraswamy Laplace (KL) distribution. The article is outlined as follows. In Section 2 we introduce the KL distribution and provide plots of its density function. In Section 3 we derive the r-th moment and hence obtain the expected value, the variance and the moment generating function. In Section 4 we deduce the maximum likelihood estimates of the parameters and derive the information matrix. In Section 5 we provide the Renyi and Shannon entropies. Finally, in Section 6 the importance of the KL distribution is illustrated through the analysis of a real data set.

The Kumaraswamy Laplace distribution
One of the earliest distributions in probability theory was introduced by Pierre-Simon Laplace in 1774. A random variable Z has the Laplace distribution with location parameter \theta and scale parameter \lambda > 0 if its probability density function (pdf) is given by

g(z) = \frac{\lambda}{2} e^{-\lambda |z - \theta|}, \quad z \in \mathbb{R}. (3)

The cumulative distribution function becomes

G(z) = \frac{1}{2} e^{\lambda(z-\theta)} for z < \theta, and G(z) = 1 - \frac{1}{2} e^{-\lambda(z-\theta)} for z \geq \theta. (4)

Inserting (4) into (2), the cumulative distribution function of the Kumaraswamy Laplace (KL) distribution can be written as

F(z) = 1 - [1 - G(z)^a]^b. (5)

The density function corresponding to (5) is given by

f(z) = a b \, g(z) \, G(z)^{a-1} [1 - G(z)^a]^{b-1}. (6)

If we put a = b = 1 in Equation (6), it reduces to the Laplace density function given by Equation (3). If Z follows (6), we write Z ~ KL(a, b, \theta, \lambda). By taking the transformation X = \lambda(Z - \theta), Equation (6) can be written in the simpler standardized form

f(x) = a b \, g(x) \, G(x)^{a-1} [1 - G(x)^a]^{b-1}, (7)

where g and G now denote the standard Laplace pdf and cdf (\theta = 0, \lambda = 1). The graphs in Figure 1 outline different shapes of the density function for various values of the parameters.
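Equations (4)-(6) are straightforward to code, and because (5) inverts in closed form, KL variates can be drawn by inversion through the Kumaraswamy layer and the Laplace quantile. A sketch in Python (our own notation; `theta` and `lam` denote \theta and \lambda), with a numerical check that the density integrates to one:

```python
import numpy as np
from scipy.integrate import quad

def laplace_pdf(z, theta, lam):
    """Laplace pdf, Equation (3)."""
    return 0.5 * lam * np.exp(-lam * np.abs(z - theta))

def laplace_cdf(z, theta, lam):
    """Laplace cdf, Equation (4)."""
    u = lam * (np.asarray(z, dtype=float) - theta)
    return np.where(u < 0, 0.5 * np.exp(np.minimum(u, 0.0)),
                    1.0 - 0.5 * np.exp(-np.maximum(u, 0.0)))

def kl_pdf(z, a, b, theta, lam):
    """KL pdf, Equation (6): a*b*g(z)*G(z)^(a-1)*(1-G(z)^a)^(b-1)."""
    G = laplace_cdf(z, theta, lam)
    return a * b * laplace_pdf(z, theta, lam) * G**(a - 1) * (1.0 - G**a)**(b - 1)

def kl_quantile(u, a, b, theta, lam):
    """Invert Equation (5): Kumaraswamy layer first, then the Laplace quantile."""
    p = (1.0 - (1.0 - u)**(1.0 / b))**(1.0 / a)
    return np.where(p < 0.5, theta + np.log(2.0 * p) / lam,
                    theta - np.log(2.0 * (1.0 - p)) / lam)

# sanity check: the density must integrate to one
area, _ = quad(lambda z: kl_pdf(z, 1.5, 2.0, 0.0, 1.0), -np.inf, np.inf)
```

Setting a = b = 1 recovers `laplace_pdf` exactly, matching the remark after Equation (6).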

Moment generating function and moments
The most important features of the KL distribution can be studied through its moments. The r-th moment of a random variable X with density given by Equation (7) can be written as

\mu'_r = E(X^r) = \int_{-\infty}^{\infty} x^r f(x) \, dx. (8)

Considering b real non-integer, we have the power series expansion

(1 - w)^{b-1} = \sum_{l=0}^{\infty} (-1)^l \binom{b-1}{l} w^l, \quad |w| < 1. (9)

Applying this expansion to Equation (8) with w = G(x)^a, together with a second binomial expansion of G(x)^{a(l+1)-1} on the positive half-line, and defining

m_l = a b (-1)^l \binom{b-1}{l} \left(\tfrac{1}{2}\right)^{a(l+1)}, (10)

k_{l,i} = a b (-1)^{l+i} \binom{b-1}{l} \binom{a(l+1)-1}{i} \left(\tfrac{1}{2}\right)^{i+1}, (11)

we get the general form of the r-th moment about zero as

\mu'_r = \sum_{l=0}^{\infty} m_l \frac{(-1)^r \, r!}{[a(l+1)]^{r+1}} + \sum_{l=0}^{\infty} \sum_{i=0}^{\infty} k_{l,i} \frac{r!}{(i+1)^{r+1}}. (12)

From Equation (12) we can get the first moment (expected value) and the variance of the KL distribution, which are given, respectively, by

E(X) = \mu'_1 = -\sum_{l=0}^{\infty} \frac{m_l}{[a(l+1)]^2} + \sum_{l=0}^{\infty} \sum_{i=0}^{\infty} \frac{k_{l,i}}{(i+1)^2},

Var(X) = \mu'_2 - [\mu'_1]^2, \quad \text{with} \quad \mu'_2 = \sum_{l=0}^{\infty} \frac{2 m_l}{[a(l+1)]^3} + \sum_{l=0}^{\infty} \sum_{i=0}^{\infty} \frac{2 k_{l,i}}{(i+1)^3},

where m_l and k_{l,i} are defined in Equations (10) and (11).
The moment generating function can be obtained from Equation (7) by the same expansions:

M_X(t) = E(e^{tX}) = \sum_{l=0}^{\infty} \frac{m_l}{a(l+1)+t} + \sum_{l=0}^{\infty} \sum_{i=0}^{\infty} \frac{k_{l,i}}{i+1-t},

where a(1+l) + t > 0, i - t > -1, and m_l and k_{l,i} are defined in Equations (10) and (11).
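The double series (12) can be checked numerically against direct integration of the standardized density (7). A sketch, assuming the coefficient definitions (10) and (11) above; when a and b are integers the binomial coefficients vanish beyond a finite index, so the truncated sums are exact:

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import binom

def kl_pdf_std(x, a, b):
    """Standardized KL pdf, Equation (7) (theta = 0, lambda = 1)."""
    G = 0.5 * np.exp(x) if x < 0 else 1.0 - 0.5 * np.exp(-x)
    g = 0.5 * np.exp(-abs(x))
    return a * b * g * G**(a - 1) * (1.0 - G**a)**(b - 1)

def moment_series(r, a, b, n_terms=40):
    """r-th moment about zero via Equations (10)-(12)."""
    total = 0.0
    for l in range(n_terms):
        c = binom(b - 1, l)
        if c == 0.0:
            continue
        m = a * b * (-1)**l * c * 0.5**(a * (l + 1))          # Eq. (10)
        total += m * (-1)**r * factorial(r) / (a * (l + 1))**(r + 1)
        for i in range(n_terms):
            ci = binom(a * (l + 1) - 1, i)
            if ci == 0.0:
                continue
            k = a * b * (-1)**(l + i) * c * ci * 0.5**(i + 1)  # Eq. (11)
            total += k * factorial(r) / (i + 1)**(r + 1)
    return total
```

For a = b = 1 the series collapses to the standard Laplace moments (mean 0, second moment 2), and for other parameter values it agrees with `scipy.integrate.quad` applied to Equation (7).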

Estimation
Let z_1, \ldots, z_n be a random sample from the KL distribution given by Equation (6). The parameters of the KL distribution are estimated by the maximum likelihood method; let \Theta = (a, b, \theta, \lambda) be the unknown parameter vector. The log-likelihood function for \Theta is given by

\ell(\Theta) = n \log(ab) + \sum_{i=1}^{n} \log g(z_i) + (a-1) \sum_{i=1}^{n} \log G(z_i) + (b-1) \sum_{i=1}^{n} \log[1 - G(z_i)^a],

where g and G are the Laplace pdf (3) and cdf (4). The associated score function is U(\Theta) = (\partial\ell/\partial a, \partial\ell/\partial b, \partial\ell/\partial\theta, \partial\ell/\partial\lambda)^T; in particular,

\partial\ell/\partial a = \frac{n}{a} + \sum_{i=1}^{n} \log G(z_i) - (b-1) \sum_{i=1}^{n} \frac{G(z_i)^a \log G(z_i)}{1 - G(z_i)^a},

\partial\ell/\partial b = \frac{n}{b} + \sum_{i=1}^{n} \log[1 - G(z_i)^a],

and the components for \theta and \lambda follow by the chain rule. The maximum likelihood estimate (MLE) \hat{\Theta} of \Theta is obtained by setting the previous equations to zero, U(\hat{\Theta}) = 0, and solving them numerically using Mathematica. The normal approximation of the MLE of \Theta can be used for constructing approximate confidence intervals and for testing hypotheses on the parameters. We therefore derive the Fisher information matrix for interval estimation and hypothesis testing on the model parameters. The 4x4 observed information matrix is given by

J(\Theta) = \{-\partial^2 \ell / \partial \Theta_i \partial \Theta_j\}, \quad i, j = 1, 2, 3, 4,

and the elements of the expected information matrix are given by the relation I_{ij} = E[-\partial^2 \ell / \partial \Theta_i \partial \Theta_j], i, j = 1, 2, 3, 4.
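The paper solves the score equations with Mathematica; purely as an illustration, the same log-likelihood can be maximized numerically in Python. The sketch below uses simulated stand-in data (the real data of Table 1 are not reproduced here) and a general-purpose optimizer rather than the score equations:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, z):
    """Negative log-likelihood of the KL model, Equation (6)."""
    a, b, theta, lam = params
    if a <= 0 or b <= 0 or lam <= 0:
        return np.inf
    u = lam * (z - theta)
    # Laplace cdf G(z), Equation (4), written to avoid overflow
    G = np.where(u < 0, 0.5 * np.exp(np.minimum(u, 0.0)),
                 1.0 - 0.5 * np.exp(-np.maximum(u, 0.0)))
    log_g = np.log(0.5 * lam) - lam * np.abs(z - theta)
    ll = (len(z) * np.log(a * b) + log_g.sum()
          + (a - 1.0) * np.log(G).sum()
          + (b - 1.0) * np.log1p(-G**a).sum())
    return -ll if np.isfinite(ll) else np.inf

# simulated stand-in sample from KL(1.5, 2, 0, 1) via inversion of Equation (5)
rng = np.random.default_rng(1)
p = (1.0 - (1.0 - rng.uniform(size=500))**(1.0 / 2.0))**(1.0 / 1.5)
z = np.where(p < 0.5, np.log(2.0 * p), -np.log(2.0 * (1.0 - p)))

start = np.array([1.0, 1.0, np.median(z), 1.0])
fit = minimize(neg_loglik, start, args=(z,), method="Nelder-Mead",
               options={"maxiter": 5000})
```

The observed information matrix can then be approximated by a finite-difference Hessian of `neg_loglik` at the optimum, and its inverse gives approximate standard errors for the normal approximation mentioned above.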

Renyi and Shannon entropies
The entropy of a random variable is a measure of the variation of its uncertainty. In information theory, the Renyi entropy generalizes the Shannon entropy. Entropy is important in ecology, statistics, engineering, and quantum information. The Renyi entropy of order \gamma, where \gamma > 0 and \gamma \neq 1, is defined as

I_R(\gamma) = \frac{1}{1-\gamma} \log A(\gamma), \quad \text{where} \quad A(\gamma) = \int_{-\infty}^{\infty} f(x)^{\gamma} \, dx,

so for the KL distribution A(\gamma) is obtained by substituting the density (6) for f. Applying the binomial expansions of Section 3, A(\gamma) reduces to a double series, and hence the Renyi entropy is available in series form. The Shannon entropy is defined by E[-\log f(X)] and is obtained as the limit of I_R(\gamma) as \gamma \to 1.
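These definitions are easy to check numerically (our code, standardized case \theta = 0, \lambda = 1). For a = b = 1 the KL density reduces to the standard Laplace density, whose Shannon entropy is 1 + log 2 and whose Renyi entropy of order \gamma is log 2 + (log \gamma)/(\gamma - 1):

```python
import numpy as np
from scipy.integrate import quad

def kl_pdf_std(x, a, b):
    """Standardized KL pdf, Equation (7)."""
    G = 0.5 * np.exp(x) if x < 0 else 1.0 - 0.5 * np.exp(-x)
    g = 0.5 * np.exp(-abs(x))
    return a * b * g * G**(a - 1) * (1.0 - G**a)**(b - 1)

def renyi_entropy(gamma, a, b):
    """I_R(gamma) = log A(gamma) / (1 - gamma), A(gamma) = int f^gamma."""
    A, _ = quad(lambda x: kl_pdf_std(x, a, b)**gamma, -np.inf, np.inf)
    return np.log(A) / (1.0 - gamma)

def shannon_entropy(a, b):
    """H = -E[log f(X)], computed by direct integration."""
    def integrand(x):
        f = kl_pdf_std(x, a, b)
        return -f * np.log(f) if f > 0.0 else 0.0
    return quad(integrand, -np.inf, np.inf)[0]
```

The same routines evaluate the entropies for any admissible (a, b), which is useful for checking any series expression derived from the expansions of Section 3.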

Application
In this section we fit the Kumaraswamy-Laplace (KL), Beta-Laplace (BL) [see Cordeiro and Lemonte (2011)] and Laplace models to the data in Table 1. First, we estimate the unknown parameters of each model by the maximum likelihood method, and then we obtain the values of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). A summary of the computations, including the fitted parameter estimates, is given in Table 2. As the results presented in Table 2 show, the KL model attains the minimum values of AIC and BIC and therefore gives a better fit than the BL model; the new KL model provides a consistently better fit than the other models.
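The criteria reported in Table 2 are simple functions of the maximized log-likelihood. A sketch with hypothetical numbers, since the data of Table 1 and the fitted log-likelihoods are not reproduced here:

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: penalizes each parameter by log(n)."""
    return -2.0 * loglik + n_params * np.log(n_obs)

# hypothetical maximized log-likelihoods: KL has 4 parameters, Laplace has 2
n_obs = 100
kl_aic, kl_bic = aic(-180.0, 4), bic(-180.0, 4, n_obs)
lp_aic, lp_bic = aic(-195.0, 2), bic(-195.0, 2, n_obs)
```

The extra parameters of the KL model are only justified when the gain in log-likelihood outweighs the AIC/BIC penalty, which is the comparison Table 2 summarizes.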