Remarks on and Characterizations of 2S-Lindley and 2D-Lindley Distributions Introduced by Chesneau et al. (2020)

Chesneau et al. (2020) considered the distributions of the sum and the difference of two independent and identically distributed random variables with the common Lindley distribution. They derived, very nicely, the above-mentioned distributions and provided certain important mathematical and statistical properties, as well as simulations and applications of the new distributions. In this short note, we would like to show that the assumption of "independence" can be replaced with the much weaker assumption of "sub-independence". Then we present certain characterizations of the proposed distributions to complete, in some way, their work.


Introduction
To make this short note self-contained, we repeat some parts of our previous work, Hamedani (2013), here. We may on some occasions have asked ourselves if there is a concept between "uncorrelatedness" and "independence" of two random variables. It seems that the concept of "sub-independence" is the one: it is much stronger than uncorrelatedness and much weaker than independence. The notion of sub-independence seems important in the sense that, under the usual assumptions, Khintchine's Law of Large Numbers and the Lindeberg-Lévy Central Limit Theorem, as well as other important theorems in probability and statistics, hold for a sequence of sub-independent (s.i.) random variables. While sub-independence can be substituted for independence in many cases, it is difficult, in general, to find conditions under which the former implies the latter. Even in the case of two discrete identically distributed random variables $X$ and $Y$, the joint distribution can assume many forms consistent with sub-independence.

Limit theorems, as well as other well-known results in probability and statistics, are often based on the distribution of the sums of independent (and often identically distributed) random variables rather than the joint distribution of the summands. Therefore, the full force of independence of the summands is not required. In other words, it is the convolution of the marginal distributions which is needed, rather than the joint distribution of the summands, which, in the case of independence, is the product of the marginal distributions. The concept of sub-independence is sufficient to yield the conclusions of these theorems and results. This is precisely the reason for the statement: "why assume independence when you can get by with sub-independence."

The concept of sub-independence can help to provide a solution for some modeling problems where the variable of interest is the sum of a few components. Examples include household income, the total profit of major firms in an industry, and a regression model $Y = g(X) + \varepsilon$ where $g(X)$ and $\varepsilon$ are uncorrelated, although they may not be independent. For example, in Bazargan et al. (2007), the return value of significant wave height ($Y$) is modeled by the sum of a cyclic function of a random delay $D$, $\hat{g}(D)$, and a residual term $\varepsilon$. They found that the two components are at least uncorrelated, but not independent, and used sub-independence to compute the distribution of the return value.

Let $X$ and $Y$ be two random variables with joint and marginal cumulative distribution functions (cdfs) $F_{X,Y}$, $F_X$ and $F_Y$, respectively. Then $X$ and $Y$ are said to be independent if and only if
$$F_{X,Y}(x,y) = F_X(x)\, F_Y(y), \quad \text{for all } (x,y) \in \mathbb{R}^2, \qquad (1.1)$$
or, equivalently, if and only if
$$\varphi_{X,Y}(s,t) = \varphi_X(s)\, \varphi_Y(t), \quad \text{for all } (s,t) \in \mathbb{R}^2, \qquad (1.2)$$
where $\varphi_{X,Y}(s,t)$, $\varphi_X(s)$ and $\varphi_Y(t)$, respectively, are the corresponding joint and marginal characteristic functions (cfs). Note that (1.1) and (1.2) are also equivalent to
$$P(X \in A,\; Y \in B) = P(X \in A)\, P(Y \in B), \quad \text{for all Borel sets } A, B \subseteq \mathbb{R}. \qquad (1.3)$$

The concept of sub-independence, as far as we have gathered, was formally introduced by Durairajan (1979) and developed by Hamedani over the past 40 years. It is stated as follows: the random variables $X$ and $Y$ with cdfs $F_X$ and $F_Y$ are sub-independent (s.i.) if the cdf of $X + Y$ is given by
$$F_{X+Y}(z) = (F_X * F_Y)(z) = \int_{\mathbb{R}} F_X(z - y)\, dF_Y(y), \quad z \in \mathbb{R}, \qquad (1.4)$$
or, equivalently, if and only if
$$\varphi_{X+Y}(t) = \varphi_{X,Y}(t,t) = \varphi_X(t)\, \varphi_Y(t), \quad \text{for all } t \in \mathbb{R}. \qquad (1.5)$$

The drawback of the concept of sub-independence in comparison with that of independence has been that the former does not have an equivalent definition in the sense of (1.3), which some believe to be the natural definition of independence. We found such a definition, which is stated below for the continuous case (Definition 1.1).
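We note in passing that sub-independence implies uncorrelatedness whenever second moments are finite; the following short verification is included here for completeness. Differentiating (1.5) twice at $t = 0$, and using $\varphi_Z'(0) = i\,E[Z]$ and $\varphi_Z''(0) = -E[Z^2]$, we obtain
$$-E\!\left[(X+Y)^2\right] = \left(\varphi_X \varphi_Y\right)''(0) = \varphi_X''(0) + 2\,\varphi_X'(0)\,\varphi_Y'(0) + \varphi_Y''(0) = -E[X^2] - 2\,E[X]\,E[Y] - E[Y^2],$$
so that $E[X^2] + 2E[XY] + E[Y^2] = E[X^2] + 2E[X]E[Y] + E[Y^2]$, i.e., $E[XY] = E[X]E[Y]$ and $\mathrm{Cov}(X,Y) = 0$. The converse fails, so sub-independence lies strictly between uncorrelatedness and independence.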
We observe that the half-plane $H = \{(x, y) : x + y < 0\}$ can be expressed as a countable disjoint union of rectangles, say $H = \bigcup_{i=1}^{\infty} A_i \times B_i$. Now, let $(X, Y) : \Omega \to \mathbb{R}^2$ be a continuous random vector and, for $c \in \mathbb{R}$, let $H_c = H + \left(\tfrac{c}{2}, \tfrac{c}{2}\right) = \{(x, y) : x + y < c\}$.

Definition 1.1. The continuous random variables $X$ and $Y$ are said to be s.i. if, for every $c \in \mathbb{R}$,
$$P(X + Y < c) = \sum_{i=1}^{\infty} P\!\left(X \in A_i + \tfrac{c}{2}\right) P\!\left(Y \in B_i + \tfrac{c}{2}\right). \qquad (1.6)$$

To see that (1.6) is equivalent to (1.4), observe that
$$\text{LHS of (1.6)} = P(X + Y < c) = F_{X+Y}(c), \quad \text{while} \quad (F_X * F_Y)(c) = (P_X \times P_Y)(H_c),$$
where $P_X$, $P_Y$ are the probability measures on $\mathbb{R}$ defined by $P_X(A) = P(X \in A)$ and $P_Y(B) = P(Y \in B)$ for Borel sets $A, B$, and $P_X \times P_Y$ is the product measure.
We also observe that
$$\text{RHS of (1.6)} = \sum_{i=1}^{\infty} P_X\!\left(A_i + \tfrac{c}{2}\right) P_Y\!\left(B_i + \tfrac{c}{2}\right) = (P_X \times P_Y)(H_c),$$
which is true since the points in $H_c$ are obtained by shifting each point in $H$ over to the right by $c/2$ units and then up by $c/2$ units. Hence (1.6) holds for every $c$ if and only if $F_{X+Y}(c) = (F_X * F_Y)(c)$ for every $c$, that is, if and only if (1.4) holds.

If $X$ and $Y$ are s.i., then, unlike independence, $X$ and $\alpha Y$ are not necessarily s.i. for any real $\alpha \neq 1$. This demonstrates how weak the concept of sub-independence is in comparison with that of independence. Consider the following simple example.

Example 1.1. Let $X$ and $Y$ have the joint cf $\varphi_{X,Y}(s,t)$ constructed in Hamedani (2013), which involves an appropriate constant $\beta$ and which factors as a product of the marginal cfs along the diagonal $s = t$ but not along $s = -t$. (The characteristic function is the Fourier transform of the probability density function (pdf), so the corresponding joint pdf can be written explicitly; it involves the polynomial $p(x, y) = 6xy - 2x^2 - 2y^2 + 4x^2y^2 - 2x^3y - 2xy^3 + 1$.)
Then X and Y are s.i. standard normal random variables, and hence X + Y is normal with mean 0 and variance 2, but X and −Y are not s.i. and consequently X − Y does not have a normal distribution.
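The same phenomenon can be exhibited with discrete random variables and verified by direct computation. The following sketch is our own illustrative construction (not from Chesneau et al. (2020) or Hamedani (2013)) and assumes NumPy: it builds a joint pmf under which $X$ and $Y$, each uniform on $\{0, 1, 2\}$, are s.i. but not independent, while $X$ and $-Y$ fail to be s.i.

import numpy as np

# Our own discrete analogue of Example 1.1: a joint pmf making X and Y
# sub-independent (X + Y has the convolution distribution) but not
# independent, with X and -Y not sub-independent.
eps = 1.0 / 18.0          # any |eps| <= 1/9 works; eps = 0 gives independence
u = 1.0 / 9.0             # value of the product pmf of two uniform marginals
p = np.array([            # p[i, j] = P(X = i, Y = j), i, j in {0, 1, 2}
    [u,       u + eps, u - eps],
    [u - eps, u,       u + eps],
    [u + eps, u - eps, u],
])

assert np.allclose(p.sum(axis=1), 1 / 3)   # marginal of X is uniform
assert np.allclose(p.sum(axis=0), 1 / 3)   # marginal of Y is uniform

def dist(sign, pmf):
    """Distribution of X + sign*Y under the given joint pmf."""
    out = {}
    for i in range(3):
        for j in range(3):
            out[i + sign * j] = out.get(i + sign * j, 0.0) + pmf[i, j]
    return out

indep = np.full((3, 3), u)                 # product of the marginals

# X + Y has the convolution distribution, so X and Y are s.i. ...
d_sum, c_sum = dist(+1, p), dist(+1, indep)
assert all(np.isclose(d_sum[s], c_sum[s]) for s in d_sum)
# ... but the joint pmf is not the product of the marginals:
assert not np.allclose(p, indep)
# and X and -Y are not s.i.: X - Y misses the convolution distribution.
d_dif, c_dif = dist(-1, p), dist(-1, indep)
assert not all(np.isclose(d_dif[s], c_dif[s]) for s in d_dif)
print("s.i. but not independent; X - Y fails s.i., as in Example 1.1")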
The concept of sub-independence defined above can be extended to $n\ (> 2)$ random variables as follows: the random variables $X_1, \ldots, X_n$ are said to be s.i. if, for every subset $\{X_{\alpha_1}, \ldots, X_{\alpha_r}\}$ of $\{X_1, \ldots, X_n\}$,
$$\varphi_{X_{\alpha_1} + \cdots + X_{\alpha_r}}(t) = \prod_{j=1}^{r} \varphi_{X_{\alpha_j}}(t), \quad \text{for all } t \in \mathbb{R}.$$

Remarks
i) If the random variables $X$ and $Y$ are sub-independent and identically distributed (s.i.i.d.) with the common Lindley distribution with parameter $\theta$, the characteristic function of $X + Y$ is $\varphi_{X+Y}(t) = \varphi_X(t)\, \varphi_Y(t)$. The cf of $X$ is
$$\varphi_X(t) = \frac{\theta^2 (\theta + 1 - it)}{(\theta + 1)(\theta - it)^2}, \quad t \in \mathbb{R},$$
and since $X$ and $Y$ are s.i., we have
$$\varphi_{X+Y}(t) = \frac{\theta^4 (\theta + 1 - it)^2}{(\theta + 1)^2 (\theta - it)^4}, \quad t \in \mathbb{R},$$
which is the cf of the 2S-Lindley distribution.

ii) If the random variables $X$ and $Y$ are identically distributed (i.d.) with the common Lindley distribution with parameter $\theta$, and if $X$ and $-Y$ are s.i., the characteristic function of $X - Y$ is $\varphi_{X-Y}(t) = \varphi_X(t)\, \varphi_{-Y}(t) = \varphi_X(t)\, \overline{\varphi_X(t)}$. The cf of $X - Y$, under the assumption of s.i. of $X$ and $-Y$, is therefore
$$\varphi_{X-Y}(t) = \frac{\theta^4 \left[(\theta + 1)^2 + t^2\right]}{(\theta + 1)^2 (\theta^2 + t^2)^2}, \quad t \in \mathbb{R},$$
which is the cf of the 2D-Lindley distribution.

iii) For a detailed treatment of the concept of sub-independence, we refer the interested reader to Hamedani (2013).
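Remark i) admits a quick numerical sanity check. The following sketch is ours (assuming NumPy/SciPy); the density integrated below is the convolution of two Lindley($\theta$) pdfs, i.e., the 2S-Lindley pdf, and the check confirms that its cf equals the square of the Lindley cf, which is exactly what sub-independence requires.

import numpy as np
from scipy.integrate import quad

theta = 1.5  # arbitrary test value of the Lindley parameter

def lindley_cf(t):
    # closed-form cf of the Lindley(theta) distribution
    return theta**2 * (theta + 1 - 1j * t) / ((theta + 1) * (theta - 1j * t) ** 2)

def two_s_lindley_pdf(x):
    # convolution of two Lindley(theta) pdfs (the 2S-Lindley density)
    return theta**4 / (theta + 1) ** 2 * (x + x**2 + x**3 / 6) * np.exp(-theta * x)

def two_s_lindley_cf(t):
    # cf computed by direct numerical integration of the pdf
    re = quad(lambda x: np.cos(t * x) * two_s_lindley_pdf(x), 0, np.inf)[0]
    im = quad(lambda x: np.sin(t * x) * two_s_lindley_pdf(x), 0, np.inf)[0]
    return re + 1j * im

for t in (0.3, 1.0, 2.7):
    assert abs(two_s_lindley_cf(t) - lindley_cf(t) ** 2) < 1e-6
print("cf of X + Y matches the square of the Lindley cf, as in Remark i)")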

Characterizations of the 2S-Lindley and 2D-Lindley Distributions
To understand the behavior of the data obtained through a given process, we need to be able to describe this behavior via its approximate probability law. This, however, requires us to establish conditions which govern the required probability law. In other words, we need certain conditions under which we may be able to recover the probability law of the data. So, the characterization of a distribution is important in the applied sciences, where an investigator is vitally interested in finding out whether their model follows the selected distribution. Therefore, the investigator relies on conditions under which their model would follow a specified distribution. A probability distribution can be characterized in various directions, one of which is based on truncated moments. This type of characterization was initiated by Galambos and Kotz (1978) and followed by other authors, such as Kotz and Shanbhag (1980), Glänzel et al. (1984), Glänzel (1987), Glänzel and Hamedani (2001) and Kim and Jeon (2013), to name a few. For example, Kim and Jeon (2013) proposed a credibility theory based on the truncation of loss data to estimate the conditional mean loss for a given risk function. It should also be mentioned that characterization results are mathematically challenging and elegant. In this section, we present characterizations of the 2S-Lindley and 2D-Lindley distributions based on the conditional expectation (truncated moments) of certain functions of the random variable.
We will employ Theorem 1 of Glänzel (1987), given in Appendix A. As shown in Glänzel (1990), this characterization is stable in the sense of weak convergence. Recall that the 2S-Lindley pdf, being the convolution of two Lindley($\theta$) pdfs, is
$$f(x) = \frac{\theta^4}{(\theta + 1)^2} \left(x + x^2 + \frac{x^3}{6}\right) e^{-\theta x}, \quad x > 0. \qquad (3.1)$$

Proposition 3.1. Let $X : \Omega \to (0, \infty)$ be a continuous random variable, and let
$$q_1(x) = \left(x + x^2 + \frac{x^3}{6}\right)^{-1} \quad \text{and} \quad q_2(x) = q_1(x)\, e^{-\theta x}, \quad x > 0.$$
The random variable $X$ has pdf (3.1) if and only if the function $\xi$ defined in Theorem 1 has the form
$$\xi(x) = \frac{1}{2}\, e^{-\theta x}, \quad x > 0.$$

Proof. If $X$ has pdf (3.1), then
$$\left(1 - F(x)\right) E\!\left[q_1(X) \mid X \geq x\right] = \frac{\theta^3}{(\theta + 1)^2}\, e^{-\theta x}, \quad x > 0,$$
and
$$\left(1 - F(x)\right) E\!\left[q_2(X) \mid X \geq x\right] = \frac{\theta^3}{2(\theta + 1)^2}\, e^{-2\theta x}, \quad x > 0,$$
and hence
$$\xi(x) = \frac{E\!\left[q_2(X) \mid X \geq x\right]}{E\!\left[q_1(X) \mid X \geq x\right]} = \frac{1}{2}\, e^{-\theta x}, \quad x > 0.$$
We also have
$$\xi(x)\, q_1(x) - q_2(x) = -\frac{1}{2}\, q_1(x)\, e^{-\theta x} < 0, \quad x > 0.$$

Conversely, if $\xi$ is of the above form, then
$$s'(x) = \frac{\xi'(x)\, q_1(x)}{\xi(x)\, q_1(x) - q_2(x)} = \frac{-\tfrac{\theta}{2}\, e^{-\theta x}\, q_1(x)}{-\tfrac{1}{2}\, q_1(x)\, e^{-\theta x}} = \theta, \quad x > 0,$$
and hence $s(x) = \theta x$. Now, in view of Theorem 1, $X$ has density (3.1).
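The truncated-moment ratio in Proposition 3.1 can also be spot-checked numerically. The following sketch is ours (assuming NumPy/SciPy, with the functions $q_1$, $q_2$ given above); it verifies that $E[q_2(X) \mid X \geq x] / E[q_1(X) \mid X \geq x] = \tfrac{1}{2} e^{-\theta x}$ under pdf (3.1).

import numpy as np
from scipy.integrate import quad

theta = 2.0  # arbitrary test value of the parameter

def pdf(x):
    # pdf (3.1) of the 2S-Lindley distribution
    return theta**4 / (theta + 1) ** 2 * (x + x**2 + x**3 / 6) * np.exp(-theta * x)

q1 = lambda x: 1.0 / (x + x**2 + x**3 / 6)   # q_1 of Proposition 3.1
q2 = lambda x: q1(x) * np.exp(-theta * x)    # q_2 of Proposition 3.1

for x0 in (0.1, 0.5, 2.0):
    num = quad(lambda u: q2(u) * pdf(u), x0, np.inf)[0]  # E[q2(X); X >= x0]
    den = quad(lambda u: q1(u) * pdf(u), x0, np.inf)[0]  # E[q1(X); X >= x0]
    assert abs(num / den - 0.5 * np.exp(-theta * x0)) < 1e-6
print("E[q2|X>=x] / E[q1|X>=x] = exp(-theta*x)/2, as in Proposition 3.1")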
Corollary 3.1. Suppose $X : \Omega \to (0, \infty)$ is a continuous random variable. Let $q_1(x)$ be as in Proposition 3.1. Then $X$ has density (3.1) if and only if there exist functions $q_2$ and $\xi$ defined in Theorem 1 for which the following first-order differential equation holds:
$$\frac{\xi'(x)\, q_1(x)}{\xi(x)\, q_1(x) - q_2(x)} = \theta, \quad x > 0.$$

Corollary 3.2. The differential equation in Corollary 3.1 has the general solution
$$\xi(x) = e^{\theta x} \left[-\int \theta\, e^{-\theta x} \left(q_1(x)\right)^{-1} q_2(x)\, dx + D\right],$$
where $D$ is a constant.
Proof. If $X$ has pdf (3.1), then clearly the differential equation holds. Now, if the differential equation holds, then
$$\frac{d}{dx}\left(\xi(x)\, e^{-\theta x}\right) = -\theta\, e^{-\theta x} \left(q_1(x)\right)^{-1} q_2(x),$$
from which we arrive at
$$\xi(x) = e^{\theta x} \left[-\int \theta\, e^{-\theta x} \left(q_1(x)\right)^{-1} q_2(x)\, dx + D\right].$$
A set of functions satisfying the above differential equation is given in Proposition 3.1 with $D = 0$. Clearly, there are other triplets $(q_1, q_2, \xi)$ satisfying the conditions of Theorem 1.
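The general solution can also be verified symbolically. The following sketch is ours (assuming SymPy, with the $q_1$, $q_2$ of Proposition 3.1); for these choices the integral in Corollary 3.2 evaluates to $\tfrac{1}{2} e^{-2\theta x}$, and the check confirms that the resulting $\xi$ satisfies the differential equation of Corollary 3.1.

import sympy as sp

x, theta, D = sp.symbols("x theta D", positive=True)

q1 = 1 / (x + x**2 + x**3 / 6)     # q_1 of Proposition 3.1
q2 = q1 * sp.exp(-theta * x)       # q_2 of Proposition 3.1
# general solution of Corollary 3.2; here the integral is exp(-2*theta*x)/2
xi = sp.exp(theta * x) * (sp.exp(-2 * theta * x) / 2 + D)

# differential equation of Corollary 3.1:
# xi'(x) q1(x) / (xi(x) q1(x) - q2(x)) = theta
lhs = sp.diff(xi, x) * q1 / (xi * q1 - q2)
assert sp.simplify(lhs - theta) == 0
print("general solution satisfies the differential equation of Corollary 3.1")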
This stability theorem ensures that the convergence of distribution functions is reflected by corresponding convergence of the functions $q_1$, $q_2$ and $\xi$, respectively. It guarantees, for instance, the 'convergence' of the characterization of the Wald distribution to that of the Lévy-Smirnov distribution as $\alpha \to \infty$.
A further consequence of the stability property of Theorem 1 is the application of this theorem to special tasks in statistical practice, such as the estimation of the parameters of discrete distributions. For such purposes, the functions $q_1$, $q_2$ and, especially, $\xi$ should be as simple as possible. Since the function triplet is not uniquely determined, it is often possible to choose $\xi$ as a linear function. Therefore, it is worth analyzing some special cases which help to find new characterizations reflecting the relationship between individual continuous univariate distributions and those appropriate in other areas of statistics.
In some cases, one can take $q_1(x) \equiv 1$, which reduces the condition of Theorem 1 to $E\!\left[q_2(X) \mid X \geq x\right] = \xi(x)$, $x \in H$. We, however, believe that employing the three functions $q_1$, $q_2$ and $\xi$ will enhance the domain of applicability of Theorem 1.