A New Compound Lomax Model: Properties, Copulas, Modeling and Risk Analysis Utilizing the Negatively Skewed Insurance Claims Data Research

Analyzing the future values of anticipated claims is essential for insurance companies to avoid major losses caused by prospective future claims. This study proposes a novel three-parameter compound Lomax extension. The new density can be "monotonically declining", "symmetric", "bimodal-asymmetric", "asymmetric with right tail", "asymmetric with wide peak" or "asymmetric with left tail". The new hazard rate can take the following shapes: "decreasing-constant", "upside down-constant", "increasing", "J-shape", "bathtub (U-shape)" and "upside down-increasing". We use some common copulas, including the Farlie-Gumbel-Morgenstern copula, the Clayton copula, the modified Farlie-Gumbel-Morgenstern copula, Renyi's copula and the Ali-Mikhail-Haq copula, to present some new bivariate quasi-Poisson generalized Weibull Lomax distributions for bivariate mathematical modelling. Relevant mathematical properties are determined, including mean waiting time, mean deviation, raw and incomplete moments, residual life moments, and moments of the reversed residual life. Two actual data sets are examined to demonstrate the unique Lomax extension's usefulness. The new model provides the lowest values of the goodness-of-fit statistics for the two real data sets. The risk exposure under insurance claims data is characterized using five important risk indicators: value-at-risk, tail variance, tail-value-at-risk, tail mean-variance, and mean excess loss function. These risk indicators are calculated for the new model, and, in accordance with the five separate risk indicators, the insurance claims data are employed in risk analysis. We choose to focus on examining these data under the five primary risk indicators since they have a single left tail and only one peak. All risk indicators under the insurance claims data are addressed for numerical and graphical risk assessment and analysis.


Introduction
Every property/casualty claim procedure uses two independent random variables (RVs): the claim-size RV and the claim-count RV. These two basic claim RVs can be combined to produce the aggregate-loss RV, which represents the total claim amount generated by the underlying claim procedure. In this study, a unique distribution for the claim-size variable known as the quasi-Poisson generalized Weibull Lomax (QPGWL) model is discussed. Several actuaries employed a wide variety of parametric families of continuous distributions to simulate the size of property and casualty insurance claim amounts. Claim-size RVs take only non-negative values. Thus, for all such RVs X, Pr{X < 0} = 0, i.e., F(x) = 0 for all x < 0. The probability density function (PDF) f(x) of a continuous size-of-loss model for which the claim size is unbounded (or unlimited) from above takes on positive values over a semi-infinite interval of the form 0 ≤ x_1 < x < ∞. For positive x_2 in this interval, the portion of the distribution defined on the sub-interval (x_2, ∞) is called the long tail of the distribution. Alternatively, the part of the loss distribution defined on (x_1, x_2), extending to the left and bounded below by 0, is called the short tail of the distribution. Clearly, such probability distributions cannot be symmetric.
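The aggregate-loss construction described above can be sketched numerically. The following minimal Monte-Carlo simulation combines a claim-count RV with i.i.d. claim-size RVs; the Poisson count and exponential claim sizes are illustrative stand-ins, not the distributions used in this paper:

```python
import random

def simulate_aggregate_loss(n_periods=10000, lam=3.0, claim_mean=2.0, seed=1):
    """Monte-Carlo estimate of E[S] for the aggregate-loss RV
    S = X_1 + ... + X_N, where N is a claim-count RV and the X_i are
    i.i.d. claim-size RVs. A Poisson count (Knuth's sampler) and
    exponential claim sizes are assumed purely for illustration."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_periods):
        # Knuth's Poisson sampler for the claim count N
        L, k, p = pow(2.718281828459045, -lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        # Sum k exponential claim sizes with mean claim_mean
        total += sum(rng.expovariate(1.0 / claim_mean) for _ in range(k))
    return total / n_periods
```

By Wald's identity, E[S] = E[N]·E[X] = 3 × 2 = 6 here, and the simulated mean should land close to that value.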
According to the actuarial literature, insurance claim-size data sets frequently have positive skewness. However, this article examines and models a new collection of insurance claims data that is negatively skewed under the QPGWL model and some risk indicators. The risk exposure is an actuarial measurement of the potential loss that might happen in the future as a result of a specific action or event. As part of a review of the business's risk exposure, risks are usually ranked according to their likelihood of occurring in the future multiplied by the potential loss if they did. By ranking the likelihood of probable future losses, insurance firms can differentiate between small and large losses. Speculative risks frequently result in losses such as failures to comply with regulations, a decline in brand value, security flaws, and liability issues. Generally, the risk exposure can be expressed as r(⋅) = L × Pr(⋅), where L is the total loss of the risk occurrence and Pr(⋅) refers to the probability of the occurring risk. However, much work has been done to examine historical insurance data using time series analysis or continuous distributions. Recently, numerous actuaries have represented actual insurance data using continuous distributions, particularly those with heavy tails.
Real data have been modelled using continuous heavy-tailed probability distributions in a variety of practical domains, including economics, engineering, risk management, dependability, and actuarial sciences. The insurance data sets can be unimodal right-skewed, right-skewed with heavy tails, or left-skewed. In this paper, we show how the flexible continuous heavy-tailed QPGWL distribution can be utilized to represent left-skewed insurance claims data.
The insurance claims data present a variety of challenges despite their huge significance. The largest issues in risk analysis and its applications are identifying the quality of the data and determining the number of incomplete or missing observations; see Hogg and Klugman (1984), Lane (2000), Stein et al. (2014), and Ibragimov and Prokhorov (2017). Although the real data sets for insurance claims are typically positive and frequently have right tails or heavy right tails, we deal here with negatively skewed insurance claims data. What allowed us to do this is that the new distribution is flexible enough to accommodate and model this type of data.
Many studies employed the Lomax and lognormal distributions to model insurance payment data, and more specifically, massive insurance claim payment data. Several scholars, including Resnick (1997), have used the generalized Lomax model. Due to its monotonically decreasing density shape, the Lomax model does not offer a good fit for many actuarial applications in which the frequency distributions are hump shaped. So, the lognormal is frequently used to model these data sets in such circumstances. However, this model does not have enough flexibility to deal with negatively skewed actuarial data sets. In this work, we present the QPGWL distribution for left-skewed insurance claims real data sets to overcome this problem in the old standard models. As will be explained in more detail and plots, the probability density function (PDF) of the QPGWL model can be "monotonically decreasing", "asymmetric with right tail", "asymmetric with wide peak", "asymmetric with left tail", "symmetric" and "bimodal-asymmetric". All these characteristics motivate the QPGWL distribution for modeling insurance claims data and studying and analyzing risks accordingly. In order to model real-life data in business failure, econometrics, actuarial science, queueing theory, and internet traffic modelling, Lomax (1954) introduced his continuous heavy-tail probability distribution. In many research papers, the Lomax model is called the Pareto type-II (Pa-II) distribution. Special efforts aim to expand the Lomax distribution and its relevant extensions in applied statistics and related fields such as engineering, wealth inequality, income, medicine, biological studies, and reliability.
The Lomax model is applied for modeling real data of income and wealth (Harris, 1968; Asgharzadeh and Valiollahi, 2011), type-II progressive censored competing risks data (Cramer and Schemiedt, 2011), real data of firm sizes (Corbellini et al., 2007), reliability analysis, engineering, taxes and economics (Elgohari and Yousof, 2020a), times of failure/survival (Chesneau and Yousof, 2021), among others. Further, many other Lomax extensions can be cited, such as the exponentiated Lomax and gamma Lomax (Gupta et al., 1998; Cordeiro et al., 2015), the transmuted Topp-Leone Lomax, the Kumaraswamy Lomax (Lemonte and Cordeiro, 2013), the Burr-Hatke Lomax, the beta Lomax (Lemonte and Cordeiro, 2013), the odd log-logistic Lomax (Elgohari and Yousof, 2020a), the proportional reversed hazard rate Lomax distribution (Elgohari and Yousof, 2020) and the special generalized mixture Lomax. Other important and flexible extensions can be found in Mansour et al. (2020e) and Aboraya et al. (2022).
A random variable X has the Lomax distribution if its cumulative distribution function (CDF) is given by F(x) = 1 − (1 + x)^(−α), x > 0, where α > 0 is the shape parameter. The above CDF of the one-parameter Lomax distribution is a special case of the Burr type XII (BXII) model. Hence, many theoretical details about the Lomax model and its relationship with other related distributions can be found in Burr (1942, 1968, 1973), Lomax (1954), Burr and Cislak (1968), Harris (1968), Rodriguez (1977), Tadikamalla (1980) and Yadav et al. (2020).
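For concreteness, the one-parameter Lomax CDF above and its inverse can be coded directly (a small sketch of the standardized form; parameterizations with an additional scale parameter are also common in the literature):

```python
def lomax_cdf(x, alpha):
    """CDF of the one-parameter Lomax (Pareto type-II) distribution:
    F(x) = 1 - (1 + x)**(-alpha) for x >= 0, shape alpha > 0."""
    if x < 0:
        return 0.0  # claim-size RVs take only non-negative values
    return 1.0 - (1.0 + x) ** (-alpha)

def lomax_quantile(q, alpha):
    """Quantile function, obtained by solving F(x) = q for x."""
    return (1.0 - q) ** (-1.0 / alpha) - 1.0
```

The quantile function is used later when the value-at-risk of a fitted parametric model is needed.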
We propose and study a new compound version of the Lomax (L) distribution using the generalized Weibull Lomax (GWL) model. The CDF of the three-parameter GWL model can be expressed as in Equation (2), where G(x; ψ) is the CDF of the baseline model and ψ is the parameter vector. The CDF of the new QPGWL model is given in Equation (3). The hazard rate function (HRF) of the QPGWL extension can be obtained as the ratio of the PDF to the survival function, f(x)/[1 − F(x)]. Let X ∼ QPGWL be a RV having the PDF in (4). Figure 1 (left plot) provides some plots of the QPGWL PDF for selected parameter values. Figure 1 (right plot) gives some plots of the QPGWL HRF. Figure 1 shows that the PDF of the QPGWL model can be "monotonically decreasing", "asymmetric with right tail", "asymmetric with wide peak", "asymmetric with left tail", "symmetric" and "bimodal-asymmetric". Based on Figure 2, the HRF of the QPGWL distribution can be "decreasing-constant", "upside down-constant", "increasing", "J-shape", "bathtub (U-shape)" and "upside down-increasing".
Since presenting a novel QPGWL model is not, by itself, a sufficient motivation, it is necessary to present some strong motivations and practical justifications that highlight the importance, flexibility and applicability of this distribution. These reasons essentially concern the elasticity of the new PDF and of the associated HRF. Further, the ability of the new distribution to model and analyze risks in the field of insurance is one of the most important practical reasons for introducing it. Five key risk indicators, including the value-at-risk, tail-value-at-risk, tail variance, tail mean-variance, and mean excess loss function, are also used to describe the risk exposure associated with the left-skewed insurance claims data. These metrics are derived for the QPGWL model, and the five primary risk indicators are adopted to assess the left-skewed insurance claims data. Another motivation is the new distribution's wide range of flexibility, which allows its characteristics to be matched with those of the left-skewed insurance claims data. The QPGWL model could be useful in modeling the following cases:
I. Real data sets whose Kernel density is semi-symmetric and bimodal, as shown in Figure 3.
II. Real data sets that have no extreme observations, as shown in Figure 6.
III. Asymmetric, monotonically increasing hazard rate real data sets, as illustrated in Figure 5.
The QPGWL model proved its wide applicability in modeling against common competing Lomax extensions, as shown in the applications below.

Properties

Expanding the QPGWL density
We derive a useful linear representation for the QPGWL density function in this section. The exponentiated Lomax (exp-L) model is adopted to express the PDF in (4). Using the power series, we expand the relevant quantity; then, the PDF (4) can be expressed as in (5). Next, applying the power series expansion (6) to the corresponding quantity in (5), Equation (7) can be reduced further. Expanding the remaining quantity in a power series and inserting the resulting expression into the last equation, the QPGWL density takes the form in (9). Applying the well-known generalized binomial expansion, we obtain (10). By inserting (10) into Equation (9), the QPGWL density reduces to the linear combination (11) of exponentiated-L (exp-L) PDFs with a power parameter, where the corresponding exp-L CDF has the same power parameter.

Moments
The calculations below involve several special functions, including the complete beta function B(a, b), the incomplete beta function B(x; a, b), the complete gamma function Γ(a), the lower incomplete gamma function γ(a, x), and the upper incomplete gamma function Γ(a, x), where Γ(a) = γ(a, x) + Γ(a, x). Let Y be a RV having the exp-L family with positive power parameter defined in (11), and let X be a RV having the QPGWL model. Then, the r-th moment of the RV X is
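These special functions are standard. As a sketch, the complete beta function can be built from math.gamma and the incomplete gamma functions approximated numerically; in practice a library such as scipy.special (gammainc, gammaincc, which return regularized versions) would be used, so the trapezoid rule below is only illustrative:

```python
import math

def beta_complete(a, b):
    """Complete beta function B(a, b) = Γ(a)Γ(b) / Γ(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def lower_incomplete_gamma(s, x, n=20000):
    """Lower incomplete gamma γ(s, x) = ∫_0^x t^(s-1) e^(-t) dt,
    approximated with the trapezoid rule (assumes s > 1 so that the
    integrand vanishes at t = 0)."""
    if x <= 0:
        return 0.0
    h = x / n
    total = 0.5 * (x ** (s - 1.0) * math.exp(-x))  # half-weight endpoint at t = x
    for i in range(1, n):
        t = i * h
        total += t ** (s - 1.0) * math.exp(-t)
    return total * h

def upper_incomplete_gamma(s, x):
    """Upper incomplete gamma Γ(s, x) = Γ(s) - γ(s, x)."""
    return math.gamma(s) - lower_incomplete_gamma(s, x)
```

For s = 2 the closed form γ(2, x) = 1 − (1 + x)e^(−x) provides an easy accuracy check.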

Moment generating function (MGF)
Clearly, the MGF can be derived from Equation (10) as

Residual life (RL) and reversed residual life (RRL)
The r-th moment of the residual life of the RV X can be obtained from m_{r,t}(X) = E[(X − t)^r | X > t], t ≥ 0 and r ∈ ℕ. On the other hand, the r-th moment of the reversed residual life is M_{r,t}(X) = E[(t − X)^r | X ≤ t], t > 0 and r ∈ ℕ.
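An empirical counterpart of these conditional moments makes the definitions concrete (a plain sample-based sketch, not the paper's closed-form series expressions):

```python
def residual_life_moment(data, t, r=1):
    """Empirical r-th moment of the residual life:
    m_{r,t} = E[(X - t)^r | X > t], estimated from a sample."""
    tail = [x for x in data if x > t]
    if not tail:
        raise ValueError("no observations beyond t")
    return sum((x - t) ** r for x in tail) / len(tail)

def reversed_residual_life_moment(data, t, r=1):
    """Empirical r-th moment of the reversed residual life:
    M_{r,t} = E[(t - X)^r | X <= t], estimated from a sample."""
    head = [x for x in data if x <= t]
    if not head:
        raise ValueError("no observations at or below t")
    return sum((t - x) ** r for x in head) / len(head)
```

With r = 1, the first function is the empirical mean residual life, a quantity closely tied to the mean excess loss function used later in the risk analysis.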

BQPGWL type via AMHC
Under the "stronger Lipschitz condition" and following Ali et al. (1978), the joint CDF of the Archimedean AMHC can be expressed as F(x1, x2) = F1(x1)F2(x2)/{1 − θ[1 − F1(x1)][1 − F2(x2)]}, where the copula parameter satisfies θ ∈ [−1, 1).
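The AMH construction above can be sketched directly: the copula is evaluated on the two marginal CDF values, and any pair of marginals can be plugged in (the marginals below are placeholders supplied by the caller, not the QPGWL CDF itself):

```python
def amh_copula(u, v, theta):
    """Ali-Mikhail-Haq (AMH) Archimedean copula:
    C(u, v) = u*v / (1 - theta*(1 - u)*(1 - v)), theta in [-1, 1)."""
    if not (-1.0 <= theta < 1.0):
        raise ValueError("theta must lie in [-1, 1)")
    return (u * v) / (1.0 - theta * (1.0 - u) * (1.0 - v))

def bivariate_cdf(F1, F2, x, y, theta):
    """Joint CDF H(x, y) = C(F1(x), F2(y)) built from two marginal CDFs
    (pass the fitted marginal CDFs as callables)."""
    return amh_copula(F1(x), F2(y), theta)
```

The boundary conditions C(u, 1) = u and C(1, v) = v, and independence at θ = 0, serve as quick sanity checks.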

The key risk indicators
Probability-based distributions may provide an adequate explanation of risk exposure. The degree of risk exposure is typically expressed as one number, or at the very least a small set of numbers. These risk exposure levels, which are usually referred to as key risk indicators (KRIs), are obviously functions of a particular model. Such KRIs give actuaries and risk managers knowledge about the level of a company's exposure to particular risks. There are many KRIs that can be considered and researched, including the value-at-risk (VaR), tail-value-at-risk (TVaR), conditional value-at-risk (CVaR), tail variance (TV), and tail mean-variance (TMV), among others. The VaR, in particular, is a quantile of the distribution of total losses. The VaR indicator can be used to indicate the chance of a bad outcome at a particular probability/confidence level. Actuaries and risk managers usually concentrate on this task.
The risk exposure under the insurance claims data is described using five important risk indicators, including value-at-risk, tail-value-at-risk, tail variance, tail mean-variance, and mean excess loss function. These metrics are developed for the proposed QPGWL model. In accordance with the five separate risk indicators, the insurance claims data are employed in the risk analysis. We chose to focus on examining the insurance claims data under the five primary risk indicators since the data have a single left tail and only one peak. We were inspired to provide both a numerical and graphical risk assessment and analysis because the new distribution is flexible enough to model the insurance claims data under these risk indicators. By matching the new distribution's characteristics to those of the insurance claims data, we were further motivated.

VaR indicator
Risk exposure is an inevitable occurrence for any insurance organization. As a result, actuaries created a variety of risk indicators to assess how much a collection of assets might lose. The VaR is now one of the most widely used benchmark risk indicators for numerically assessing risk exposure. The VaR indicator measures the risk of a prospective loss for the insurance company and calculates how likely a loss of a given size is at a certain probability level. In general, the VaR estimates the amount of capital necessary to guarantee, with a specific likelihood, that the business does not officially become insolvent.
The confidence level chosen is arbitrary. Therefore, many VaR values may be considered for various levels of confidence. It can be a high percentage, like 99.95 percent for the entire company, or a lower percentage, like 95 percent, for just one unit or risk class within the insurance company. These various percentages can represent the inter-unit or inter-risk type diversification that exists.

Definition 1: Let X denote a loss RV. Then, the VaR of X at the 100q% level, say VaRq(X) or Q(q), is the 100q% quantile (or percentile) of the distribution of X. Then, based on Definition 1, we can simply write VaRq(X) = Q(q) for the QPGWL distribution, where Q(⋅) is the quantile function. For a one-year time horizon with q = 99.5%, the interpretation is that there is only a very small chance (0.5%) that the insurance company will be bankrupted by an adverse outcome over the next year. The quantity VaRq(X) does not satisfy one of the four criteria for coherence (Wirch, 1999).
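Definition 1 translates directly into code. Since the QPGWL quantile function is not reproduced here, the sketch below uses the empirical (order-statistic) quantile of a loss sample; a parametric VaR would instead invert the fitted CDF:

```python
import math

def empirical_var(losses, q):
    """Empirical VaR_q: the smallest order statistic whose empirical
    CDF is at least q, i.e. the 100q% sample quantile."""
    if not 0.0 < q < 1.0:
        raise ValueError("q must be in (0, 1)")
    xs = sorted(losses)
    # ceil(q*n) - 1 indexes the smallest order statistic with
    # empirical CDF >= q
    idx = max(0, math.ceil(q * len(xs)) - 1)
    return xs[idx]
```

For instance, on the losses 1, 2, …, 100, the 95% VaR is the 95th order statistic.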

TVaR risk indicator
The VaR indicator is widely utilized in the management of financial risk over a specified, relatively brief time period. In these situations, both gains and losses are frequently described using the normal distribution. If the distribution of gains (or losses) is restricted to the normal distribution, the quantity VaRq(X) meets all coherence conditions. The data sets for insurance claims, however, are frequently skewed, so the normal distribution is generally unsuitable for describing them; this motivates the TVaR, defined by TVaRq(X) = E[X | X > VaRq(X)] = VaRq(X) + ELq(X), where ELq(X) is the mean excess loss function evaluated at the 100q% quantile. So, TVaRq(X) is larger than its corresponding VaRq(X) by the average excess of all losses that exceed VaRq(X). In the insurance literature, TVaRq(X) has been developed independently and is also called the conditional tail expectation (Wirch, 1999). It has also been called the tail conditional expectation (TCE) or expected shortfall (ES) (Tasche, 2002; Acerbi and Tasche, 2002).
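An empirical version of the TVaR makes the definition concrete (a self-contained sketch that recomputes the empirical VaR internally; ties at the quantile are handled crudely):

```python
import math

def empirical_tvar(losses, q):
    """Empirical TVaR_q = E[X | X > VaR_q]: the mean of all losses
    strictly exceeding the empirical VaR_q (falls back to VaR_q itself
    when no observation exceeds it)."""
    xs = sorted(losses)
    var_q = xs[max(0, math.ceil(q * len(xs)) - 1)]  # empirical VaR_q
    tail = [x for x in xs if x > var_q]
    return sum(tail) / len(tail) if tail else var_q
```

By construction TVaRq ≥ VaRq, the gap being the mean excess loss described above.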

TV risk indicator
The TV risk indicator, which Furman and Landsman (2006) established, calculates the loss's deviation from the average along a tail. Explicit formulas for the TV risk indicator under the multivariate normal distribution were also developed by Furman and Landsman (2006).

TMV risk indicator
As a metric for the best portfolio choice, Landsman (2010) developed the TMV risk indicator based on the TCE risk indicator and the TV risk indicator.
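Empirical versions of the TV and TMV indicators of the two subsections above follow the same pattern; λ = 1 below is an illustrative weight in Landsman's combination of the tail expectation with a weighted tail variance:

```python
import math

def tail_indicators(losses, q, lam=1.0):
    """Empirical tail indicators for a loss sample:
    TV_q  = Var(X | X > VaR_q)        (tail variance),
    TMV_q = TVaR_q + lam * TV_q       (tail mean-variance).
    The weight lam = 1 is an illustrative, not canonical, choice."""
    xs = sorted(losses)
    var_q = xs[max(0, math.ceil(q * len(xs)) - 1)]       # empirical VaR_q
    tail = [x for x in xs if x > var_q] or [var_q]
    tvar = sum(tail) / len(tail)                          # empirical TVaR_q
    tv = sum((x - tvar) ** 2 for x in tail) / len(tail)   # empirical TV_q
    return {"VaR": var_q, "TVaR": tvar, "TV": tv, "TMV": tvar + lam * tv}
```

On the losses 1, 2, …, 100 with q = 0.9, the tail is 91…100, so TVaR = 95.5, TV = 8.25 and TMV = 103.75.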

The maximum likelihood method
The maximum likelihood method is a statistical technique for estimating the parameters of an assumed probability distribution given some observed data. This is accomplished by maximizing a likelihood function so that the observed data are as probable as possible under the assumed statistical model. The maximum likelihood estimate is the point in the parameter space at which the likelihood function is maximized. Maximum likelihood is a popular approach for making statistical inferences since its rationale is clear and adaptable. The derivative test for finding maxima can be used if the likelihood function is differentiable. The ordinary least squares estimator, for example, coincides with the maximum likelihood estimator of the linear regression model with normally distributed errors, so the first-order conditions of the likelihood function can be solved explicitly in such circumstances. However, in the majority of cases, it will be essential to use numerical techniques to locate the maximum of the likelihood function.
We represent a collection of data as a random sample drawn from an unknown joint probability distribution described in terms of a number of parameters. Finding the parameters for which the observed data have the highest joint probability is the aim of maximum likelihood estimation. Let x1, …, xn be any observed random sample (RS) from the QPGWL model. Then, the log-likelihood function ℓ can be derived and maximized directly using many common packages, such as the R software ("optim" function), or, in some cases, by solving the system of nonlinear likelihood equations obtained by differentiating ℓ with respect to each of the three parameters. The score vector components, the partial derivatives of ℓ with respect to the three parameters, are set to zero, and the resulting equations are solved simultaneously to find the maximum likelihood estimates (MLEs). For complicated models, this system can only be solved numerically using common iterative algorithms such as the Newton-Raphson algorithm. The properties of consistency and asymptotic normality are satisfied under the usual regularity conditions. In particular, the asymptotic distribution of the MLEs is multivariate normal. To construct confidence intervals (CIs), confidence regions, and various likelihood tests, we can use first-order asymptotic theory.
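As a toy illustration of the score-equation approach, the one-parameter Lomax baseline admits a closed-form MLE; the QPGWL itself requires numerical optimization (e.g. R's optim or Newton-Raphson), but the mechanics are the same:

```python
import math

def lomax_alpha_mle(data):
    """MLE of the shape alpha for the one-parameter Lomax model with
    PDF f(x) = alpha * (1 + x)**-(alpha + 1), x > 0.
    The log-likelihood is l(alpha) = n*log(alpha) - (alpha + 1) * S
    with S = sum(log(1 + x_i)); setting the score dl/dalpha =
    n/alpha - S to zero gives the closed form alpha_hat = n / S."""
    s = sum(math.log(1.0 + x) for x in data)
    return len(data) / s
```

A quick check: simulating from the Lomax with α = 2 by inverting its CDF should recover an estimate near 2.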

Applications
In this section, we examine two actual data sets in an effort to illustrate the new QPGWL model's wide applicability. The Quantile-Quantile (Q-Q) plots, Total Time in Test (TTT) plots, Non-parametric Kernel Density Estimation (NKDE) plots, and box plots are just a few examples of the valuable graphical tools that are employed.
The first data set, called the aircraft windshield data, represents the failure times of 84 aircraft windshields. The second data set, also on aircraft windshields, represents the service times of 63 aircraft windshields. Murthy et al. (2004) gave the two actual data sets. Numerous additional helpful symmetric and asymmetric data sets, with related applications to real-life data, can be found in Aryal (2021). The basic PDF shape is explored using the NKDE tool (see Figure 3). The Q-Q plot is used to determine whether the two real data sets are "normal" (see Figure 4). The TTT tool is adopted to examine the shapes of the underlying HRFs (Figure 5). The "box plot" checks for outliers (Figure 6).
It can be seen from the left panel of Figure 3 that the first data set's NKDE is left-skewed with a bimodal shape. The right panel of Figure 3 shows that the second data set's NKDE is also left-skewed with a bimodal shape. The left and right panels of Figure 4 indicate approximate "normality" for the two data sets. The left and right panels of Figure 5 suggest that the HRF of each of the two real data sets is "monotonically increasing." The left and right panels of Figure 6 show that there are no extreme values. The fits of the QPGWL are contrasted with those of numerous popular Lomax extensions, including the odd log-logistic Lomax (OLLL), special generalized mixture Lomax (SGML), reduced odd log-logistic Lomax (ROLLL), reduced Burr-Hatke Lomax (RBHL), gamma Lomax (GL), transmuted Topp-Leone Lomax (TTLL), reduced transmuted Topp-Leone Lomax (RTTLL) and beta Lomax (BL). The following goodness-of-fit (GOF) statistics are used for comparing the competitive models:
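The TTT plot used above is computed from the order statistics. A minimal sketch of the scaled TTT transform is given below; plotting the returned points against the diagonal reveals the implied hazard shape (a concave curve above the diagonal suggests an increasing HRF):

```python
def scaled_ttt(data):
    """Scaled total-time-on-test (TTT) transform: returns the points
    (i/n, G(i/n)) with
    G(i/n) = (sum of the i smallest observations + (n - i) * x_(i))
             / (sum of all observations)."""
    xs = sorted(data)
    n = len(xs)
    total = sum(xs)
    pts = []
    running = 0.0
    for i, x in enumerate(xs, start=1):
        running += x          # cumulative sum of the i smallest values
        pts.append((i / n, (running + (n - i) * x) / total))
    return pts
```

The curve always ends at (1, 1), which serves as a basic sanity check.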
The MLEs and corresponding standard errors (SEs) for the two data sets are provided in Tables 1 and 3, respectively. Results of the four GOF statistic tests for the two data sets are presented in Tables 2 and 4, respectively. For the first data set, Figure 7 shows the fitted CDF, fitted density, Kaplan-Meier survival (KMS) plot, probability-probability (P-P) plot, and estimated HRF (EHRF). For the second data set, Figure 8 displays the fitted CDF, fitted density, P-P plot, KMS plot, and EHRF.
Figure 8: Fitted CDF and PDF, P-P, KMS plots and EHRF for the second data set.

Risk analysis under insurance claims data
The temporal growth of claims through time for each appropriate exposure (or origin) period is frequently shown in the historical insurance actual data in the form of a triangle presentation. The year the insurance policy was purchased or the time period during which the loss occurred may be regarded as the exposure period. It is obvious that the origin period need not be annual. For instance, it may be monthly or quarterly origin periods. The development time of an origin period is referred to as the claim age or claim lag. Data from separate insurance is frequently combined to represent uniform company lines, division levels, or risks.
We examine the insurance claims payment triangle from a U.K. Motor Non-Comprehensive account in this paper as a practical illustration. For convenience, we set the origin period from 2007 to 2013 (see Charpentier (2014)). The insurance claims payment data frame displays the claims data in the manner in which a database would normally store it. The first column lists the origin year, which ranges from 2007 to 2013, followed by the development year and the incremental payments. It is important to note that these insurance claims data were initially examined using a probability-based distribution.
Again, for the insurance claims data, we first examine the descriptive statistics. Real data analysis can be carried out visually, quantitatively, or by combining the two. The numerical method, as well as several graphical tools such as the skewness-kurtosis plot (or the Cullen and Frey plot), are taken into consideration when analyzing initial fits of theoretical distributions such as the normal, uniform, exponential, logistic, beta, lognormal, and Weibull (see Figure 9). We have left-skewed data with a kurtosis of less than three, as shown in Figure 9.
In light of this, numerous additional graphical techniques are taken into consideration, including the NKDE approach for investigating the initial shape of the insurance claims density (see Figure 10, the top left plot), the Q-Q plot for investigating the "normality" of the current data (see Figure 10, the top right plot), the TTT plot for investigating the initial shape of the empirical HRF (see Figure 10, the bottom left plot), and the "box plot" for identifying the extreme claims (see Figure 10, the bottom right plot). Figure 10 (bottom left plot) indicates that the HRF of any model used to explain the current data should be monotonically increasing. Figure 11 presents the scattergrams for the insurance claims data. Figure 12 (left plot) presents the autocorrelation function (ACF), and Figure 12 (right plot) presents the partial autocorrelation function (partial ACF) for the insurance claims data. The ACF can be used to show how the correlation between any two signal values changes as their separation changes. The theoretical ACF does not provide any insight into the frequency content of the process; rather, it is a time domain measure of the memory of the stochastic process. It provides some information about the distribution of hills and valleys across the surface with Lag = 1; see Figure 12 (the left plot). The theoretical partial ACF with Lag = 1 is also given; see Figure 12 (the right plot). Figure 12 (the right plot) reveals that the first lag value is statistically significant, whereas the partial autocorrelations for all other lags are not statistically significant. Based on Figure 10 (the top left panel), the initial NKDE is an asymmetric density with a left tail. On the other hand, the density of the novel model also accommodates the left-tail shape; this matching is important in statistical modeling. Hence, the QPGWL model is recommended for modeling the insurance claims payment data.
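The Cullen-Frey coordinates (skewness, kurtosis) and the sample ACF used in this analysis can be computed with a few lines. These are plain sample versions; software such as R's fitdistrplus additionally applies bias corrections and bootstrapping:

```python
def moments_summary(x):
    """Sample skewness and (non-excess) kurtosis, the two coordinates
    of the Cullen and Frey plot. For the claims data in the text the
    expected pattern is negative skewness and kurtosis below 3."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2

def acf(x, lag):
    """Sample autocorrelation of the series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag))
    return ck / c0
```

A symmetric series has skewness 0, and the ACF at lag 0 is always 1.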
The five measures are estimated for the QPGWL and GWL models, and the QPGWL model is the better model for this application. Table 5 reports the KRIs for the QPGWL and GWL models. The GWL distribution was chosen for comparison because it is the baseline distribution on which the new distribution is built. For the QPGWL model, the quantity VaRq ranges from 0.06108605 at q = 60% to 0.46051700 at q = 99.9%; however, for the GWL model, the quantity VaRq ranges from 0.03405504 at q = 60% to 0.10564970 at q = 99.9%. For the QPGWL model, TVaRq ranges from 0.1277527 at q = 60% to 0.5271837 at q = 99.9%; however, for the GWL model, TVaRq ranges from 0.1007217 at q = 60% to 0.1723164 at q = 99.9%. For the GWL model, the TMVq values increase monotonically: TMVq at q = 60% < TMVq at q = 65% < … < TMVq at q = 99.9%.
Figure 9: Cullen and Frey plot for the claims data.
Figure 13 gives VaRq, TVaRq, TMVq and the corresponding Q-Q plots for the QPGWL and GWL models, respectively. Figure 13 (first column) represents VaRq, TVaRq and TMVq for the two competitive models. Figure 13 (second column) shows the Q-Q plots of VaRq, TVaRq and TMVq for the QPGWL model. Figure 13 (third column) gives the Q-Q plots of VaRq, TVaRq and TMVq for the GWL model. Each plot of Figure 13 (first column) provides a graphical comparison between the QPGWL and GWL models, and the QPGWL model has a heavier tail than the GWL distribution for all KRIs.
Figure 12: The ACF and the partial ACF for the insurance claims data.
Figure 13: VaRq, TVaRq, TMVq and the corresponding Q-Q plots for the QPGWL and GWL models, respectively.

Conclusions
The quasi-Poisson generalized Weibull Lomax distribution, a new three-parameter compound Lomax extension, is derived and examined in this study. Based on the generalized Weibull Lomax model and the compounding Poisson family, the quasi-Poisson generalized Weibull Lomax model is developed. The new density can be "monotonically declining," "symmetric," "bimodal-asymmetric," "asymmetric with right tail," "asymmetric with wide peak," or "asymmetric with left tail." The new hazard rate can take the following shapes: "decreasing-constant," "upside down-constant," "increasing," "J-shape," "bathtub (U-shape)," and "upside down-increasing." Relevant mathematical properties are determined, including mean waiting time, mean deviation, raw and incomplete moments, residual life moments, and moments of the reversed residual life. We used some common copulas, including the Farlie-Gumbel-Morgenstern copula, the Clayton copula, the modified Farlie-Gumbel-Morgenstern copula, Renyi's copula, and the Ali-Mikhail-Haq copula, to present some new bivariate quasi-Poisson generalized Weibull Lomax distributions for the bivariate mathematical modelling. Additionally, an application of the quasi-Poisson generalized Weibull Lomax distribution is reported through the analysis of two real data sets, for which the new model provides the best fits among the competing Lomax extensions. To represent count real-life data, it is suggested that a novel discrete quasi-Poisson generalized Weibull Lomax model be presented; for more details, see Aboraya et al. (2022).