

The aim of this paper is to estimate probability distribution functions with maximum entropy subject to known quantiles. The paper formulates the problem as a nonlinear optimization problem and converts it into a system of nonlinear equations by the Lagrange multipliers method. Finally, an efficient method is proposed to solve the nonlinear system. The method solves a linear programming problem in each iteration. Since linear programming problems can be solved in a reasonable time, the proposed method is faster than generic methods for solving nonlinear programming problems. Several computational experiments are provided to demonstrate the performance and validity of the proposed method.


Maximum entropy problem, Nonlinear optimization, Lagrange multipliers method, Linear programming.
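The sketch below illustrates the kind of quantile-constrained maximum-entropy problem the abstract describes. It is a minimal closed-form special case, not the authors' LP-based algorithm: when the only constraints are CDF (quantile) values at given support points, maximizing entropy spreads each block's probability mass uniformly, so the solution can be written down directly. The function name `max_entropy_pmf` and the example numbers are illustrative assumptions, not from the paper.

```python
import math

def max_entropy_pmf(n, quantiles):
    """Maximum-entropy pmf on support points 1..n under CDF constraints.

    quantiles: list of (k, alpha) pairs meaning P(X <= k) = alpha,
    sorted by k. With only quantile constraints, the entropy maximizer
    is piecewise uniform: the mass alpha_j - alpha_{j-1} assigned to a
    block of points is spread evenly over that block.
    """
    p = [0.0] * n
    # Close the last block at point n with total probability 1.
    blocks = quantiles + [(n, 1.0)] if quantiles[-1][0] < n else quantiles
    prev_k, prev_a = 0, 0.0
    for k, a in blocks:
        mass = (a - prev_a) / (k - prev_k)  # equal mass within the block
        for i in range(prev_k, k):
            p[i] = mass
        prev_k, prev_a = k, a
    return p

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(q * math.log(q) for q in p if q > 0)

# P(X <= 2) = 0.5 and P(X <= 4) = 0.8 on support {1, ..., 6}:
pmf = max_entropy_pmf(6, [(2, 0.5), (4, 0.8)])
# -> [0.25, 0.25, 0.15, 0.15, 0.1, 0.1]
```

Any other pmf satisfying the same quantile constraints (e.g. one concentrating each block's mass on a single point) has strictly lower entropy, which is what makes the piecewise-uniform solution the maximizer in this special case. The general problem treated in the paper requires the iterative LP-based method instead.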


How to Cite
Nikooravesh, Z., & Tayyebi, J. (2020). A linear programming-based approach to estimate discrete probability functions with given quantiles. Pakistan Journal of Statistics and Operation Research, 16(4), 839-849.

