

In regression analysis, autocorrelation of the error terms violates the ordinary least squares assumption that the errors are uncorrelated. The consequence is that the estimates of the coefficients and their standard errors will be wrong if the autocorrelation is ignored. Many tests for autocorrelation exist, so it is natural to ask which is most powerful. We use Monte Carlo methods to compare the power of the five most commonly used tests for autocorrelation, namely the Durbin–Watson, Breusch–Godfrey, Box–Pierce, Ljung–Box, and Runs tests, in two different linear regression models. The results indicate that the Durbin–Watson test performs best in the regression model without a lagged dependent variable, although its advantage over the other tests diminishes with increasing autocorrelation and sample size. For the model with a lagged dependent variable, the Breusch–Godfrey test is generally superior to the other tests.
R code for the power comparison of the five autocorrelation tests is provided.
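The article's R code is not reproduced on this page. As an illustrative sketch of the Monte Carlo approach described in the abstract, the following pure-Python fragment estimates the power of the Durbin–Watson test in a simple regression with AR(1) errors. All settings (sample size n = 30, 2000 replications, slope and intercept values, a Monte Carlo critical value in place of tabulated Durbin–Watson bounds) are this sketch's assumptions, not the paper's design.

```python
import random

def dw_stats(rho, n=30, reps=2000, seed=1):
    """Simulate `reps` Durbin-Watson statistics for y = 1 + 2x + u,
    where u follows an AR(1) process with coefficient `rho`."""
    rng = random.Random(seed)
    out = []
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        u, prev = [], 0.0
        for _ in range(n):
            prev = rho * prev + rng.gauss(0.0, 1.0)  # AR(1) errors
            u.append(prev)
        y = [1.0 + 2.0 * xi + ui for xi, ui in zip(x, u)]
        # Closed-form OLS fit for a single regressor
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b1 = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        b0 = my - b1 * mx
        e = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
        # Durbin-Watson statistic on the OLS residuals
        dw = sum((e[t] - e[t - 1]) ** 2 for t in range(1, n)) / sum(et ** 2 for et in e)
        out.append(dw)
    return out

# Monte Carlo 5% critical value under the null (rho = 0); positive
# autocorrelation pushes DW below 2, so reject in the lower tail.
null = sorted(dw_stats(0.0))
crit = null[int(0.05 * len(null))]

# Estimated power against rho = 0.5
alt = dw_stats(0.5, seed=2)
power = sum(d < crit for d in alt) / len(alt)
print(f"critical value ~ {crit:.2f}, estimated power ~ {power:.2f}")
```

Repeating this over a grid of `rho` and `n` values, and replacing the Durbin–Watson statistic with each of the other four test statistics, yields power curves of the kind the paper compares; the article's own R code covers both models and all five tests.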


Keywords: Correlated error terms; Ordinary least squares assumption; Residuals; Regression diagnostic; Lagged dependent variable

Article Details

How to Cite
Uyanto, S. S. (2020). Power Comparisons of Five Most Commonly Used Autocorrelation Tests. Pakistan Journal of Statistics and Operation Research, 16(1), 119-130.

References


  1. Asteriou, D. and Hall, S. G. (2017). Applied Econometrics. Palgrave Macmillan, New York, N.Y., 3rd edition.
  2. Box, G. E. P. and Pierce, D. A. (1970). Distribution of residual autocorrelations in autoregressive-integrated moving average time series models. Journal of the American Statistical Association, 65(332):1509 – 1526.
  3. Breusch, T. S. (1978). Testing for autocorrelation in dynamic linear models. Australian Economic Papers., 17:334 – 355.
  4. Durbin, J. and Watson, G. S. (1950). Testing for serial correlation in least squares regression i. Biometrika, 37(3/4):409 – 428.
  5. Durbin, J. and Watson, G. S. (1951). Testing for serial correlation in least squares regression ii. Biometrika, 38(1/2):159 – 177.
  6. Durbin, J. and Watson, G. S. (1971). Testing for serial correlation in least squares regression iii. Biometrika, 58(1):1 – 19.
  7. Geary, R. (1970). Relative efficiency of count of sign changes for assessing residual autoregression in least squares regression. Biometrika, 57(1):123 – 127.
  8. Godfrey, L. G. (1978). Testing against general autoregressive and moving average error models when the regressors include lagged dependent variables. Econometrica, 46(6):1293 – 1302.
  9. Greene, W. H. (2018). Econometric Analysis. Pearson Education, Inc., New York, N.Y., 8th edition.
  10. Gujarati, D. N. and Porter, D. C. (2009). Basic Econometrics. McGraw-Hill/Irwin, New York, NY, 5th edition.
  11. Harvey, A. C. (1990). The Econometric Analysis of Time Series. MIT Press, London, UK, second edition.
  12. Hyndman, R. J. and Athanasopoulos, G. (2013). Forecasting: principles and practice. OTexts, Melbourne, Australia.
  13. L’Esperance, W. L. and Taylor, D. (1975). The power of four tests of autocorrelation in the linear regression model. Journal of Econometrics, 3(1):1 – 21.
  14. Ljung, G. M. and Box, G. E. P. (1978). On a measure of lack of fit in time series models. Biometrika, 65(2):297-303.
  15. Plackett, R. L. (1949). A historical note on the method of least squares. Biometrika, 36(3/4):458 – 460.
  16. Plackett, R. L. (1950). Some theorems in least squares. Biometrika, 37(1/2):149 – 157.
  17. R Core Team (2019). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
  18. Smith, V. K. (1976). The estimated power of several tests for autocorrelation with non-first-order alternatives. Journal of the American Statistical Association, 71(356):879 – 883.
  19. Verbeek, M. (2017). A Guide to Modern Econometrics. John Wiley & Sons, Inc., Hoboken, NJ, 5th edition.
  20. Wald, A. and Wolfowitz, J. (1943). An exact test for randomness in the non-parametric case based on serial correlation. The Annals of Mathematical Statistics, 14(4):378 – 388.