http://pjsor.com/index.php/pjsor/gateway/plugin/ThesisFeedGatewayPlugin/atomPakistan Journal of Statistics and Operation Research: Thesis Abstracts2017-03-04T06:46:48-06:00Open Journal SystemsPakistan Journal of Statistics and Operation Researchhttp://pjsor.com/index.php/pjsor/thesis/view/45On some new stochastic orders and their properties in the statistical reliability theory2017-03-04T06:46:48-06:00Mervat MahdyDepartment of Statistics, Mathematics, & Insurance, College of Commerce, Benha University, Egypt<br />February, 2009<br /><br />Various concepts of stochastic comparison between random variables have been defined and studied in the literature, since they are useful in reliability modeling and in economic applications, and as mathematical tools for proving important results in applied probability. Some well-known orders that have been introduced and studied in reliability theory, such as the usual stochastic order, the hazard rate order, the mean residual life order and the mean inactivity time order, are reviewed in this dissertation. Alongside these stochastic orders, several non-parametric classes of life distributions are proposed in this dissertation to provide characterizations of the inactivity time. First, we studied the mean inactivity time, established its properties in reliability theory, and related it to the usual stochastic order, the hazard rate order, the reversed hazard rate order and the total time on test transform order. Second, we studied a new non-parametric class of life distributions based on the median of the inactivity time and examined its reliability properties. Some new results for the proposed class were given, including closure properties and characterizations. We also studied a new stochastic ordering based on the median of the inactivity time and revealed its relationship with other well-known orders. In addition, we provided characterizations of some well-known life distributions via their median inactivity time functions.
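For reference, the mean inactivity time studied above is commonly defined as follows (the notation m(t), F here is a standard choice, not necessarily the dissertation's): for a lifetime X with distribution function F,

```latex
m(t) = \mathbb{E}\left[\, t - X \mid X \le t \,\right]
     = \frac{1}{F(t)} \int_0^t F(u)\,\mathrm{d}u,
\qquad t > 0,\; F(t) > 0 .
```

So m(t) measures how long ago, on average, a unit known to have failed by time t actually failed; the median-based classes above replace this conditional mean by the conditional median.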
We further studied a new non-parametric class of life distributions based on the variance of the inactivity time, and examined the closure properties of this class under relevant reliability operations such as mixing, convolution and the formation of coherent systems. We showed that the variance inactivity time class is closed under convolution, mixing and coherent systems. Applications of inactivity time functions were also presented: we compared the mean inactivity time class and the median inactivity time for some distributions, and presented the connection between the mean inactivity time and the variance inactivity time. The problems of testing increasing mean inactivity time, median inactivity time and increasing variance inactivity time were investigated. Finally, we presented the conclusion and further extensions.2017-03-04T06:46:48-06:00http://pjsor.com/index.php/pjsor/thesis/view/44Estimation of Balanced and Unbalanced Dynamic Panel Data Models2017-02-09T00:36:14-06:00Muhammad AbdullahStatistics, Bahauddin Zakariya University, Multan<br />June, 2016<br /><br />Searching for a specific pattern in randomly collected observations on a particular phenomenon has been an intention of researchers since ancient times. The functional form and the different estimation methods of a classical model are the main tools for satisfying this intention, and the panel data model is one such attempt. Dynamic phenomena can easily be examined with panel data models, and for this dynamic adjustment we are introduced to dynamic panel data models (<i>DPDMs</i>). The specific features of <i>DPDMs</i>, however, introduce significant complications in estimation.
In both fixed and random effects situations (depending on whether the individual and time effects are assumed to be fixed or random), the lagged dependent variable, a right-hand-side regressor, is correlated with the error term even if the stochastic error term is assumed not to be serially correlated. This characteristic renders the ordinary least squares (<i>OLS</i>), least squares dummy variable (<i>LSDV</i>) and even <i>GLS</i> estimators biased and inconsistent. This dissertation mainly addresses this problem of <i>DPDMs</i>. Although a number of previous attempts have been made to deal with this problem, none has been final or perfect. Different comparisons showed that the Bias-corrected First-Differenced <i>OLS</i> (<i>BCFD</i>) estimator outperformed most estimators, such as <i>LSDV</i>, <i>GMM</i>, First-Differenced <i>OLS</i> (<i>FD</i>) and Bias-Corrected <i>LSDV</i>, in terms of bias. We proposed a new bias-corrected <i>LSDV</i> estimator (<i>BCLSDV</i>). This proposal performs approximately as well as the <i>BCFD</i> estimator in terms of bias but leads in terms of variance and <i>MSE</i>; consequently, inference based on <i>BCLSDV</i> is more consistent in size and more powerful. The best performance of the proposal has also been verified in the case of unbalanced dynamic panel data models. As additional work, analytical expressions for the biases of <i>GLS</i> estimators of <i>DPDM</i>s allowing for heteroscedasticity have also been obtained.2017-02-09T00:36:14-06:00http://pjsor.com/index.php/pjsor/thesis/view/43Application of nonparametric estimation methods for the study of the strong stability of queueing systems2016-08-10T07:46:36-05:00Aicha BarecheTechnology, University of Bejaia<br />November, 2008<br /><br />In this thesis, we prove the applicability of the strong stability method in the study of some classical queueing systems when one of their governing distributions is general and unknown.
In this case, we should use nonparametric estimation methods to estimate the unknown density function of the considered distribution. We apply the kernel density method and boundary-correction techniques (the Schuster estimator, asymmetric kernels and smoothed histograms) to measure the performance of the strong stability method in such systems. We consider two types of parameter perturbation: perturbation of the arrival flow and perturbation of the service times. In each case, we are interested in evaluating the proximity of the two considered systems, characterized by the appropriate variation distance, and in determining the approximation error on the corresponding stationary distributions. The simulation results presented in this work show the value of applying nonparametric estimation methods to verify and consolidate the hypothesis that the perturbation is small, which is made when applying the strong stability method to determine the proximity error of two queueing systems.
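As a minimal sketch of one boundary-correction technique of the kind mentioned above — a reflection-type estimator for a density supported on [0, ∞), in the spirit of the Schuster estimator — note that the bandwidth, sample size and test density below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kde_reflected(x, data, h):
    """Reflection-type kernel density estimate on [0, inf).

    Each sample point is mirrored about 0, so the mass that an ordinary
    kernel estimate would leak onto the negative half-line is folded back,
    reducing the bias at the boundary.
    """
    x = np.asarray(x, dtype=float)[:, None]
    d = np.asarray(data, dtype=float)[None, :]
    return (gaussian_kernel((x - d) / h) + gaussian_kernel((x + d) / h)).mean(axis=1) / h

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=2000)  # true density e^{-x} on [0, inf)
h = 0.2                                         # illustrative bandwidth
grid = np.linspace(0.0, 5.0, 501)
f_hat = kde_reflected(grid, sample, h)
# f_hat[0] estimates f(0) = 1 up to O(h) bias; an uncorrected kernel
# estimate would be biased towards f(0)/2 at the boundary.
```

The same idea underlies plugging an estimated service-time or inter-arrival density into the strong stability bounds when the true distribution is unknown.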
These results also underline the importance of taking boundary-effect corrections into account when determining an approximation error on the stationary distributions of two queueing systems, when the strong stability method is applied in order to substitute the characteristics of the real, complicated system with those of the idealized, simple one.2016-08-10T07:46:36-05:00http://pjsor.com/index.php/pjsor/thesis/view/39Bayesian and non-Bayesian estimation for Weibull parameters based on generalized type-ii progressive hybrid censoring scheme2016-05-27T16:18:29-05:00Ahmed ElshahhatDepartment of Mathematical Statistics, Cairo University<br />March, 2016<br /><br />Bayesian and non-Bayesian estimators have been obtained for the unknown parameters of the Weibull distribution based on the generalized Type-II progressive hybrid censoring scheme, and different special cases have been obtained. The asymptotic variance-covariance matrix and approximate confidence intervals based on the asymptotic normality of the maximum likelihood estimators have been obtained. Bayes estimates and Bayes risks have been developed under a squared error loss function using informative and non-informative priors for the unknown Weibull parameters. The estimators obtained are not available in nice closed forms, although they can easily be evaluated for a given sample using suitable numerical methods; therefore, a numerical example is considered to illustrate the proposed estimators.2016-05-27T16:18:29-05:00http://pjsor.com/index.php/pjsor/thesis/view/40Estimation of functionals of multidimensional distributions from incomplete samples2016-05-10T14:49:05-05:00Rustamjon Sobitkhonovich MuradovProbability Theory and Mathematical Statistics, National University of Uzbekistan<br />April, 2012<br /><br />This thesis constructs and investigates new copula estimators of functionals of distributions for multivariate dependent complete and censored data.
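Setting the censoring scheme aside, the baseline maximum-likelihood step for the two-parameter Weibull model in the abstract above can be sketched on complete data; the true parameter values and sample size below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)
true_shape, true_scale = 2.0, 1.5   # illustrative "unknown" parameters
data = weibull_min.rvs(true_shape, scale=true_scale, size=5000, random_state=rng)

# Maximum-likelihood fit; fixing the location at 0 leaves only the shape
# and scale of the two-parameter Weibull to be estimated numerically.
shape_hat, _, scale_hat = weibull_min.fit(data, floc=0)
```

Under generalized Type-II progressive hybrid censoring the likelihood gains censoring-dependent survival terms and must be maximized numerically, which is why the thesis resorts to numerical evaluation rather than closed forms.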
The main results are: estimators of survival functions and their mixtures, using Archimedean copulas, are constructed under one-dimensional and multidimensional random right-censoring; the uniform consistency and asymptotic Gaussianity of Archimedean copula estimators of one-dimensional survival functions and their mixtures are proved; the uniform consistency and asymptotic Gaussianity of Archimedean copula estimators of multidimensional survival functions and their mixtures are proved; and consistency results for estimators of bivariate survival functions of exponential and product structures under random right-censoring are proved.2016-05-10T14:49:05-05:00http://pjsor.com/index.php/pjsor/thesis/view/41Some Remarks on Skew t distribution (2df)2016-05-10T14:48:50-05:00Muhammad Ahsan ul HaqCollege of Statistical & Actuarial Sciences, University of the Punjab<br />May, 2015<br /><br />The skew t (2df) distribution proposed by Azzalini (1985) is a useful model for describing skewed data sets with heavy tails. Since the moments of this distribution do not exist, not much literature is available on it. Its study in this research is based on large populations using selected values of the distribution’s skew parameter. The impact of this parameter on the distribution's shape characteristics, distribution function, quartiles and other measures, including hazard curves and Shannon entropy, is determined. Estimators of the skew parameter by maximum likelihood and by moments are derived, and their efficiencies with respect to the Fisher information are investigated through simulation, drawing random samples of various sizes. In addition, for the Bayesian analysis the skew parameter is assigned a uniform prior distribution over the range (a, 0) for a<0, or (0, a) for a>0, and the impact of fixed values of z is ascertained on the relationship between the posterior mean and z for selected values of a, as well as on that between the posterior mean and ‘a’ for selected values of z.
The associated variances of the posterior distribution are also determined. The skew t distribution (2df) is applied to a data set consisting of 63 breaking strengths of fibreglass from Smith & Naylor (1987), estimating lambda by the method of maximum likelihood and the method of moments.2016-05-10T14:48:50-05:00http://pjsor.com/index.php/pjsor/thesis/view/38Stochastic decomposition property in a queuing system with reminders2016-03-05T11:25:56-06:00Mohamed BoualemTechnology, Bejaia<br />June, 2009<br /><br />In this work, we deal with retrial queueing systems with vacations. This type of system differs from classical queueing systems by the existence of two supplementary features: retrials and vacations. In the first part of this thesis, we began by updating a synthesis of the best-known works on such queues (with retrials, with vacations, and with both retrials and vacations). We placed particular emphasis on retrial policies, and we gave motivations and application fields for each type of queue considered. After that, we considered the particular example of an M/G/1 queue with classical retrial policy in which the server operates under a general exhaustive service vacation policy. We carried out an extensive stationary analysis of this system, including the existence of a stationary regime, the embedded Markov chain and the steady-state distribution of the server state. We also derived formulas for the limiting distribution of the server state. Because of the complexity of retrial queueing models, analytic results are generally difficult to obtain or are not very exploitable in practice. To resolve this problem, there are many numerical and approximation methods. In the second part of this work, we focused on monotonicity properties, which allow us to establish stochastic bounds helpful in approximating complicated models by simpler ones for which an evaluation can be made. We considered the particular example of an M/G/1 queue with constant retrial policy and server vacations.
We derived several stochastic comparison properties in the sense of strong stochastic ordering and convex ordering. These stochastic inequalities provide simple, insensitive bounds for the stationary queue length distribution.2016-03-05T11:25:56-06:00http://pjsor.com/index.php/pjsor/thesis/view/37Statistical distributions and modelling of GPS-Telemetry elephant movement data including the effect of covariates2016-03-05T11:24:11-06:00ROBERT MATHENGE MUTWIRIStatistics, University of Kwazulu Natal<br />March, 2015<br /><br />In this thesis, I investigate the application of various statistical methods to analysing GPS tracking data collected using GPS collars placed on large mammals in Kruger National Park, South Africa. Animal movement tracking is a rapidly advancing area of ecological research, and large amounts of data are being collected, with short sampling intervals between successive locations. A statistical challenge is that appropriate methods capturing most properties of the data are lacking, despite the obvious importance of such information for understanding animal movement. The aim of this study was to investigate appropriate alternative models, compare them with the existing approaches in the literature for analysing GPS tracking data, and establish appropriate statistical approaches for interpreting large-scale mega-herbivore movement patterns. The focus was on which methods are most appropriate for the linear metrics (step length and movement speed) and circular metrics (turn angles) of these animals, and on the comparison of movement patterns across herds with covariates. A four-parameter family of stable distributions was found to describe the animal movement linear metrics better, as it captured both the skewness and the heavy-tail properties of the data. The stable model performed better than the normal, Student's t and skewed Student's t models in an ARMA-GARCH modelling set-up.
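A symmetric α-stable variate — the β = 0 member of the four-parameter family above — can be simulated with the classical Chambers-Mallows-Stuck construction, which makes the heavy tails easy to contrast with a Gaussian; the α value and sample sizes below are illustrative:

```python
import numpy as np

def rvs_symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck simulation of symmetric alpha-stable variates
    (skewness beta = 0, unit scale); alpha = 2 recovers a Gaussian up to a
    factor of sqrt(2)."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit-mean exponential
    return (np.sin(alpha * u) / np.cos(u) ** (1.0 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
x = rvs_symmetric_stable(1.5, 100_000, rng)  # heavy-tailed proxy for step lengths
g = rng.standard_normal(100_000)             # Gaussian benchmark

# The stable sample yields far more extreme observations than the Gaussian,
# which is why it can capture the heavy tails of step lengths and speeds.
stable_tail = (np.abs(x) > 5).mean()
normal_tail = (np.abs(g) > 5).mean()
```

For α < 2 the tails decay like a power law, so sample moments of order ≥ α diverge — the property that motivates replacing normal or t errors with stable ones.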
The flexibility of the stable distribution was further demonstrated in a regression model and compared with the heavy-tailed t regression model. We also explored the application of a circular-linear regression model in analysing animal turn-angle data with covariates. A regression model assuming von Mises distributed turn angles was shown to fit the data well, and further areas of model development were highlighted. A couple of methods for testing the uniformity hypothesis of turn angles are presented. Finally, modelling the linear metrics with stable-distributed error terms and the turn angles with von Mises-distributed error terms is recommended for analysing animal movement data with covariates.2016-03-05T11:24:11-06:00http://pjsor.com/index.php/pjsor/thesis/view/36Control Charting Methodology for Censored Data2016-02-02T03:30:34-06:00Syed Muhammad Muslim RazaDepartment of Statistics, Quaid-e-Azam University Islamabad.<br />April, 2011<br /><br />As manufacturing technologies develop rapidly, most products are designed with high reliability. Lifetime data are therefore often collected under heavy censoring to save time and cost. An important issue in life-testing applications for industrial engineering is how to develop a control chart for monitoring the mean lifetime of products under censoring. In the current study, we deal with Type-I censored data using quality control tools. Type-I censoring occurs when an experiment with a set number of subjects or items stops at a predetermined time C, at which point any remaining subjects are right-censored. Using the conditional expected value control chart (CEV chart), a statistical quality control tool, we can obtain more reliable results and detect assignable causes more rapidly in the presence of censored data.
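The conditional-expected-value replacement behind the CEV chart above can be sketched numerically: an observation censored at time C is replaced by E[X | X > C]. The Rayleigh scale and censoring time below are illustrative assumptions, not the thesis's settings:

```python
import numpy as np

def rayleigh_cev(sigma, c):
    """E[X | X > c] for a Rayleigh(sigma) lifetime, by numerical integration
    of x*f(x) over the tail; P(X > c) has the closed form exp(-c^2/(2 sigma^2))."""
    x = np.linspace(c, c + 20.0 * sigma, 200_001)
    dx = x[1] - x[0]
    pdf = (x / sigma ** 2) * np.exp(-x ** 2 / (2.0 * sigma ** 2))
    tail_prob = np.exp(-c ** 2 / (2.0 * sigma ** 2))
    return np.sum(x * pdf) * dx / tail_prob

sigma, C = 1.0, 1.5                   # illustrative scale and censoring time
w = rayleigh_cev(sigma, C)            # always exceeds C

rng = np.random.default_rng(7)
lifetimes = rng.rayleigh(sigma, size=1000)
# Exact values kept below C; censored values replaced by E[X | X > C],
# which leaves the mean of the adjusted sample unbiased for E[X].
cev_sample = np.where(lifetimes <= C, lifetimes, w)
```

The CEM variant discussed in the thesis replaces the censored values by the conditional median rather than the conditional mean.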
Many researchers have done impressive work on the conditional expected value (CEV) control chart using different probability distributions. The exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) charts are used for dealing with lifetime distributions; therefore, the concept of censoring is combined here with EWMA and CUSUM methodologies for lifetime distributions. In this thesis, two methods are developed: an EWMA censoring method and a CUSUM censoring method. CEV EWMA and CEM EWMA methods are developed for the Rayleigh distribution, and a comparison of the results shows that the CEM method performs better than the CEV method at high censoring rates. The first method considers exponentially weighted moving average (EWMA) control charts for monitoring the mean level of Rayleigh lifetimes under Type-I censored data. A Type-I censored testing method is developed based on conditional expected values (CEV), and a new conditional expected median (CEM) method is also introduced. As the control limits are hard to find analytically, an algorithm is developed to obtain the control chart limits numerically. In the presence of censored data, the conventional Shewhart control chart can be used, ignoring the censoring, if the censoring rate is low; when the censoring rate is high, an “np” chart can be used by recording the number of censored observations. CEV and CEM Shewhart structures are also developed for dealing with moderately censored data, and their results are compared. We consider a system whose lifetime has a Rayleigh distribution with a known scale parameter. To save time and cost, all items are tested under a Type-I censoring scheme using EWMA (CEV and CEM) control charts. When all items are put on test at the initial time, an item's lifetime is recorded exactly only if it is less than or equal to a predetermined time C.
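As a generic sketch of the EWMA statistic on which the CEV and CEM charts above are built (applied here to uncensored normal observations for simplicity; λ = 0.2 and L = 3 are common textbook choices, not necessarily the thesis's values):

```python
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.2, L=3.0):
    """EWMA statistics z_t = lam*x_t + (1-lam)*z_{t-1}, started at mu0, with
    asymptotic control limits mu0 +/- L*sigma0*sqrt(lam/(2-lam)); returns the
    statistics and the index of the first out-of-control sample (or None)."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    half_width = L * sigma0 * np.sqrt(lam / (2.0 - lam))
    out = np.nonzero(np.abs(z - mu0) > half_width)[0]
    return z, (int(out[0]) if out.size else None)

rng = np.random.default_rng(3)
in_control = rng.normal(10.0, 1.0, 50)   # process on target (mean 10, sd 1)
shifted = rng.normal(11.0, 1.0, 50)      # mean shifts up by one sigma
z, alarm = ewma_chart(np.concatenate([in_control, shifted]), mu0=10.0, sigma0=1.0)
# The chart should typically raise its alarm shortly after the shift at sample 50.
```

In the censored setting of the thesis, the plotted observations would first pass through the CEV or CEM substitution, and the limits would be recalibrated numerically.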
This predetermined time C is also called the censoring time. The second method developed is a cumulative sum (CUSUM) methodology for censored data, in which the control limits of the CUSUM chart are transformed to deal with the censoring; here we deal with the scale parameter. In comparison to CEV control charts, in which the conditional expected value is used, the new CEM method using the conditional expected median provides more reliable results at high censoring rates, whereas at low censoring rates CEV control charts provide better results.2016-02-02T03:30:34-06:00http://pjsor.com/index.php/pjsor/thesis/view/35Momentum Strategies and Karachi Stock Exchange2016-01-08T08:06:52-06:00Sher KhanInstitute of Business and Management Sciences, The University of Agriculture Peshawar<br />January, 2016<br /><br />The objective of this study is to investigate the momentum effect in the Karachi Stock Exchange, taking the CAPM model for the risk analysis of investors' momentum returns. This study analyzed 16 momentum strategies based on partial rebalancing, decile, and equal-weighting techniques. Data on 83 companies listed on the KSE-100 Index from 2007 to 2014 were used for the analysis. The returns of the winner portfolio were positive in only 1 of the 16 strategies, while the returns of the zero-cost portfolio were positive in 4 of the 16 strategies. Moreover, a diminishing trend in losses was observed in 14 strategies. Our analysis confirmed that the loser portfolio alone produces the profit of the zero-cost portfolio. We also examined whether the returns were earned due to manager performance or excessive systematic risk. In all (3/3) momentum strategies, the values of beta and alpha confirmed that returns can be boosted by taking a short position in the loser portfolio against the winner portfolio.
This study concluded that winner and winner-minus-loser portfolio firms of the KSE do not follow the momentum effect, while loser portfolio firms of the KSE do. A low but significant momentum effect was found at the Karachi Stock Exchange, and these results are in line with Mohsin (2012), Ji, Griffin and Martin (2003), and Chui and Rouwenhorst (1999). Further momentum possibilities exist in the KSE if the sample is increased and daily data of the companies listed on the KSE-100 Index are used. Keywords: Karachi Stock Exchange, momentum strategies, winner portfolio, loser portfolio, zero-cost portfolios, systematic risk.2016-01-08T08:06:52-06:00http://pjsor.com/index.php/pjsor/thesis/view/10Some new second-order response surface designs robust to missing observations2015-10-06T16:49:39-05:00Tanvir AhmadStatistics, The Islamia University of Bahawalpur<br />March, 2011<br /><br />The main purpose of this research project is the study and construction of different types of second-order response surface designs that are more robust to missing data than competing designs of similar structure in the literature. Experiments designed to study the effect of some factors on a process have wide application in modern scientific research. Researchers perform a series of experimental trials which guide them in modeling the effect of the input variables on the response of the process. The problem of missing observations in a well-planned experiment has drawn the attention of many researchers in recent years. During experimentation, one cannot avoid the situation in which some observations are lost, destroyed or unavailable due to some uncontrollable reason(s). The unavailability of some observation(s) from the design usually has serious repercussions on the estimates of the assumed (polynomial) model. Missing observations, being a ubiquitous problem, also complicate the statistical analysis of data.
Hence, missing observations can affect the results of a designed experiment quite badly. The problem becomes more serious, especially in the case of one-off or high-cost experiments, where repeating the experiment is almost impossible. Designs which are comparatively more robust to missing observations can attract the user, since they are more reliable. In this research work, subset designs, constructed by Gilmour (2006), are studied for their robustness to missing observations in different experimental regions. New minimax loss subset designs are constructed, and their robustness is also studied for a single missing observation. Another class of three-level designs, the augmented pairs designs developed by Morris (2000), is also studied for robustness to missing data. Two new classes of designs, called augmented pairs minimax loss designs and augmented pairs spherical designs, are constructed following the structure of augmented pairs designs, and these new designs are also studied for missing data. Repaired resolution central composite designs were developed by Block and Mee (2001) in a spherical region of experimentation; following their main structure, new designs are constructed using the minimax loss criterion and studied for missing observations. All classes of designs discussed in this project are also compared using well-known optimality criteria.2015-10-06T16:49:39-05:00http://pjsor.com/index.php/pjsor/thesis/view/21Management of IGR in IFE south local government of osun state2015-10-06T16:49:14-05:00ADEOLA AJAYIPUBLIC ADMINISTRATION, OBAFEMI AWOWOLOWO UNIVERSITY<br />August, 2011<br /><br />This study focused on the management of Internally Generated Revenue (IGR) in Ife South Local Government. It also identified viable sources of revenue in the Local Government and examined problems militating against effective collection of revenue.
This study was necessitated by the need to ensure increased revenue generation in the Local Government. Primary and secondary sources of data were utilized for the study. The primary data were collected through structured questionnaires. Respondents were selected from political office holders and career officers on GL. 03-16 in the departments and units of Finance and Supplies, Administration, Primary Health Care, Agriculture, Town Planning, and Estate and Valuation of the Local Government; thus 50 respondents were sampled, representing 10.12 percent of the 472 staff strength of these departments and units of the local government. The questionnaires were administered using a simple technique and analyzed by the use of simple statistical techniques such as frequency distributions and percentages. Secondary data were generated from relevant textbooks, journals, Internet sources, and lecture notes delivered at various workshops, conferences and seminars in the field of revenue. The study revealed that there are many viable sources of revenue open to the local government, a myriad of problems militating against effective collection of revenue, and poor management of Internally Generated Revenue, which aptly explains why local government in Nigeria cannot be said to be people-oriented. The study concluded that the local government's share of the statutory allocation should be increased, that the local government should also intensify its efforts at increased revenue generation in order to withstand the challenges posed by the current global economic crisis, and that the Local Government should be managed by transformed and rebranded leadership for transparency and accountability.
Routine auditing and post auditing from the supervising ministry should be encouraged at the Local Government level.2015-10-06T16:49:14-05:00http://pjsor.com/index.php/pjsor/thesis/view/22Reasoning with incomplete information: within the framework of bayesian networks and influence diagrams2015-10-06T16:48:07-05:00Tracey Claire AhmedApplied Mathematics and Operational Research, Cranfield University, UK<br />January, 2009<br /><br />Human cognitive limitations make it very difficult to effectively process and rationalise information in complex situations. To overcome this limitation many analytical methods have been designed and applied to aid decision-makers in complex situations. In some cases, the information gained is comprehensive and complete. However, very often it is the case that information regarding the situation is incomplete and uncertain. In these cases it is necessary to reason with incomplete and uncertain information. The probabilistic graphical models known as Bayesian Networks and Influence Diagrams provide a powerful and increasingly popular framework to represent such situations. The research described here makes use of this framework to address a number of aspects relating to incomplete information. The methods presented are intended to provide support in areas of measuring the completeness of information, assessing the trade-off of speed versus quality of decision-making and incorporating the impact of unrevealed information as time progresses. Two measures are investigated to determine the completeness levels of influential observable information. One measure is based on mutual information. This measure is ultimately shown to fail, however, since it can result in a negative completeness value. The other measure focuses on the range reductions of either the probabilities (for the Bayesian Networks) or the utilities (for the Influence Diagrams) when observations are made. 
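For context on the mutual-information-based completeness measure above: mutual information itself is non-negative, so any negativity must arise from how the completeness measure was constructed from it. A quick discrete computation (the joint distributions are made-up two-symbol examples):

```python
import numpy as np

def mutual_information(p_xy):
    """I(X;Y) = sum p(x,y) log( p(x,y) / (p(x) p(y)) ), in nats,
    for a discrete joint distribution given as a 2-D array."""
    p_xy = np.asarray(p_xy, dtype=float)
    px = p_xy.sum(axis=1, keepdims=True)   # marginal of X
    py = p_xy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = p_xy > 0                        # 0 * log 0 terms contribute nothing
    return float(np.sum(p_xy[mask] * np.log(p_xy[mask] / (px @ py)[mask])))

# Fully dependent pair: X = Y over two symbols.
dependent = np.array([[0.5, 0.0],
                      [0.0, 0.5]])
# Independent pair: the joint factorizes as the product of its marginals.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(mutual_information(dependent))    # log(2) ≈ 0.693 nats
print(mutual_information(independent))  # 0.0
```

In a Bayesian network, analogous quantities between an observable node and the query node indicate how much observing that node would reduce uncertainty.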
Analytical models were developed to determine the trade-off between waiting for more information and making an immediate decision. A number of experiments involving participants in imaginary decision-making scenarios were also conducted to gain an understanding of how people intuitively weigh such choices. The value of unrevealed information was utilised by applying likelihood evidence. Unrevealed information relates to something we are looking for but have not yet found; the longer time passes without it being found, the more confident we can become that it is not actually there.2015-10-06T16:48:07-05:00http://pjsor.com/index.php/pjsor/thesis/view/23Development of multivariate estimators and their applications2015-10-06T16:47:44-05:00Nadeem Shafique ButtCollege of Statistical and Actuarial Sciences, University of the Punjab, Lahore<br />May, 2012<br /><br />This thesis deals with the development of new univariate and multivariate estimators for single-phase, two-phase and multi-phase sampling based on auxiliary variables as well as auxiliary attributes. Some available popular estimators are discussed in chapters 1 and 2 of this thesis. In chapter 3, new univariate estimators for two-phase sampling are proposed, extending the estimator proposed by Roy (2003). The proposed estimators use information on multiple auxiliary variables as well as on multiple auxiliary attributes, and shrinkage versions of the proposed estimators are also given. An empirical study of the proposed estimators was conducted to see their performance compared with the classical regression estimator; it was observed that the proposed estimators are always more precise than the classical regression estimator for both quantitative and qualitative auxiliary variables. In chapter 4, new multivariate estimators for two-phase sampling are proposed, which are multivariate versions of the Roy (2003) estimator.
The proposed estimators use information on multiple quantitative variables as well as multiple qualitative variables. An empirical study based on the eigenvalues of the variance-covariance matrices was also conducted to compare the performance of the proposed estimators with the estimator proposed by Ahmed, Hussin, & Hanif (2010). The results of the empirical study show that the proposed estimators perform far better than the multivariate regression estimator proposed by Ahmed et al. (2010). The multivariate estimators proposed in chapter 4, as well as those proposed by Ahmed et al. (2010), for simultaneous estimation of several study variables require that all variables depend upon the same set of auxiliary variables. This situation is not always feasible, as different response variables may depend on different sets of predictors, in which case different estimation mechanisms are required. The seemingly unrelated regression models of Zellner (1962) have been popular models for the simultaneous prediction of multiple response variables which depend on different sets of predictors. This concept has been used for the simultaneous estimation of multiple response variables which depend on different predictors, and Seemingly Unrelated Regression Estimators (SUREs) are proposed in chapter 5 of this thesis. SUREs have been developed for single-phase, two-phase and multi-phase sampling. The applicability of SUREs is much wider than that of the multivariate regression estimators available in the literature.2015-10-06T16:47:44-05:00http://pjsor.com/index.php/pjsor/thesis/view/27Statistical variation on rainfall2015-10-06T16:47:14-05:00PABITRA SAHOOSTATISTICS, COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY<br />August, 2013<br /><br />The present research, “Statistical Variation on Rainfall”, aimed at quantifying the change in rainfall across all 35 states of India. The forecast study for these states used the last five years of data (2008-2012).
The main objective of the research was therefore to determine the rainfall of all the states in this period, which helps to show how monthly, seasonal and annual rainfall varies from year to year. The value of effective rainfall is computed on a probability basis.2015-10-06T16:47:14-05:00http://pjsor.com/index.php/pjsor/thesis/view/24Estimation and prediction from exponentiated weibull distribution2015-10-06T16:46:10-05:00Mahmoud Ali SelimStatistics, Al-Azhar<br />May, 2010<br /><br />This thesis deals with problems of parameter estimation and prediction for the two-parameter exponentiated Weibull (α,θ) distribution based on lower record values. The maximum likelihood estimators of the two unknown parameters of the exponentiated Weibull distribution are derived. The asymptotic variance-covariance matrix and the sampling distribution of the two unknown parameters based on lower record values are obtained numerically. Also, a Bayesian approach has been used to obtain estimators of the parameters of the exponentiated Weibull distribution based on m lower record values. Bayes estimators have been developed under squared error and LINEX loss functions, derived using informative and non-informative prior distributions. The maximum likelihood method, the highest conditional method and the Bayesian method are used to predict the nth future lower record value based on the first m lower records from the exponentiated Weibull distribution. Numerical computations are given to illustrate the theoretical results, and a numerical comparison between the non-Bayesian and Bayesian results is discussed. Chapter I is an introductory chapter. Chapter II is devoted to the definitions and notation used in this thesis. The work developed in chapters III and IV is background information already presented by other researchers.
In chapter III the statistical characterizations of the exponentiated Weibull distribution and a literature review of the properties of the distribution are presented. Chapter IV is devoted to a literature review of the key topics used in this thesis, such as Bayesian and non-Bayesian estimation based on record values and Bayesian and non-Bayesian prediction of the nth future record value. Chapter V is concerned with the estimation problem for the two shape parameters of the exponentiated Weibull distribution based on lower record values; Bayesian and non-Bayesian prediction of the nth future lower record value from the exponentiated Weibull distribution is also obtained. At the end of the thesis, tables of results, the Mathcad program and references are listed in appendices.2015-10-06T16:46:10-05:00http://pjsor.com/index.php/pjsor/thesis/view/29Multivariate techniques in crime data analysis: an assessment of utilized and alternative statistical methods2015-10-06T16:45:34-05:00Abdulhameed Ado OsiMathematics, Ahmadu Bello University, Zaria<br />November, 2014<br /><br />The scope of crime and the concern for its prevention and control have grown considerably in the last few years in Nigeria. Discovering the variables that are salient in affecting the crime rate has therefore become crucial. In this thesis, several multivariate statistical techniques were applied to crime data from Nigeria. Specifically, Principal Component Analysis (PCA) is used to discover the distinct influential variables in identifying states with high or low crime rates, and the performances of five classification techniques (Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA), K-Nearest Neighbour Analysis (KNN), Classification Trees (CT) and Logistic Discriminant Analysis (LgDA)) are evaluated and compared in classifying the states of Nigeria as high or low crime rate (unsafe and safe).
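One of the five classifiers compared, K-Nearest Neighbour, together with the apparent-error-rate criterion used to judge classifiers, can be sketched on invented toy data (the features and labels below are hypothetical stand-ins, not the Nigerian crime data):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance) - the KNN rule."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    votes = Counter(yi for _, yi in dists[:k])
    return votes.most_common(1)[0][0]

def apparent_error_rate(X, y, k=3):
    """Fraction of training points misclassified when the rule is
    applied back to the data it was built from."""
    wrong = sum(knn_predict(X, y, xi, k) != yi for xi, yi in zip(X, y))
    return wrong / len(y)

# Invented toy features (e.g. standardized crime-rate summaries) and labels.
X = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.7), (0.1, 0.2), (0.2, 0.1), (0.15, 0.25)]
y = ["high", "high", "high", "low", "low", "low"]
```

The apparent error rate mentioned in the results (e.g. LDA's 13.9%) is exactly this resubstitution rate; it is optimistic because the classifier is scored on its own training data.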
Each method has unique assumptions about the data, so each may be appropriate in different situations. The results show that four principal components were retained using both the scree plot and Kaiser’s criterion, accounting for 75.024% of the total variation. QDA had the best overall classification performance, classifying 100% of the states correctly, followed by LDA, which had only a 13.9% apparent error rate. LgDA is recommended when the QDA assumptions fail, while CT is the recommended alternative when LDA’s assumptions are not met. Though CT’s performance is likely lower than that of LDA, it offers many advantages that make it a useful method, such as its lack of data assumptions.2015-10-06T16:45:34-05:00http://pjsor.com/index.php/pjsor/thesis/view/34Investigation of Socio-Economic Risk Factors Associated with Drug Abuse in Faisalabad Division2015-09-30T14:31:32-05:00Afshan RiazMathematics and Statistics, Agriculture University Faisalabad<br />September, 2015<br /><br />Since the late seventies, drug abuse has been spreading in Pakistan at a fast rate. It has now become a serious problem that is perhaps here to stay. In general, drug abuse has been growing rapidly in Pakistan and in South Asia due to human development problems such as poverty, lack of basic health care and illiteracy. Drugs of abuse fall into three categories: 1) depressants, which include heroin and barbiturates; 2) stimulants, which include cocaine and amphetamines; and 3) hallucinogens and others, which are inhaled, smoked, injected or snorted. The aim of the study was to investigate the risk factors associated with drug abuse in Faisalabad division. The logistic regression technique was used for the analysis of the data. On the basis of the analysis of a random sample of 352 patients, it was concluded that age, marital status, aggressive behaviour, income and educational level are the factors most strongly associated with drug abuse.
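The logistic regression technique used in such risk-factor studies can be sketched with a minimal gradient-ascent fit. The covariate coding and data below are invented for illustration; the thesis's actual covariates, coding and software are not specified here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, iters=5000):
    """Fit P(y=1|x) = sigmoid(b0 + b.x) by full-batch gradient ascent
    on the log-likelihood (average gradient per step)."""
    n = len(X)
    p = len(X[0])
    beta = [0.0] * (p + 1)                     # intercept + slopes
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            mu = sigmoid(beta[0] + sum(b * v for b, v in zip(beta[1:], xi)))
            err = yi - mu                      # score contribution
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Hypothetical coding: x1 = age group, x2 = low-income indicator,
# y = 1 for a drug-abuse case (invented toy data).
X = [(2, 0), (3, 1), (4, 1), (5, 1), (2, 0), (3, 0), (5, 1), (4, 0)]
y = [0, 0, 1, 1, 0, 0, 1, 0]
beta = fit_logistic(X, y)
```

The fitted coefficients give odds ratios via exp(beta[j]), which is how "significant risk factor" conclusions of this kind are usually read off.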
Key words: Drug abuse, Risk factors, Logistic regression, Faisalabad.2015-09-30T14:31:32-05:00http://pjsor.com/index.php/pjsor/thesis/view/33Trou spectral pour un système désordonné de gaz coloré (Spectral gap for a disordered system of colored gas)2015-09-25T15:58:34-05:00Halim ZeghdoudiMathematics, Badji-Mokhtar University- Annaba<br />September, 2010<br /><br />In this work we deal with the spectral gap and canonical measures related to a model called the colored disordered lattice gas. We follow the approach used in the work of Dermoune and Heinrich (cf. [9]). We suggest a new computation for the canonical measures. We also propose an explicit form of the spectral gap for the colored disordered lattice gas of exclusion processes, which plays an important role in the study of the hydrodynamic limit.2015-09-25T15:58:34-05:00http://pjsor.com/index.php/pjsor/thesis/view/32Rank Set Sampling in Improving the Estimates of Simple Regression Model2015-08-25T02:59:14-05:00M IQBAL JEELANI BHATDivision of Agricultural Statistics, SKUAST- Kashmir, SKUAST-Kashmir<br />January, 2015<br /><br />The present study was carried out on ranked set sampling with a view to increasing the efficiency of the estimate of the population mean. The basic premise for ranked set sampling (RSS) is an infinite population under study and the assumption that a set of sampling units drawn from the population can be ranked by certain means rather cheaply, without actual measurement of the variable of interest, which might be costly and/or time-consuming. The essence of RSS is similar to classical stratified sampling; RSS can be considered as post-stratifying the sampling units according to their ranks in a sample. In the present study, simple linear regression models were considered with respect to samples taken using simple random sampling (SRS), systematic sampling (SYS) and ranked set sampling (RSS).
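The RSS procedure just described can be sketched in a minimal simulation. For set size m, each cycle draws m random sets of m units, ranks each set, and measures only the i-th ranked unit of the i-th set; here ranking uses the true values, whereas in practice it would rely on a cheap auxiliary ranking.

```python
import random
import statistics

def rss_sample(population, m, cycles, rng):
    """One ranked set sample of size m * cycles: per cycle, draw m
    random sets of m units each, sort (rank) each set, and keep only
    the i-th order statistic of the i-th set."""
    measured = []
    for _ in range(cycles):
        for i in range(m):
            s = sorted(rng.sample(population, m))
            measured.append(s[i])   # measure one unit per set
    return measured

rng = random.Random(7)
pop = [rng.gauss(50, 10) for _ in range(10_000)]
est = statistics.mean(rss_sample(pop, m=4, cycles=25, rng=rng))
```

The mean of the measured units is an unbiased estimator of the population mean with smaller variance than an SRS mean of the same size, which is the efficiency gain the abstract exploits.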
It was found that the coefficient of determination obtained from the regression model based on the ranked set sample was higher than under the other two sampling schemes. Root mean square error, p-values and the coefficient of variation were much lower in the ranked-set-based regression model than in the others. Kernel density curves were more symmetric for the ranked set sample than for SRS and SYS. Using a resampling technique (jackknifing), there was consistency in the measures R2, Adj R2 and RMSE in the case of RSS as compared to SRS and SYS. Ranked set sampling is then introduced within the framework of stratified sampling: rather than selecting a simple random sample within each stratum, as is done in stratified simple random sampling (SSRS), a ranked set sample within each stratum is taken. From the simulation results it is concluded that RSS, when used in place of SRS in the final stage of stratified sampling, can provide considerably more accurate estimates of population means. New ratio estimators for RSS, based on various combinations of known values of the deciles, median, quartile deviation, coefficient of skewness, kurtosis and correlation coefficient of the auxiliary variable, were introduced. The modified ratio estimators were more efficient than the classical ratio estimators, and from various simulation results it was found that the efficiency of the RSS estimators decreases as the correlation coefficient decreases and increases as the set size m increases. Estimation of the population mean under non-response is also studied under ranked set sampling. Some new allocation schemes were considered in order to study their effect on the sampling variance, and they were compared with the existing allocation schemes.
In most situations, under different combinations of the non-response rate and the inverse ratio of the sub-sampled non-response class, allocation schemes depending upon knowledge of the stratum sizes, non-response rates and mean squares of the non-response groups produce more precise estimates than proportional allocation and other allocations based on knowledge of the response and non-response rates only. From the results it is concluded that, in addition to knowledge of the strata sizes, knowledge of the non-response rates and of the mean squares among non-response groups while allocating the sample to different strata certainly adds to the precision of the estimate. Computer programs were prepared using R and the analyses were carried out as per the objectives. In the preliminary study, regression analysis and regression diagnostics were carried out in SAS, while the simulation was carried out using the mvtnorm library in R. With the help of R, new functions such as drss(m,r), varwts(n,h), makeAlloc(n,m) and ratio.est(n,N(x,y)) were developed. All these functions were run on real data sets generated from forestry and horticultural crops.2015-08-25T02:59:14-05:00http://pjsor.com/index.php/pjsor/thesis/view/31On Some Aspects of Small Area Estimation with Bayesian Approach Utilizing SAS and R Software2015-01-16T02:21:30-06:00Nageena NazirStatistics, SKUAST-K<br />June, 2014<br /><br />The demand for reliable small area estimates derived from survey data has increased greatly in recent years due to their growing use in formulating policies and programs, allocating government funds, regional planning, small area business decisions and other applications. Traditional direct estimates may not provide acceptable precision for small areas because sample sizes are seldom large enough in many small areas of interest.
This makes it necessary to borrow information across related areas through indirect estimation based on models, using auxiliary information such as recent census data and current administrative data. Methods based on models are now widely accepted. The indirect estimates, obtained using implicit or explicit models, are usually more reliable than the direct survey estimates. To draw inferences from these models, one can use a Bayesian or a frequentist approach. Moreover, in almost all situations the posterior moments involve multi-dimensional integration, and consequently closed-form expressions cannot be obtained; to overcome the computational difficulties one needs to apply computer-intensive Markov chain Monte Carlo (MCMC) methods. This work deals with some aspects of small area estimation with a Bayesian approach using SAS and R software. Direct, synthetic and composite estimators are obtained on a real agricultural data set, and the results from these estimators are compared in terms of average relative bias, average squared relative bias, average absolute bias, average squared deviation and the empirical mean square error. It was found that the composite estimator performed better than the direct and synthetic estimators. Area level and unit level models are used to draw inferences for small areas when the variable of interest is continuous. New prior distributions for the variance component are proposed and evaluated for both models. Laplace approximation is used to obtain accurate approximations to the posterior moments. Results from the two models are compared in terms of average relative bias, average squared relative bias and average absolute bias, and numerical results obtained on a real agricultural data set highlight the superiority of the proposed prior over the uniform prior. The basic linear mixed effects model is also extended to allow heteroscedastic correlated within-group errors.
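The composite estimator compared in this abstract is, in its simplest form, a weighted combination of the direct and synthetic estimates. A minimal sketch follows; the sample-size-based weight used here is one common illustrative choice, not necessarily the thesis's, and the numbers are invented.

```python
def composite_estimate(direct, synthetic, phi):
    """Composite small area estimator: the weighted combination
    theta_hat = phi * direct + (1 - phi) * synthetic, 0 <= phi <= 1."""
    return phi * direct + (1.0 - phi) * synthetic

# Hypothetical small area: unstable direct estimate, stable synthetic one.
direct, synthetic = 120.0, 100.0
n_i, k = 10, 40                  # area sample size and a tuning constant
phi = n_i / (n_i + k)            # one common sample-size-based weight
est = composite_estimate(direct, synthetic, phi)
```

With a small area sample, phi is small and the estimate shrinks toward the synthetic value; as n_i grows, the composite approaches the direct estimate, which is the bias-variance trade-off the comparisons in the abstract measure.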
The lme() function of the nlme library is used to fit the extended linear mixed effects model, and its various capabilities are illustrated through examples. It has been shown that the estimation and computational methods for simple linear mixed effects models can be applied to the extended model, with the variance-covariance structure of the within-group errors decomposed into two independent components: a variance structure and a correlation structure. The methods discussed above are illustrated practically with the help of SAS and R software on the basis of the newly developed functions piest(), composite(), relativebias(), absolute bias() and Area.model.HBll(). Two functions were also developed in SAS to obtain the EBLUP and HB estimates for the area level and unit level models. Both functions consist of a number of SAS statements, utilizing a number of SAS procedures, viz. PROC MIXED, PROC IML, PROC RANDOM, PROC MCMC and PROC PRINT. The lme() function of the nlme library of R is used to fit the extended linear mixed effects models, illustrating its various capabilities through examples on a real data set.
All these functions were run on a real agricultural apple production data set obtained through a pilot survey project in District Baramulla.2015-01-16T02:21:30-06:00http://pjsor.com/index.php/pjsor/thesis/view/30On The Generalized Kumaraswamy Distribution2015-01-16T02:21:05-06:00Mohamed Ali AhmedMathematical Statistics, Cairo University<br />June, 2012<br /><br />This thesis focuses on the Kumaraswamy-Gumbel minimum distribution as a special distribution from the class of Kw-G distributions; properties and parameter estimation methods of the Kumaraswamy-Gumbel minimum distribution are also studied.2015-01-16T02:21:05-06:00http://pjsor.com/index.php/pjsor/thesis/view/28Improved Estimation Strategies for Coefficients of Variation in m population: A simulation study2014-08-12T15:14:25-05:00Ahmad Ali FarooqiMathematics and Statistics, University of Windsor, Windsor, Ontario, Canada<br />October, 2007<br /><br />In this study, the problem of estimating the coefficient of variation is considered when it is a priori suspected that all the coefficients of variation may be close to each other. Combining the data from all the samples leads to more efficient estimators of the coefficients of variation. We propose a basis for optimally combining estimation problems when there is uncertainty concerning the appropriate statistical model-estimator to use in representing the data sampling process. The objective here is to produce natural adaptive estimators. Based on our simulation study, we demonstrate that the suggested shrinkage estimators outperform the benchmark estimator.2014-08-12T15:14:25-05:00http://pjsor.com/index.php/pjsor/thesis/view/26An EOQ model with time dependent Weibull Deterioration and Trended Demand2013-07-02T17:39:14-05:00Smaila Samuel SanniStatistics, University of Nigeria, Nsukka<br />December, 2012<br /><br />A single-item economic order quantity model is presented in which inventory is depleted mainly due to demand and partly due to deterioration.
The rate of deterioration is taken to be time dependent, with the time to deterioration assumed to follow a three-parameter Weibull distribution; the demand rate is a quadratic function of time; and shortages are allowed in the inventory and are completely backlogged. The Weibull instantaneous rate function describes different deterioration situations, while the quadratic demand function depicts the various phases of market demand. We provide simple, analytically tractable procedures for deriving the model and also establish the necessary and sufficient conditions for the optimal replenishment policy for the inventory model. Numerical examples are given to illustrate the solution procedure, and sensitivity analysis is conducted to evaluate the responsiveness of the proposed model to changes in the model parameters.2013-07-02T17:39:14-05:00http://pjsor.com/index.php/pjsor/thesis/view/20Evaluation of the Risk Factors for Osteoporosis2012-02-09T12:21:59-06:00Madeeha ShahbazCollege of Statistical and Actuarial Sciences, University of the Punjab, Lahore<br />January, 2012<br /><br /><b>An unmatched case-control study</b> based on five hundred and eight subjects aged forty years and above was conducted, with subjects chosen by systematic random sampling from Mayo Hospital Lahore, Pakistan, during the period Aug 26, 2011 to Nov 29, 2011. The two hundred and fifty-four cases and two hundred and fifty-four controls, comprising 121 males and 387 females, underwent quantitative ultrasonography according to the defined criteria and were interviewed to determine the risk factors for osteoporosis. Bone mineral density (BMD) was assessed via the speed of sound using a quantitative ultrasound device. Binary logistic regression was used to determine the independent predictors of being osteoporotic.
Educational status, family history, BMI status, anticoagulant drugs, rheumatoid arthritis, anorexia nervosa, menopausal status, premature menopause and hysterectomy emerged as significant risk factors associated with osteoporosis. Osteoporosis was significantly associated with illiteracy, a positive family history of osteoporosis and BMI < 22. Respondents who used heparin, aspirin, Dispirin or Loprin (anticoagulant drugs) were at greater risk of developing osteoporosis. Postmenopausal females were at greater risk of developing osteoporosis than premenopausal females; this is not surprising, since bone mass is maintained by estrogen and the abrupt decline in the level of this hormone leads to bone loss. Females who had premature menopause (before 45 years of age) and those who had undergone hysterectomy were at higher risk of being osteoporotic. Diagnosis at an early stage can prevent the development of osteoporosis. OCP use was found to be significantly protective. Gender, age, area of living, lack of daily exercise, lack of calcium and vitamin D intake, excess salt intake, tea (caffeine), huqqa, alcohol, COPD, asthma, breast cancer and bone tumors were observed to be insignificant.2012-02-09T12:21:59-06:00http://pjsor.com/index.php/pjsor/thesis/view/19Statistical Inference for the Simple and Mixture of Laplace Distribution via Bayesian Approach2011-10-28T14:10:18-05:00Sajid AliStatistics, Quaid-I-Azam University<br />September, 2010<br /><br />The aim of the current study is to explore heterogeneous populations using the simple and the two-component mixture of the Laplace probability distribution via a Bayesian approach, for both censored and uncensored data; such models can be applied to various real-world problems.
In the last few decades there has been growing interest in the construction of flexible parametric classes of probability distributions in the Bayesian as compared with the classical approach. Various skewed and high-kurtosis distributions have appeared in the literature for data analysis and modeling. In particular, various forms of the Laplace distribution have been introduced and applied in several areas, including medical science, environmental science, communications, economics, engineering and finance, among others. In this dissertation we consider the Type-I mixture of the Laplace distribution. A comprehensive simulation scheme, covering a large number of parameter points, is followed to highlight the properties and performance of the estimates in terms of sample size, censoring rate (fixed failure time), different types of loss functions and the mixing proportion of the components, using informative and noninformative priors. A Type-IV sample consisting of ordinary Type-I, right and left censored observations from a Type-I mixture is considered. The proposed informative Bayes estimators emerge as advantageous in terms of their lower posterior risk. A Bayesian analysis of real-life mixture data is conducted as an application of the proposed mixture, and some interesting observations and comparisons are noted. A simulated mixture data set with censored observations is generated by probabilistic mixing for computational purposes. Closed-form expressions for the Bayes estimators and their posterior risks are derived for the censored sample as well as for the complete sample. The complete-sample expressions for the ML estimates and for their risks are derived, and the components of the information matrix are constructed as well.
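Among the loss functions considered in Bayesian analyses of this kind, the asymmetric LINEX loss L(d, t) = exp(a(d - t)) - a(d - t) - 1 gives the Bayes estimator d* = -(1/a) log E[exp(-a t)]. A sketch using Monte Carlo posterior draws follows; the normal posterior here is only a stand-in, not the thesis's Laplace-mixture posterior.

```python
import math
import random
import statistics

def bayes_linex(draws, a):
    """Bayes estimator under LINEX loss, estimated from posterior
    draws: d* = -(1/a) * log E[exp(-a * theta)]."""
    m = statistics.mean(math.exp(-a * t) for t in draws)
    return -math.log(m) / a

rng = random.Random(1)
draws = [rng.gauss(2.0, 0.5) for _ in range(50_000)]  # stand-in posterior
sel = statistics.mean(draws)     # squared-error Bayes estimate: posterior mean
lin = bayes_linex(draws, a=1.0)  # a > 0 penalizes overestimation harder
```

For a normal posterior N(mu, sigma^2) the LINEX estimator is mu - a*sigma^2/2, so lin sits a little below the posterior mean, illustrating how the choice of loss function moves the estimate.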
Although the Laplace distribution is well known and applied in various fields, and mixtures of it have also been considered, the Bayesian analysis of the simple and mixture Laplace distributions as focused on in this thesis has not been considered earlier in the literature. An overall comparison of all the mixtures, using various types of informative and noninformative priors under different types of loss function, is presented, along with suggestions for possible future extensions of this work.2011-10-28T14:10:18-05:00http://pjsor.com/index.php/pjsor/thesis/view/18Designs Robust to Neighbor Effects with Minimum Number of Blocks2011-10-05T11:49:45-05:00Rashid AhmedStatistics, The Islamia University of Bahawalpur<br />December, 2010<br /><br />In a large part of the literature on the design of experiments, observations have been assumed to be uncorrelated. Herzberg (1982) realized that observations with correlated error structures are unavoidable and that serious problems can occur when conventional designs and analyses are used in such situations. Observations are correlated because of the nature of the plots, the layout of the plots, cumulative effects through time, pest infestations from neighboring plots, and so on. Experiments in agriculture, horticulture, forestry, serology and industry often show neighbor effects. Neighbor effects mean that the response on a given plot is affected by the treatments on neighboring plots as well as by the treatment applied to that plot. The design strategy of a statistical experiment is influenced to a large extent by the nature of the neighbor effects, and in their presence designs robust to neighbor effects are highly desirable. Neighbor balanced designs ensure that treatment comparisons are least affected by neighbor effects; therefore, neighbor balanced designs are robust to neighbor effects. In this study, efforts are made to construct one-dimensional neighbor balanced designs.
If the observations are affected only by the treatments applied to the adjacent neighbors, then nearest (first order) neighbor balanced designs are robust to neighbor effects, and several algorithms are developed to construct such designs. In some situations the observations are affected not only by adjacent neighbors but also by neighbors that are second, third, ..., (k-1)th distance apart; some algorithms are also proposed to construct second and higher order neighbor balanced designs. When neighbor designs require a large number of blocks, partially neighbor balanced designs and generalized neighbor designs are desirable, and some algorithms are developed to generate these designs. Our proposed partially neighbor balanced designs save at least 70% of the experimental material by relaxing at most 20% of the neighbor-balance property. The number of blocks required for nearest neighbor balanced designs is also reduced by introducing extra treatment(s). Lastly, universally optimal designs are constructed.2011-10-05T11:49:45-05:00http://pjsor.com/index.php/pjsor/thesis/view/17A Study of Portals Evaluation Criteria and Types of Personality in E-Business Services2011-09-30T12:26:29-05:00Mai Mohammed ShoumanInformation System, Zagazig University<br />August, 2009<br /><br />The Internet has become one of the most commonly used technologies all over the world, and businesses can use it to help their customers obtain the information and services they need electronically. Although e-business applications have many benefits, most of them lose money. For an e-business application to succeed, the business should have a web portal through which it provides its products and services. Evaluation of web portals is an important topic in web engineering, and the satisfaction of users is a primary goal for those involved in the development and operation of a web portal.
Different users cannot be treated the same, owing to the diversity of people, who have different drives, abilities and personalities. Four basic types of personality (temperaments) exist: Choleric, Sanguine, Melancholy and Phlegmatic. Each person has one dominant personality type, although characteristics of the other personalities may also be present, with lower effect and activation. This thesis has three aims. The first is to provide an integrated framework of web portal evaluation criteria; the proposed model divides the criteria into eight groups that were not grouped together before: web portal content, web portal design, web portal personalization, web portal community, web portal business issues, web portal search engine, web portal emotional issues, and web portal evaluation and development. The second aim is to rank the different web portal quality criteria according to the different types of personality (choleric, melancholy, sanguine and phlegmatic); each personality type ranked the web portal evaluation criteria in a different manner. The third aim is to discuss the relationship between web portal personalization and users' satisfaction: the thesis attempts to put the users' interaction with web portals into a dynamic match based on personality features, and user satisfaction increased when personalization was applied to web portals.2011-09-30T12:26:29-05:00http://pjsor.com/index.php/pjsor/thesis/view/16On the Posterior Analysis for Simple and Mixture of Maxwell Distribution2011-09-30T12:25:46-05:00Syed Mohsin Ali KazmiStatistics, Quaid-i-Azam University Islamabad, Pakistan<br />July, 2010<br /><br />The aim of the current study is to explore heterogeneous populations using Bayesian analysis of the simple and mixture Maxwell distribution when data are censored and uncensored.
Various comparisons of prior distributions for the parameter of the Maxwell distribution and of loss functions are illustrated. In this thesis we consider the Type-I mixture of the Maxwell distribution, which is a member of a subclass of the exponential family. Elegant closed-form expressions for the Bayes estimators of the parameters of the mixture of the Maxwell distribution are presented, along with their variances, assuming uninformative (uniform and Jeffreys) and informative (inverted gamma and inverted chi-square) priors. An extensive simulation study is conducted for the mixture of the Maxwell distribution to highlight the properties of, and comparisons among, the proposed Bayes estimators in terms of sample sizes, censoring rates, mixing proportions and different combinations of the parameters of the component densities. A Type-IV sample consisting of ordinary Type-I, right censored observations is considered. A Bayesian analysis of a real-life data set is conducted as an application of the mixture, and comparisons are observed. The system of non-linear equations needed to evaluate the classical maximum likelihood estimates and the components of the information matrices are derived for the mixture of the Maxwell distribution through the relevant algebra. A method of elicitation is used to obtain the values of the hyperparameters of the informative priors in both the simple and the mixture models, and it provides more precise results than the uninformative priors. As an extension of this work, comparisons of different loss functions are made for the simple density. Moreover, we derive the limiting expressions for the Bayes estimators and their variances for the mixtures. The inverse transform method of simulation is used, and the computations involved are conducted using the Minitab, Mathematica, SAS and Excel packages.
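The kind of posterior computation involved can be sketched for a one-parameter Maxwell scale model with a simple grid approximation. The scale parametrization and the flat prior below are illustrative assumptions, not necessarily those of the thesis, which works with closed-form conjugate results.

```python
import math

def maxwell_loglik(theta, data):
    """Log-likelihood under the Maxwell density with scale theta:
    f(x|theta) = sqrt(2/pi) * x**2 / theta**3 * exp(-x**2/(2*theta**2))."""
    return sum(0.5 * math.log(2.0 / math.pi) + 2.0 * math.log(x)
               - 3.0 * math.log(theta) - x * x / (2.0 * theta * theta)
               for x in data)

def posterior_mean(data, log_prior, grid):
    """Squared-error Bayes estimate of theta by grid approximation:
    normalize prior * likelihood over the grid, then average."""
    logw = [log_prior(t) + maxwell_loglik(t, data) for t in grid]
    mx = max(logw)                       # stabilize the exponentials
    w = [math.exp(l - mx) for l in logw]
    return sum(t * wi for t, wi in zip(grid, w)) / sum(w)

def flat_log_prior(theta):
    return 0.0                           # uniform prior over the grid

data = [1.1, 2.3, 1.8, 0.9, 2.0, 1.5]    # invented toy sample
grid = [0.05 * i for i in range(1, 201)] # theta in (0, 10]
est = posterior_mean(data, flat_log_prior, grid)
```

Swapping flat_log_prior for an informative log-prior (e.g. an inverted gamma density) reproduces, numerically, the kind of prior comparison the abstract describes.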
At the end, conclusions and further recommendations are drawn for the entire study.2011-09-30T12:25:46-05:00http://pjsor.com/index.php/pjsor/thesis/view/13Construction and Analysis of Lattice Designs for Variety Competition Response2011-09-29T15:04:22-05:00Muhammad NawazStatistics, The Islamia University of Bahawalpur<br />March, 2011<br /><br />A major task for experimenters involved in optimizing the process under study has been the best use of limited resources such as soil, seed, water and fertilizer. In crop experiments, optimizing the use of limited resources (land, water, seed, capital, etc.) has always been a major priority. To achieve maximum yield from limited cultivable land, farmers have been tempted to grow two or more crops simultaneously in the same field, in the belief that this will give a better overall yield than sowing them separately. Different varieties sown together (in mixtures) may show an increase or decrease in yield depending upon environmental conditions such as weather, plant density and mineral resources. Researchers, on the other hand, have been helping farmers by developing innovative techniques toward the same objective. Local conditions are important for plants; therefore, the space between plants is a factor which affects the growth of the target plant. Since resources (light, nutrients, environment, water, etc.) are limited, plants compete with their neighbours for these resources. Neighbouring plants can facilitate or depress the growth of the target plant. The accommodating behaviour of plants toward their neighbours can be termed positive competition; similarly, the dominating behaviour of plants, making resources unavailable to neighbouring plants, can be termed a negative neighbour effect. When growing mixtures it is worth investigating how a plant of one variety will perform when surrounded by 0, 1, 2, ... immediate neighbouring plants of another variety.
To address this problem, balanced designs on square and triangular lattices are explored. Chapter 1 is an introduction to the concept of competition, positive and negative competition, sole cropping and mixtures, together with a literature review of balanced competition designs. In Chapter 2, the designs are built using the knowledge of combinatorics: horizontally self-buildable designs are constructed in complementary halves. The early sections of this chapter are essentially an extension of the work of Zafar-Yab (1980). Achieving the maximum test ratio is of vital importance in competition designs. Using elementary balanced arrays, rules for building balanced designs and their extension to larger designs are presented in order to achieve the goal of maximum test ratio. Relations of the test ratio for buildable designs in both the horizontal and vertical directions are developed. In the later sections, a number of new first order balanced designs on the square lattice are constructed, and a class of cyclic balanced designs with ten testable hills is also introduced. In Chapter 3, to avoid the complexities of combinatorics, the computational power of modern computer technology is brought to bear on the construction of balanced designs. Computer software employing the constraints used in the combinatorics approach can be developed to ensure that the results thus obtained are no different from those which would have been obtained combinatorially. Computer-aided elementary balanced arrays and basic balanced designs are constructed for different hill plot arrangements on the square lattice. Elementary balanced arrays on the arrangements of 31, 45 and 44 hills are constructed; some of these arrays have very encouraging values of the test ratio. In Chapter 4, two completely new families of second order balanced designs are discussed. Plants at different distances from the central plant are likely to produce different effects.
If plantation is not at close proximity, competition from second order neighbours is negligible (e.g. mango or banana trees). However, for plantation at close proximity (e.g. cereals or vegetables) it seems unnatural to ignore second order neighbours, as they are not too far away to be neglected. There are two different approaches to tackling such situations, and second order balanced designs are constructed using both approaches. Economical balanced designs are constructed to provide artificial light to monoculture plants in greenhouse experiments with the help of electric light bulbs. The construction of these designs is explained on the basis of combinatorics. There are six isomorphic classes; they are initially constructed for five monoculture plants, and the building of larger designs from these six designs is then explained. The first three sections of Chapter 5 are devoted to the construction of first order balanced arrays on the triangular lattice. As a result, four, fifty-four and ninety-three new balanced arrays on the arrangements of 34, 25 and 43 hills respectively are constructed. A completely new family of second order balanced designs is also discussed. In the last chapter, appropriate models for the designs constructed in the present study are proposed, along with concluding remarks and future directions.2011-09-29T15:04:22-05:00http://pjsor.com/index.php/pjsor/thesis/view/11Paired Comparison Modeling with Bayesian Analysis2011-09-29T10:43:17-05:00Nasir AbbasStatistics, Quaid-i-Azam University Islamabad, Pakistan<br />May, 2010<br /><br />The method of paired comparisons is a technique in which treatments or stimuli are presented in pairs to respondents who, depending upon sensory evaluation, pick the better one. The experiment is executed repeatedly for all the treatments with a number of respondents in order to reach a ranking of the treatments.
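The classical Bradley-Terry model, which that thesis also studies for comparison, assigns each treatment a worth pi_i with P(i beats j) = pi_i / (pi_i + pi_j). Its maximum likelihood fit can be sketched with the standard MM (iterative scaling) updates; the head-to-head win counts below are invented.

```python
def bradley_terry(wins, n_items, iters=200):
    """Fit Bradley-Terry worths by MM updates:
    pi_i <- W_i / sum_{j != i} n_ij / (pi_i + pi_j),
    where wins[i][j] = times i beat j and n_ij = wins[i][j] + wins[j][i]."""
    pi = [1.0] * n_items
    for _ in range(iters):
        new = []
        for i in range(n_items):
            w_i = sum(wins[i])                       # total wins of i
            denom = sum((wins[i][j] + wins[j][i]) / (pi[i] + pi[j])
                        for j in range(n_items) if j != i)
            new.append(w_i / denom if denom > 0 else pi[i])
        s = sum(new)
        pi = [p * n_items / s for p in new]          # fix the scale
    return pi

# Invented head-to-head win counts among three teams.
wins = [[0, 6, 7],
        [4, 0, 6],
        [3, 4, 0]]
pi = bradley_terry(wins, 3)
```

The fitted worths induce the ranking and the preference probabilities pi_i / (pi_i + pi_j); the thesis's Pareto, Cauchy and chi-square models play the same role with different link functions, analyzed in a Bayesian framework rather than by maximum likelihood.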
In this study, an attempt is made to develop paired comparison models, namely the Pareto model and the Cauchy model. The Cauchy model is also extended to accommodate ties. Bayesian analysis is a modern inferential technique which estimates the parameters of a posterior distribution based on a prior distribution and the observed data. The developed models, along with the Stern (1990a) Chi-square model, are analyzed in a Bayesian framework using noninformative and informative priors. The priors are compared in terms of the Lindley-Shannon information measure. As an illustration, we use a real data set for the years 2000-2006 to rank five top-ranked one-day-international cricket teams, namely those of Australia, India, New Zealand, Pakistan and South Africa. The analysis comprises finding the posterior means, joint posterior modes, marginal posterior distributions, preference probabilities, predictive probabilities and posterior probabilities of hypotheses. The plausibility of the models is also tested. The models are also analyzed through simulated data sets of different sizes, and the results are compared. The renowned Bradley-Terry model is also studied for comparison purposes. Extensive computer programs are developed in C, SAS, Microsoft Excel, Mathematica and Maple to draw results. Finally, conclusions are drawn about the entire study.2011-09-29T10:43:17-05:00http://pjsor.com/index.php/pjsor/thesis/view/12Stochastic Particle Analysis For Evolutionary and Evolutionary-Like Algorithms2011-09-29T10:40:41-05:00Usama Hanafy Abou El-EnienMath Dep. Faculty of Science, Alexandria Uni. Egypt<br />October, 2010<br /><br />Genetic and clonal selection algorithms are considered. 
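The Bradley-Terry model studied for comparison in the paired-comparison abstract above can be sketched with a minimal random-walk Metropolis sampler. The win counts below are synthetic and purely illustrative (they are not the cricket data used in the thesis), and a flat prior is assumed:

```python
import numpy as np

np.random.seed(0)

# Hypothetical win counts among three items (synthetic, for illustration only):
# wins[i][j] = number of times item i was preferred over item j.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])

def log_lik(theta):
    """Bradley-Terry log-likelihood: P(i beats j) = exp(t_i) / (exp(t_i) + exp(t_j))."""
    ll = 0.0
    for i in range(3):
        for j in range(3):
            if i != j:
                ll += wins[i, j] * (theta[i] - np.logaddexp(theta[i], theta[j]))
    return ll

# Random-walk Metropolis with a flat prior; theta[0] is fixed at 0 for identifiability.
theta = np.zeros(3)
draws = []
for _ in range(5000):
    prop = theta + np.random.normal(0, 0.3, 3)
    prop[0] = 0.0
    if np.log(np.random.rand()) < log_lik(prop) - log_lik(theta):
        theta = prop
    draws.append(theta.copy())

post_mean = np.mean(draws[1000:], axis=0)  # posterior mean strengths after burn-in
print(post_mean)
```

The posterior mean strengths induce a ranking; for these counts item 0, which wins most of its comparisons, comes out strongest.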
Three proposed algorithms are presented, first to obtain empirical conclusions about the behavior of these algorithms as Markov chains, then to establish theoretical results that confirm the conjectures suggested by these experiments, and finally to introduce a complete framework toward a new philosophy of MCMC methods and of statistical inference for Markov chains. First, we model genetic and clonal selection algorithms using Markov chains. Second, we carry out a particle analysis and analyze the convergence properties of these algorithms. Third, we propose the unified MCMC theorem and the unique chromosomes method for successful optimization with these algorithms.2011-09-29T10:40:41-05:00http://pjsor.com/index.php/pjsor/thesis/view/9Estimation for Longitudinal Survey Data under Informative Sampling2011-09-27T10:26:07-05:00Abdulhakeem Abdulhai EidehStatistics, Hebrew University of Jerusalem<br />July, 2003<br /><br />Survey data may be viewed as the outcome of two random processes: the process that generates the values of a random variable for units in a finite population, often referred to as the superpopulation model, and the process of selecting the sample units from the finite population values, known as the sample selection mechanism. Analytic inference from survey data refers to the superpopulation model. When the sample selection probabilities depend on the values of the model response variable, even after conditioning on auxiliary variables, the sampling mechanism becomes informative and the selection effects need to be accounted for in the inference process. Pfeffermann, Krieger and Rinott (1998) defined and studied the properties of the sample distribution under informative sampling - the distribution of the sample measurements given the selected sample. 
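The Markov-chain view of genetic algorithms in the preceding abstract can be illustrated on a toy instance: a population of two one-bit chromosomes under fitness-proportional selection and bitwise mutation. The state is the number of '1' alleles, the transition matrix follows from the selection and mutation probabilities, and the stationary distribution can be read off an eigenvector. The fitness values and mutation rate are arbitrary illustrative choices, not taken from the thesis:

```python
from math import comb

import numpy as np

u = 0.1                    # mutation rate (illustrative)
fit = {0: 1.0, 1: 2.0}     # fitness of the two alleles

# States: number of '1' alleles in a population of two 1-bit chromosomes.
P = np.zeros((3, 3))
for k in range(3):
    # fitness-proportional selection probability of drawing a '1' parent
    total = k * fit[1] + (2 - k) * fit[0]
    p1 = k * fit[1] / total
    # each offspring allele is a selected parent allele, flipped with probability u
    q = p1 * (1 - u) + (1 - p1) * u
    for k2 in range(3):
        P[k, k2] = comb(2, k2) * q**k2 * (1 - q) ** (2 - k2)   # Binomial(2, q)

# Stationary distribution: left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(P.round(3))
print(pi.round(3))
```

With mutation rate 0.1 the chain is irreducible, so a unique stationary distribution exists, and it concentrates on the high-fitness (all-ones) state.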
Later, in a series of papers, Pfeffermann and Sverchkov (1999, 2001, and 2003), Pfeffermann, Moura and Silva (2001), and Chambers, Dorfman and Sverchkov (2003) used this sample distribution to fit a simple linear regression model, for prediction of a finite population total, to fit a generalized linear model, to fit a multilevel model, and to fit a nonparametric regression model, under informative sampling. In this thesis we treat the following four areas of analytic inference for longitudinal complex survey data under informative sampling: 1. Methods of Analytic Inference for Complex Survey Data under Informative Sampling: In this area we consider methods for dealing with the effects of unequal probability of selection and informative sampling. These methods are: probability weighting, pseudo likelihood estimation, and the more recently proposed one based on the concept of using the sample distribution for inference under informative sampling. We consider the relationships between the sample and population distributions – exponential family, binomial, polynomial regression model and multivariate normal – under different possibilities for modelling the conditional expectation of the sample inclusion probabilities: linear, exponential, logit and probit. We discuss in detail the two-step estimation method for estimating the parameters of the population model based on the sample model: in the first step we estimate the informativeness parameters (the unknown parameters in the conditional expectation of the sample selection probabilities) using the least squares method of estimation; in the second step we plug the estimates of the informativeness parameters into the sample likelihood function and then estimate the parameters of interest using classical inference procedures such as maximum likelihood estimation. We propose the Kullback-Leibler measure as a measure of distance between the sample and the population distributions. 
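The two-step method just described can be sketched in a minimal simulation. Under the assumptions of a normal population and an exponential model for the conditional selection probabilities, E(pi | y) proportional to exp(gamma*y), the sample distribution is again normal with mean shifted by gamma*sigma^2, so the second step has a closed form. All numbers below are illustrative choices, and sigma is treated as known for simplicity:

```python
import numpy as np

np.random.seed(1)
mu, sigma, gamma = 5.0, 1.0, 0.8           # population mean/sd, informativeness
N, n = 100000, 2000

y = np.random.normal(mu, sigma, N)          # finite population values
p = np.exp(gamma * y)                       # selection prob proportional to exp(gamma*y)
p = np.clip(p / p.sum() * n, 0, 1)          # scale to expected sample size n
sampled = np.random.rand(N) < p             # Poisson sampling
ys, ps = y[sampled], p[sampled]

# Step 1: estimate gamma by least squares of log(pi) on y (exponential model).
gamma_hat = np.polyfit(ys, np.log(ps), 1)[0]

# Step 2: under normality the sample distribution is N(mu + gamma*sigma^2, sigma^2),
# so plugging gamma_hat into the sample likelihood gives a corrected mean estimate.
mu_naive = ys.mean()                        # ignores informative selection, biased up
mu_hat = ys.mean() - gamma_hat * sigma**2   # two-step corrected estimator

print(round(mu_naive, 2), round(mu_hat, 2))
```

The naive sample mean overshoots the population mean of 5 by roughly gamma*sigma^2, while the corrected estimator recovers it.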
Finally, we discuss informative and response-biased sampling. 2. Fitting an Autoregressive Model of Order One for Longitudinal Survey Data under Informative Sampling: In this area we consider three new methods of estimating the parameters of an autoregressive model of order one fitted to longitudinal data under a complex survey design - unequal probabilities of selection with an informative sampling design. These methods are: weighting at the first time period and self-weighting for the remaining periods (WSML); the sample likelihood-based method under the linear model (SMLL); and the sample likelihood-based method under the exponential model (SMLE); they are compared with the pseudo likelihood method (PWML). The WSML, SMLL and SMLE methods produce better estimators, in the sense of smaller relative root mean square error, than the pseudo maximum likelihood (PWML) estimator, which is widely used for dealing with the problem of unequal probability of selection. The choice between the WSML and SML methods in some cases is not clear-cut. The sample distribution has the advantage of permitting the use of classical inference procedures such as the likelihood principle and can be used for prediction (see Two-Stage Informative Cluster Sampling - Estimation and Prediction), which is not the case for the other methods. Also, we find that the effect of informative sampling decreases over time. Another important finding from the simulation results relates to the sensitivity analysis of the estimators to departures from the assumed model. We were surprised to find that the sample distribution is not too sensitive to the modelling of the conditional expectation of the first order sample inclusion probabilities. Also, we find that the conventional t-statistic and the Kullback-Leibler information statistic for testing sampling ignorability perform well under both informative and noninformative sampling designs. 3. 
Two-Stage Informative Cluster Sampling - Estimation and Prediction: In this area we consider a new method of estimating the parameters of the superpopulation model for two-stage cluster sampling from a finite population when the sampling design for both stages is informative. We also extend pseudo maximum likelihood estimation to two-stage cluster sampling. The sample ML estimation method produces better estimators, in the sense of smaller bias, than the classical ML and PML methods. In addition to the estimation problem, we introduce new predictors of the cluster-specific effects for sample and non-sample clusters, of the finite population total, and of the cluster totals for sample and non-sample clusters. These new predictors take into account the sampling design at both stages - the unequal probability of selection and the informativeness of the sampling. The main feature of the present predictors is their behaviour in terms of the informativeness parameters. The effects of the first and second stages are in opposite directions: the first stage increases or decreases the BLUP while the second stage decreases or increases it, depending on the sign (positive or negative) of the informativeness parameters for the two stages. Also, a BLUP that ignores the informative sampling design at either stage yields biased predictors. 4. Model-Based Analysis of Labour Force Survey Gross Flow Data under Informative Nonresponse: In this area we introduce alternative methods of obtaining weighted estimates of gross flows, taking into account informative nonresponse. The first method is based on extracting the response labour force model as a function of the population labour force model and of the response probabilities, which are obtained as reciprocals of the adjusted calibrated weights. The second method is based on binomial logistic regression. The new methods are model based while the classical method is based on the adjusted weights. 
We think that the first method is more efficient than the weighted method. However, the two methods, sample likelihood and weighting, give approximately the same estimates of labour force gross flows. Also, we consider an exponential model to explain the variation in the calibrated weights at the household level, and from this model we conclude that persons unemployed at both quarters are under-represented in the labour force survey sample. The interesting result in this area is that if we have sample data that contain the response variable and the sampling weights, and for nonresponse the calibrated adjusted weights, then basing inference on the classical weighted method and on the new method based on the response likelihood may give similar results.2011-09-27T10:26:07-05:00http://pjsor.com/index.php/pjsor/thesis/view/8Goodness of Fit Tests and Lasso Variable Selection in Time Series Analysis2011-09-26T12:36:48-05:00Sohail ChandSchool of Mathematical Sciences, University of Nottingham<br />January, 2011<br /><br />This thesis examines various aspects of time series and their applications. In the first part, we study numerical and asymptotic properties of the Box-Pierce family of portmanteau tests. We compare size and power properties of time series model diagnostic tests using their asymptotic χ2 distribution and bootstrap distribution (dynamic and fixed design) against various linear and non-linear alternatives. In general, our results show that dynamic bootstrapping provides a better approximation of the distribution underlying these statistics. Moreover, we find that Box-Pierce type tests are powerful against linear alternatives while the CvM test due to Escanciano (2006b) performs better against non-linear alternative models. The most challenging scenario for these portmanteau tests is when the process is close to the stationary boundary and the value of m, the maximum lag considered in the portmanteau test, is very small. 
In these situations, the chi-square distribution is a poor approximation of the null asymptotic distribution. Katayama (2008) suggested a bias correction term to improve the approximation in these situations. We numerically study Katayama's bias correction in the Ljung and Box (1978) test. Our results show that Katayama's correction works well and confirms the results shown in Katayama (2008). We also provide a number of algorithms for performing the necessary calculations efficiently. We notice that the bootstrap automatically performs bias correction in the Ljung-Box statistic. This motivates us to look at theoretical properties of the dynamic bootstrap in this context. Moreover, noticing the good performance of Katayama's correction, we suggest a bias correction term for the Monti (1994) test along the lines of Katayama's correction. We show that our suggestion improves Monti's statistic in a similar way to what Katayama's suggestion does for the Ljung-Box test. We also make a novel suggestion of using a pivotal portmanteau test. Our suggestion is to use two separate values of m: a large value for the calculation of the information matrix and a smaller choice for diagnostic purposes. This results in a pivotal statistic which automatically applies the bias correction to the Ljung-Box test. A suggested algorithm computes this portmanteau test efficiently. In the second part, we apply lasso-type shrinkage methods to linear regression and time series models. We use simulations of various examples to study the oracle properties of these methods via the adaptive lasso due to Zou (2006). We study consistent variable selection by the lasso and adaptive lasso and consider a result in the literature which states that the lasso cannot be consistent in variable selection if a necessary condition does not hold for the model. We notice that lasso methods have nice theoretical properties, but it is not very easy to achieve them in practice. 
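Returning to the portmanteau tests of the first part: the Ljung-Box statistic and a bootstrap approximation of its null distribution can be sketched as below. This is a simplified fixed-design bootstrap applied directly to white noise rather than to the residuals of a fitted model, so it only illustrates the idea of replacing the chi-square reference distribution with a simulated one:

```python
import numpy as np

np.random.seed(3)

def ljung_box(x, m):
    """Ljung-Box portmanteau statistic Q(m) = n(n+2) * sum_{k=1..m} r_k^2 / (n-k)."""
    n = len(x)
    x = x - x.mean()
    denom = np.sum(x**2)
    q = 0.0
    for k in range(1, m + 1):
        r_k = np.sum(x[k:] * x[:-k]) / denom   # lag-k sample autocorrelation
        q += r_k**2 / (n - k)
    return n * (n + 2) * q

x = np.random.normal(size=200)   # white noise series under the null
m = 5
q_obs = ljung_box(x, m)

# Simulate the null distribution of Q(m) instead of relying on the chi-square
# approximation, which the thesis shows can be poor near the stationary boundary.
boot = np.array([ljung_box(np.random.normal(size=200), m) for _ in range(2000)])
p_boot = np.mean(boot >= q_obs)
print(round(q_obs, 2), round(p_boot, 3))
```

The bootstrap p-value is the fraction of simulated statistics at least as large as the observed one.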
The choice of the tuning parameter is crucial for these methods. So far there is no fully explicit way of choosing the appropriate value of the tuning parameter, so it is hard to achieve the oracle properties in practice. In our numerical study, we compare the performance of k-fold cross-validation with the BIC method of Wang et al. (2007) for selecting the appropriate value of the tuning parameter. We show that k-fold cross-validation is not a reliable method for choosing the value of the tuning parameter for consistent variable selection. We also look at ways to implement lasso-type methods in time series models. In our numerical results we show that the oracle properties of lasso-type methods can also be achieved for time series models. We derive the necessary condition for consistent variable selection by lasso-type methods in the time series context. We also prove the oracle properties of the adaptive lasso for stationary time series.2011-09-26T12:36:48-05:00http://pjsor.com/index.php/pjsor/thesis/view/7Intelligent GIS-based Decision Support System for Bus Routing2011-09-26T12:29:19-05:00AbdelMonaem Fouad AbdAllahInformation system, Zagazig<br />January, 2010<br /><br />Routing problems are complex problems; they relate to transportation networks, and range from finding shortest path(s) between two locations (the path finding problem) to constructing a complete tour among some locations in a network (the tour construction problem). Bus Routing Problems (BRPs) are very important and sophisticated problems; they attract the attention of both industry and the research community. Bus routing is a major problem; bus transportation needs to be safe, reliable and efficient. The BRP is an NP-hard problem, and finding an optimal solution to an NP-hard problem is usually very time consuming or even impossible. This thesis develops a spatial decision making framework which helps in determining near-optimal routes. 
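The BIC-based tuning-parameter selection for the lasso discussed above can be sketched in the orthonormal-design case, where the lasso solution is available in closed form as soft-thresholding. This toy example uses a BIC of the form n*log(RSS/n) + log(n)*df, in the spirit of Wang et al. (2007), with a synthetic sparse model; it is illustrative and does not reproduce the thesis's time-series setting:

```python
import numpy as np

np.random.seed(4)
n, p = 200, 8
# Orthonormal design: the lasso has the closed-form soft-thresholding solution.
X, _ = np.linalg.qr(np.random.normal(size=(n, p)))
beta = np.array([3.0, -2.0, 2.0, 0, 0, 0, 0, 0])   # true sparse coefficients
y = X @ beta + 0.3 * np.random.normal(size=n)

z = X.T @ y                                        # OLS estimate under orthonormal X

def soft(z, lam):
    """Soft-thresholding operator, the lasso solution for orthonormal designs."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Select the tuning parameter over a grid by a BIC-type criterion.
best = None
for lam_try in np.linspace(0.01, 2.0, 200):
    b_try = soft(z, lam_try)
    rss = np.sum((y - X @ b_try) ** 2)
    bic = n * np.log(rss / n) + np.log(n) * np.sum(b_try != 0)
    if best is None or bic < best[0]:
        best = (bic, lam_try, b_try)

bic, lam, b = best
print(round(lam, 3), np.nonzero(b)[0])
```

With a strong signal-to-noise ratio the BIC-selected lasso recovers exactly the three truly nonzero coefficients, illustrating consistent variable selection.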
This framework integrates the strengths of Geographic Information Systems (GIS), clustering, network cutting, Ant Colony Optimization (ACO), a metaheuristic algorithm, and Iterated Lin-Kernighan (ILK), a local improvement algorithm. GIS provides geographic processing capabilities, such as presenting network data. Clustering groups the pickup locations based on their count and the bus capacity. Network cutting trims the network data by the pickup locations' boundaries, limiting the search area within the network for a better and faster solution. The integration of the ACO metaheuristic with the ILK local improvement algorithm solves the routing problem(s). This research also proposes a system for implementing the framework, which consists of ArcGIS as the GIS component, a clustering algorithm based on the pickup locations, their count and the bus capacity, network cutting based on the pickup locations' X and Y boundaries, and an integrated ACO-ILK component.2011-09-26T12:29:19-05:00http://pjsor.com/index.php/pjsor/thesis/view/6Bayesian analysis of Generalized Linear Models with S-PLUS and R-Softwares2011-09-26T11:12:11-05:00Malik Masood HassanDiv. of Statistics, SKUAST-K, SKUAST-k<br />September, 2009<br /><br />The term Bayesian refers to Reverend Thomas Bayes. The foundation of Bayesian logic is Bayes' theorem. Bayes' theorem provides a vehicle for changing, or updating, the degree of belief about a parameter in light of more recent information. It is a formal procedure for merging knowledge obtained from experience, termed the prior, with the information we get from data, termed the likelihood. These two sources of information are combined to form the posterior density. 
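The local improvement component of the bus-routing framework above can be illustrated with a plain 2-opt pass, a much simpler relative of the Iterated Lin-Kernighan algorithm: repeatedly reverse a segment of the tour whenever doing so shortens it. The pickup coordinates are random stand-ins for the GIS data used in the thesis:

```python
import math
import random

random.seed(5)
# Hypothetical pickup locations (x, y); in the thesis these would come from GIS data.
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(15)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_len(tour):
    """Total length of a closed tour visiting the points in the given order."""
    return sum(dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour):
    """Repeatedly reverse segments while doing so shortens the tour (2-opt)."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_len(cand) < tour_len(tour) - 1e-9:
                    tour, improved = cand, True
    return tour

start = list(range(len(pts)))        # naive initial route
best = two_opt(start)
print(round(tour_len(start), 1), round(tour_len(best), 1))
```

In the thesis framework, a route like `start` would instead come from the ACO construction phase, and the local improvement would polish it.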
In this methodology, investigators are mainly concerned with the construction of the posterior density; once the posterior density is constructed, every important aspect of Bayesian analysis can be addressed. Obtaining posterior inference using more than one method is an excellent way to debug computer programs and ensure that the results are accurate. Therefore, in this thesis we have implemented analytic approximations, namely the Normal and Laplace approximations, along with simulation tools to investigate the posterior densities analytically. Markov Chain Monte Carlo (MCMC) techniques have been used throughout the thesis to bypass the computation of the integrals of the posterior distribution and to compare the resulting posterior densities with those obtained from the analytical tools. These posterior densities, constructed throughout the thesis, contain all sorts of information required for Bayesian modeling. The above techniques are illustrated through generalized linear models, especially the probit, logit and complementary log-log models. Practical illustrations have been made with the help of the S-PLUS and R software through the newly developed functions logitPostNI, logitPostcau, logitPostgamma, ProbitPostNI, ProbitPostcau, ProbitPostgamma, and bayes.summary. Several inbuilt functions of R and S-PLUS, such as MCMCprobit and mcmcsamp of the MCMCpack and lme4 libraries respectively, were also used to obtain the posterior densities for the fish breeding data and for the hierarchical generalized linear model. The hierarchical generalized linear models were also fitted by the lmer function of the lme4 library of Douglas Bates (2007) along with the glmmPQL function of the MASS library. All the programmed functions as well as the existing functions were run on the dose-response data of Bliss and the Venturia inaequalis data. 
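The MCMC approach to a Bayesian logit model described above can be sketched with a minimal random-walk Metropolis sampler. The dose-response data here are synthetic (not the Bliss data used in the thesis), and a flat prior is assumed, so the log posterior reduces to the log likelihood:

```python
import numpy as np

np.random.seed(6)
# Synthetic dose-response data (illustrative, not the thesis data).
dose = np.repeat(np.linspace(-2, 2, 5), 40)
true_a, true_b = 0.3, 1.2
prob = 1 / (1 + np.exp(-(true_a + true_b * dose)))
ydat = (np.random.rand(len(dose)) < prob).astype(float)

def log_post(a, b):
    """Log posterior of the logit model under a flat prior (= log likelihood)."""
    eta = a + b * dose
    return np.sum(ydat * eta - np.log1p(np.exp(eta)))

# Random-walk Metropolis sampler for (intercept, slope).
a, b = 0.0, 0.0
chain = []
acc = 0
for _ in range(4000):
    a2, b2 = a + np.random.normal(0, 0.2), b + np.random.normal(0, 0.2)
    if np.log(np.random.rand()) < log_post(a2, b2) - log_post(a, b):
        a, b, acc = a2, b2, acc + 1
    chain.append((a, b))

post = np.array(chain[1000:]).mean(axis=0)   # posterior means after burn-in
print(post.round(2), acc / 4000)
```

The posterior mean of the slope is clearly positive, matching the simulated dose effect; the acceptance rate indicates whether the proposal scale is reasonable.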
The Venturia inaequalis data set is a real data set generated on Venturia inaequalis (the causal organism of apple scab) in 2007-08, tried at three different locations with four different chemicals and six doses. Key words: Bayesian analysis, Posterior density, Probit model, Logit model, Complementary log-log model, Model comparison, Normal approximation, Laplace approximation, Metropolis algorithm and Hierarchical model.2011-09-26T11:12:11-05:00http://pjsor.com/index.php/pjsor/thesis/view/4Sampling with Unequal Probabilities and Without Replacement2011-07-27T16:03:58-05:00Muhammad Qaiser ShahbazStatistics, NCBA&E<br />May, 2003<br /><br />After describing the basic theory of survey sampling with reference to equal and unequal probability sampling, some selected selection procedures which can be used with the Horvitz–Thompson estimator have been discussed. Some of the popular estimators of the population total (other than the Horvitz–Thompson estimator) have been discussed. Model-based sampling inference has been presented along with well-known model-based estimators. Some approximate formulae for the variance of the Horvitz–Thompson estimator that use only the first order inclusion probabilities have been obtained. Some special cases of these approximations have also been given. Three new selection procedures for use with the Horvitz–Thompson estimator have been developed. These selection procedures are applicable for a sample of size two and are strictly without replacement. Some fundamental results related to inclusion probabilities and joint inclusion probabilities have been verified for these newly developed selection procedures. An empirical study of these new selection procedures has been carried out in order to see their performance for various types of populations. A regression analysis has been carried out in order to see the effect of the coefficient of variation and the correlation coefficient on the variance of these estimators. 
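The Horvitz–Thompson estimator for size-two without-replacement designs, the setting of this abstract, can be illustrated on a tiny population. For concreteness the sketch uses simple random sampling without replacement (pi_i = n/N, pi_ij = n(n-1)/(N(N-1))) rather than the thesis's new selection procedures, and verifies design-unbiasedness by enumerating every possible sample:

```python
from itertools import combinations

y = [10.0, 20.0, 30.0, 40.0]    # small illustrative population
N, n = 4, 2
total = sum(y)

# SRSWOR of size 2: first and second order inclusion probabilities.
pi = n / N
pij = n * (n - 1) / (N * (N - 1))

def ht(sample):
    """Horvitz-Thompson estimator of the population total."""
    return sum(y[i] / pi for i in sample)

def yg_var(sample):
    """Yates-Grundy variance estimator for a sample of size two."""
    i, j = sample
    return (pi * pi - pij) / pij * (y[i] / pi - y[j] / pi) ** 2

samples = list(combinations(range(N), 2))
p_s = 1 / len(samples)          # each size-2 sample equally likely under SRSWOR
expectation = sum(p_s * ht(s) for s in samples)
print(expectation, total)       # design-unbiasedness: these match
print([round(yg_var(s), 1) for s in samples])
```

Averaging the estimator over all six possible samples returns the population total exactly, which is the design-unbiasedness property the thesis verifies for its new procedures.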
It has been found that these two coefficients have a significant effect on the variance of the Horvitz–Thompson estimator under the newly developed selection procedures. A general procedure has been developed by introducing a constant in the revised probabilities of selection, which helps in developing a number of other selection procedures. It has been found that the Yates and Grundy (1953) draw-by-draw and the Brewer (1963a) procedures are special cases of the general selection procedure. An empirical study has been conducted to obtain a suitable value of the constant for various sorts of populations. A series of modified Murthy estimators has been developed by using various selection procedures in the general Murthy (1957) estimator. It has been found that the estimator used by Durbin (1953) for his rejective procedure is a special case of the Murthy (1957) estimator under the Durbin (1967) draw-by-draw procedure. The unbiasedness of the new estimators has been verified and their design-based variances have been obtained. An empirical study has been carried out in order to see the performance of the new estimators. A model-based study of the modified Murthy estimator under the Durbin (1953) draw-by-draw procedure has been conducted, and it is found that this estimator achieves the Godambe–Joshi (1965) lower bound for the variance of any estimator in unequal probability sampling.2011-07-27T16:03:58-05:00http://pjsor.com/index.php/pjsor/thesis/view/5Bayesian Modelling for small area estimation with S-Plus and R-software2011-07-27T15:57:40-05:00Nageena NazirAgricultural Statistics, Sher-e-Kashmir University of Agricultural Sciences and Technology Kashmir<br />December, 2007<br /><br />The Bayesian approach is an approach to statistics which formally seeks to use prior information, and Bayes' theorem provides the basis for using this information in a formal manner. 
Consequently, the study of different features of the posterior density of the parameter of interest is mainly required. Bayesian data analysis is the process of fitting a probability model to a set of data by taking the joint posterior distribution of all observable and unobservable quantities in the analysis, conditioning on the observed data, calculating and interpreting the appropriate posterior distribution and, finally, evaluating the fit of the model so that the conclusions drawn are reasonable. This thesis deals with Bayesian modelling for small areas with the S-PLUS and R software. An intercept model, a two sample test, correlation analysis, a regression model, analysis of variance and a hierarchical Bayes model were fitted for the analysis, and it has been shown that the posterior distribution of the mean of the intercept model follows a normal distribution when the variance is known and a Student's t-distribution when the variance is unknown. The posterior distribution of the comparison of means in the two sample test follows a t-distribution. The posterior distribution of the regression coefficient vector follows a multivariate normal distribution when σ2 is assumed known, and each of its components follows a univariate normal distribution. In contrast, when σ2 is assumed unknown, the normal density is replaced by a multivariate t-distribution for the regression coefficient vector, and the marginal posterior for each of its components is a univariate Student's t-distribution. These posterior densities are constructed throughout the thesis and contain all sorts of information required for Bayesian modelling for small areas. This is illustrated practically with the help of the S-PLUS and R software on the basis of the newly developed functions postNorm(), predictDist(), postMean(), postVar(), posteriorMean(), postMeanDiff(), postDelta(), postcorr2(), and postRegb(). In the analysis of variance, several inbuilt functions of R and S-PLUS are used to obtain the multiple comparisons of means, 95% highest posterior density regions and contrast comparisons. 
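The known-variance intercept model above has the textbook conjugate normal update: the posterior precision is the sum of the prior and data precisions, and the posterior mean is a precision-weighted average. A minimal numeric sketch, with illustrative prior settings and data:

```python
import numpy as np

# Known-variance normal model: y_i ~ N(theta, sigma^2), prior theta ~ N(m0, s0^2).
# The posterior is normal, the "intercept model, variance known" case above.
sigma = 2.0
m0, s0 = 0.0, 10.0                       # vague prior (illustrative values)
y = np.array([4.1, 5.3, 4.8, 5.9, 4.4])
n = len(y)

prec_post = 1 / s0**2 + n / sigma**2     # posterior precision = prior + data precision
m_post = (m0 / s0**2 + y.sum() / sigma**2) / prec_post
s_post = prec_post ** -0.5
print(round(m_post, 3), round(s_post, 3))
```

Because the prior is centered at 0, the posterior mean is pulled slightly below the sample mean; with a vaguer prior the pull would vanish.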
The relationship between hierarchical Bayes and Henderson's mixed model methodology is discussed, assuming multivariate normal distributions for the fixed and random effects. This model has been fitted by the lme() function of the nlme library due to Pinheiro and Bates (2000). All these functions are run on a real data set generated on the potato crop (Solanum tuberosum) in 2005-06 at five different locations with 12 different genotypes.2011-07-27T15:57:40-05:00http://pjsor.com/index.php/pjsor/thesis/view/3An Efficient Duplicate Address Detection Scheme For Micro-Mobility Handovers in HMIPv6 Networks2010-07-30T14:40:25-05:00Muhammad WasimComputer Science, International Islamic University, Islamabad<br />July, 2010<br /><br />The Hierarchical Mobile IPv6 protocol has been proposed as an improvement of MIPv6 to solve the problem of handover management between macro-mobility and micro-mobility by introducing a new entity called the Mobility Anchor Point (MAP). Whenever a Mobile Node (MN) roams into a new MAP domain, it needs to configure two Care-Of-Addresses (CoAs): a Regional Care-Of-Address (RCoA) on the MAP link and an on-link Local Care-Of-Address (LCoA). Each time an MN visits a New Access Router (nAR) in a MAP domain, Duplicate Address Detection (DAD) is performed on the LCoA to verify the uniqueness of this address. A fast moving MN within a MAP domain may undergo frequent handovers; a majority of the handover latency is therefore occupied by the DAD check of the LCoA, which badly affects handover efficiency. Longer handover latencies result in high packet loss, which is unacceptable for real-time applications. For such local movements within a particular MAP domain (micro-mobility handovers), this thesis proposes a Less-Frequent Duplicate Address Detection (LF-DAD) scheme that reduces the frequency of the DAD check while visiting different Access Routers (ARs) in a MAP domain. 
We have evaluated the performance of the proposed scheme through extensive NS-2 simulation.2010-07-30T14:40:25-05:00http://pjsor.com/index.php/pjsor/thesis/view/2Comparative study of different types of Aspergillus Fungal Sinusitis2010-07-28T14:38:46-05:00Asif HanifCollege of Statistical and Actuarial Sciences, University of the Punjab<br />February, 2007<br /><br />When the body's immune system is suppressed, fungi find an opportunity to invade the body and a number of side effects occur. Because these organisms do not require light for food production, they can live in a damp and dark environment. The sinuses, consisting of moist, dark cavities, are a natural home to invading fungi. When this occurs, fungal sinusitis results. Methodology: Using a retrospective analytical study design, we analyzed 29 reported cases of fungal sinusitis (invasive and non-invasive) in the ENT ward, Mayo Hospital Lahore. Results: Out of 29 patients, 11 had invasive fungal sinusitis while 18 were reported as non-invasive fungal sinusitis. Aspergillus fungal sinusitis was more common in adolescents and young adults; the mean age at diagnosis was 28.59 years with a standard deviation of 14.257. The minimum and maximum ages were 10 and 70 years respectively. 62% of patients were reported as non-invasive fungal sinusitis and the remaining 38% as invasive fungal sinusitis. The proportion of male patients was higher in both types (65.5%). Non-invasive fungal sinusitis was most frequent in those patients who had a history of nasal blockage, asthma, nasal polyps and rhinitis. Invasive fungal sinusitis was more common in those patients who presented with a history of diabetes, T.B., and facial and cheek swelling. Lastly, using logistic regression, it is concluded that patients with T.B. have higher odds of invasive fungal sinusitis. 
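The odds-ratio reasoning behind the T.B. finding above can be sketched from a 2x2 table. The cell counts here are hypothetical (the abstract does not report the cross-tabulation), and this Wald-type interval is only a rough stand-in for the exact logistic regression the thesis uses for small samples:

```python
import math

# Hypothetical 2x2 table (illustrative counts, not the thesis data):
# rows: T.B. present / absent; columns: invasive / non-invasive sinusitis.
a, b = 7, 4     # T.B. present:  invasive, non-invasive
c, d = 4, 14    # T.B. absent:   invasive, non-invasive

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(round(odds_ratio, 2), (round(lo, 2), round(hi, 2)))
```

An odds ratio above 1 points in the direction of the abstract's conclusion; with cells this small, exact methods are preferable to the Wald interval.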
Key words: Odds ratio, exact logistic regression, fungal sinusitis2010-07-28T14:38:46-05:00http://pjsor.com/index.php/pjsor/thesis/view/1Sampling with Unequal Probabilities2010-06-23T14:18:30-05:00Nadeem Shafique ButtStatistics, NCBA&E<br />May, 2003<br /><br /><a href="http://nadeemshafique.web.officelive.com/documents/complete%20thesis.pdf">Full text (external site)</a><br /><br />A new estimator of the population total has been developed following the method of Murthy (1957) by using the Shahbaz and Hanif (2003) general selection procedure. Two special cases of the general estimator have been obtained. An empirical study has been carried out to obtain the most suitable value of the constant involved.2010-06-23T14:18:30-05:00