RT Journal Article
T1 Empirical phi-divergence test statistics for testing simple and composite null hypotheses
A1 Balakrishnan, Narayanaswamy
A1 Martin, Nirian
A1 Pardo Llorente, Leandro
AB The main purpose of this paper is to first introduce a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors, with the first probability vector obtained by maximizing the empirical likelihood (EL) on the vector of parameters, and the second vector defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution is then derived for this family of test statistics. The proposed methodology is illustrated through the well-known data of Newcomb's measurements of the passage time of light. A simulation study is carried out to compare its performance with that of the EL ratio test when confidence intervals are constructed based on the respective statistics for small sample sizes. The results suggest that the empirical modified likelihood ratio test statistic provides a competitive alternative to the EL ratio test statistic, and is also more robust than the EL ratio test statistic in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present some asymptotic as well as simulation results for evaluating the performance of these test procedures.
PB Taylor & Francis
SN 0233-1888
YR 2015
FD 2015-09-03
LK https://hdl.handle.net/20.500.14352/24168
UL https://hdl.handle.net/20.500.14352/24168
LA eng
NO Ministerio de Economía y Competitividad (España)
DS Docta Complutense
RD 21 Aug 2024