Title: Corrections for criterion reliability in validity generalization: The consistency of Hermes, the utility of Midas
Keywords: Interrater; Reliability; Validity generalization; Job performance; Ratings
Citation: Revista de Psicologia del Trabajo y de las Organizaciones, 32(1): pp. 17-23, (2016)
Abstract: The literature criticizes the use of interrater coefficients to correct for criterion reliability in validity generalization (VG) studies and disputes whether .52 is an accurate, non-dubious estimate of the interrater reliability of overall job performance (OJP) ratings. We present a second-order meta-analysis of three independent meta-analytic studies of the interrater reliability of job performance ratings and offer a number of comments and reflections on LeBreton et al.'s paper. The results of our meta-analysis indicate that the interrater reliability for a single rater is .52 (k = 66, N = 18,582, SD = .105). Our main conclusions are: (a) the value of .52 is an accurate estimate of the interrater reliability of overall job performance for a single rater; (b) it is not reasonable to conclude that past VG studies that used .52 as the criterion reliability value have a less than secure statistical foundation; (c) based on interrater reliability, test-retest reliability, and coefficient alpha, supervisor ratings are a useful and appropriate measure of job performance and can be confidently used as a criterion; (d) validity correction for criterion unreliability has been unanimously recommended by "classical" psychometricians and I/O psychologists as the proper way to estimate predictor validity, and is still recommended at present; (e) the substantive contribution of VG procedures to informing HRM practices in organizations should not be lost in these technical points of debate.
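The correction for criterion unreliability discussed in the abstract is the classical Spearman disattenuation formula, in which an observed validity coefficient is divided by the square root of the criterion reliability. A minimal sketch in Python, assuming a hypothetical observed validity of .30 and using the abstract's single-rater interrater reliability estimate of .52 (the observed validity value and function name are illustrative, not taken from the paper):

```python
import math

def correct_for_criterion_unreliability(r_xy: float, r_yy: float) -> float:
    """Estimate operational validity from an observed validity coefficient.

    r_xy: observed predictor-criterion correlation
    r_yy: reliability of the criterion measure
    """
    return r_xy / math.sqrt(r_yy)

observed_validity = 0.30       # hypothetical observed correlation
criterion_reliability = 0.52   # single-rater interrater reliability (from the abstract)

corrected = correct_for_criterion_unreliability(observed_validity, criterion_reliability)
print(round(corrected, 3))  # 0.416
```

With these illustrative numbers, the corrected validity (about .42) is noticeably larger than the observed .30, which is why the choice of reliability estimate matters so much in VG studies.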
Appears in Collections: Brunel Business School Research Papers