Adaptive testing requires an item bank of previously calibrated item parameters, and those parameter estimates are treated as if they were the true values. The examinee's test score or ability estimate and the corresponding standard error are then computed from these item parameters as known constants. In practice, however, only estimates of the item parameters are available, so the ability estimate and its standard error are adversely affected by the calibration error that is ignored.
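As a concrete illustration of how the standard error is obtained when calibrated parameters are treated as known, the following Python sketch evaluates 2PL test information at a provisional ability estimate and converts it to a standard error. The parameter values and function names are illustrative assumptions, not taken from the paper; the point is that the plugged-in estimates are treated as exact.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL response probability P(theta) = 1 / (1 + exp(-a(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def test_information(theta, a, b):
    """Sum of 2PL item informations a^2 * P * (1 - P) over administered items."""
    p = p_2pl(theta, a, b)
    return np.sum(a**2 * p * (1.0 - p))

# Estimated (calibrated) item parameters, treated here as if they were true values.
a_hat = np.array([1.2, 0.8, 1.5, 1.0])   # discrimination estimates
b_hat = np.array([-0.5, 0.3, 1.1, 0.0])  # difficulty estimates

theta_hat = 0.4  # provisional ability estimate
info = test_information(theta_hat, a_hat, b_hat)
se = 1.0 / np.sqrt(info)  # SE ignores uncertainty in a_hat and b_hat (calibration error)
print(f"Information = {info:.3f}, SE(theta) = {se:.3f}")
```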
Three major factors are expected to influence capitalization on calibration error. First, the distribution of the errors in the estimated parameters in the item pool: the larger the errors (or the smaller the calibration sample), the stronger the effect of capitalization on the values of the item selection criterion in the adaptive algorithm. Second, the ratio of the test length to the item pool size: the smaller this ratio, the greater the likelihood of selecting only items with the largest estimation errors. Third, the item selection criterion that is used.
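The mechanism behind the first two factors can be illustrated with a small simulation sketch: items are selected by maximum information computed from error-contaminated parameter estimates, and the true information of the selected items is then compared with their estimated information. The pool size, test length, and error standard deviation below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# True 2PL item pool and error-contaminated calibration estimates.
n_items, test_len, theta = 400, 20, 0.0
a_true = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)
b_true = rng.normal(0.0, 1.0, size=n_items)
calib_error_sd = 0.25          # larger SD corresponds to a smaller calibration sample
a_hat = a_true + rng.normal(0.0, calib_error_sd, size=n_items)
b_hat = b_true + rng.normal(0.0, calib_error_sd, size=n_items)

def info(theta, a, b):
    """2PL item information a^2 * P * (1 - P) at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Maximum-information selection uses the *estimated* parameters.
selected = np.argsort(info(theta, a_hat, b_hat))[-test_len:]

print("Mean estimated info of selected items:", info(theta, a_hat, b_hat)[selected].mean())
print("Mean true info of selected items:     ", info(theta, a_true, b_true)[selected].mean())
# The estimated information of the selected items tends to overstate their true
# information: the selection capitalizes on positive calibration errors, and the
# effect grows as calib_error_sd grows or as test_len / n_items shrinks.
```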
This paper demonstrated how capitalization on estimation error affects ability estimation under different item selection criteria in adaptive testing. In addition, several strategies were introduced to reduce capitalization on error: cross-validating the optimization, controlling the composition of the item pool, imposing constraints on the item selection process, and using the Rasch (one-parameter) model when the calibration sample is small.
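One of these strategies, cross-validating the optimization, can be sketched as follows: item selection is driven by one set of calibration estimates, while scoring and the reported standard error use an independent second calibration of the same items. The two-sample setup and variable names are assumptions for illustration rather than the paper's specific procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

n_items, test_len, theta = 400, 20, 0.0
a_true = rng.lognormal(0.0, 0.3, n_items)
b_true = rng.normal(0.0, 1.0, n_items)

# Two independent calibrations of the same pool (e.g., a split calibration sample).
err = lambda: rng.normal(0.0, 0.25, n_items)
a_sel, b_sel = a_true + err(), b_true + err()    # used only for item selection
a_val, b_val = a_true + err(), b_true + err()    # used for scoring and the SE

def info(theta, a, b):
    """2PL item information a^2 * P * (1 - P) at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

selected = np.argsort(info(theta, a_sel, b_sel))[-test_len:]

# Because the validation estimates' errors are independent of the selection,
# the information (and hence the SE) computed from them is not inflated by
# the capitalization that drove the selection.
info_val = info(theta, a_val, b_val)[selected].sum()
print("Cross-validated SE(theta):", 1.0 / np.sqrt(info_val))
```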