Traditionally, once a dataset has been calibrated, only the point estimates of the item parameters are carried forward into subsequent applications (e.g., CAT), while the accompanying standard errors are discarded; that is, the uncertainty of the item estimates is ignored. It is therefore reasonable to expect that ignoring the uncertainty of the item parameters will introduce some bias into person estimates. This paper applies a multiple imputation (MI) strategy to take the uncertainty of the item parameters into account, so that its impact on the standard error of measurement (SEM) can be evaluated. Compared with earlier analytic approximations, the proposed method is more efficient. A series of simulations showed that, in general, the impact of ignoring item-parameter uncertainty was quite small unless the calibration sample was small or the IRT model fitted the data poorly.
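To make the MI idea concrete, the following is a minimal sketch of how such a strategy might look for a single examinee under a 2PL model; it is not the authors' implementation. It assumes the calibrated item parameters can be treated as multivariate normal with their estimated covariance, re-estimates ability by maximum likelihood for each imputed draw, and pools the results with Rubin's rules. The function names (p_2pl, ml_theta, mi_sem) and the parameter layout are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_theta(resp, a, b):
    """Maximum-likelihood ability estimate and its conditional SE (items treated as known)."""
    def negloglik(theta):
        p = p_2pl(theta, a, b)
        return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))
    theta_hat = minimize_scalar(negloglik, bounds=(-4, 4), method="bounded").x
    p = p_2pl(theta_hat, a, b)
    info = np.sum(a**2 * p * (1 - p))            # test information at theta_hat
    return theta_hat, 1.0 / np.sqrt(info)        # conditional SEM

def mi_sem(resp, item_est, item_cov, M=100, rng=None):
    """Pool ability estimates over M draws of item parameters via Rubin's rules.

    Assumes item_est is laid out as [a_1..a_n, b_1..b_n] with covariance item_cov
    taken from the calibration output.
    """
    rng = np.random.default_rng(rng)
    thetas, sems = [], []
    for _ in range(M):
        draw = rng.multivariate_normal(item_est, item_cov)   # imputed item parameters
        a, b = draw[: len(draw) // 2], draw[len(draw) // 2:]
        t, s = ml_theta(resp, a, b)
        thetas.append(t)
        sems.append(s)
    thetas, sems = np.array(thetas), np.array(sems)
    within = np.mean(sems**2)                    # average conditional variance
    between = np.var(thetas, ddof=1)             # variance due to item-parameter uncertainty
    total_sem = np.sqrt(within + (1 + 1 / M) * between)
    return thetas.mean(), total_sem
```

In this sketch the between-imputation variance term is what carries the item-parameter uncertainty; when the calibration sample is large that term shrinks toward zero, which is consistent with the paper's finding that the impact is small unless the sample is small or the model fits poorly.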
1. It is surprising to me that the generating parameters can be recovered so accurately and that the SE values in Table 1 are so small. I suspect that some empirical priors are imposed by the IRTPRO program.
2. In Figure 3, a few of the SE values for the MI-based approach are slightly lower than those for the traditional approach. How should this be interpreted, given that incorporating item-parameter uncertainty would be expected to increase, not decrease, the SEs?
3. Although the issue of measurement error is emphasized in much of the research literature, is it still important in practical applications?