35 Characterizing sources of uncertainty in item response theory scale scores (Presented by Sherry)

Connie's review

by HSU Chia Ling -

Traditionally, estimators of examinees' latent traits based on item response theory (IRT) models ignore the uncertainty carried over from the item calibration process. This leads to incorrect estimates of the standard errors of measurement (SEMs); furthermore, incorrect decisions can follow when these SEMs are used, for example, as the termination criterion in computerized adaptive testing. A number of approaches have been proposed to address the problem caused by uncertainty from the item calibration process. These approaches fall into two lines: (a) obtaining corrected SEMs that take uncertainty in the item parameters into account, and (b) characterizing the nature and impact of item parameter uncertainty on subsequent estimation and inference. This paper proposed a multiple imputation (MI) method that combines the two lines: it not only provides corrected SEMs that take this uncertainty into account, but also provides a confidence interval that characterizes the nature and impact of item parameter uncertainty on subsequent estimation and inference. The proposed method was demonstrated on a simple three-item data set, and then applied to a real data set and examined in a simulation study.
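The MI idea reviewed above can be sketched roughly as follows: draw several plausible sets of item parameters from their calibration sampling distribution, re-score the examinee under each draw, and pool the results with Rubin's rules to get a corrected SEM and a confidence interval. This is only an illustrative sketch, not the paper's implementation: the 2PL model, the three invented items, the diagonal calibration covariance, and the grid-search scoring below are all assumptions made here for a self-contained demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-item 2PL example (echoing the paper's three-item demo,
# but with made-up numbers). a = discriminations, b = difficulties, and
# se_a / se_b are assumed calibration standard errors (independence assumed).
a_hat = np.array([1.0, 1.5, 0.8])
b_hat = np.array([-0.5, 0.0, 0.7])
se_a = np.array([0.10, 0.12, 0.09])
se_b = np.array([0.08, 0.07, 0.10])

responses = np.array([1, 0, 1])  # one examinee's item responses

def theta_mle(a, b, x, grid=np.linspace(-4, 4, 801)):
    """Grid-search ML estimate of theta; SEM from the test information."""
    p = 1.0 / (1.0 + np.exp(-a * (grid[:, None] - b)))  # 2PL probabilities
    loglik = (x * np.log(p) + (1 - x) * np.log(1 - p)).sum(axis=1)
    theta = grid[np.argmax(loglik)]
    p_hat = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a**2 * p_hat * (1 - p_hat))  # Fisher information at theta
    return theta, 1.0 / np.sqrt(info)

# Multiple imputation: draw M plausible item-parameter sets and re-score.
M = 200
thetas, sems = np.empty(M), np.empty(M)
for m in range(M):
    a_m = rng.normal(a_hat, se_a)  # plausible discriminations
    b_m = rng.normal(b_hat, se_b)  # plausible difficulties
    thetas[m], sems[m] = theta_mle(a_m, b_m, responses)

# Rubin's rules: pool the point estimates and the two variance components.
theta_bar = thetas.mean()
W = np.mean(sems**2)            # within-imputation (scoring) variance
B = thetas.var(ddof=1)          # between-imputation (calibration) variance
T = W + (1 + 1 / M) * B         # corrected total variance
sem_corrected = np.sqrt(T)
ci = (theta_bar - 1.96 * sem_corrected, theta_bar + 1.96 * sem_corrected)
print(f"theta = {theta_bar:.3f}, corrected SEM = {sem_corrected:.3f}")
print(f"approx. 95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The corrected SEM is never smaller than the conventional one (roughly `sqrt(W)`), since the between-imputation term adds the variance contributed by item-parameter uncertainty, which is exactly the component the traditional estimators ignore.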