In the Discussion, the author presents an effective sample size that takes the cluster size and the total sample size into account simultaneously. Under this function, the effective sample size of an ordinary CFA that ignores the multilevel structure should equal the total sample size. However, in the following paragraph the author states that “the ordinary CFA utilizing the total sample size instead of the effective sample size is more susceptible.” This inference seems inconsistent: if the two quantities coincide for ordinary CFA, then using the total sample size rather than the effective sample size should make no difference.
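For concreteness, if the effective sample size follows the usual design-effect correction (this is my assumption; the paper's exact expression may differ), it depends only on N, the cluster size (CS), and the ICC, not on which analysis model is fitted:

```latex
\[
  N_{\text{eff}} \;=\; \frac{N}{\text{DEFF}}, \qquad
  \text{DEFF} = 1 + (\text{CS} - 1)\,\text{ICC}
\]
% Worked example (illustrative values): N = 1000, CS = 20, ICC = 0.10
% DEFF = 1 + 19 * 0.10 = 2.9, so N_eff ~ 1000 / 2.9 ~ 345
```

Under this formulation, N_eff equals N only when ICC = 0 or CS = 1, which is why the contrast the author draws deserves clarification.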
In Study 1, comparing multilevel and ordinary CFA shows that both the Type I error rate and power are larger for ordinary CFA than for multilevel CFA. Could this result be caused by underestimated standard errors of the coefficients? When the multilevel structure is ignored, underestimation of the standard errors is to be expected, and the same trend can be observed in Study 2.
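To illustrate why ignoring clustering inflates rejection rates, here is a minimal simulation sketch (my own, not from the paper) comparing the naive standard error of a sample mean with its empirical sampling variability under clustered data; the parameter values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2024)

def clustered_sample(n_clusters=50, cluster_size=20, icc=0.10):
    """Draw one clustered sample: total variance 1, between-cluster share = ICC."""
    between = rng.normal(0.0, np.sqrt(icc), size=n_clusters)            # cluster effects
    within = rng.normal(0.0, np.sqrt(1 - icc), size=(n_clusters, cluster_size))
    return (between[:, None] + within).ravel()

means, naive_ses = [], []
for _ in range(2000):                                    # Monte Carlo replications
    y = clustered_sample()
    means.append(y.mean())
    naive_ses.append(y.std(ddof=1) / np.sqrt(y.size))    # SE ignoring clustering

empirical_se = np.std(means, ddof=1)                     # true sampling variability of the mean
print(f"average naive SE : {np.mean(naive_ses):.4f}")
print(f"empirical SE     : {empirical_se:.4f}")
# With ICC = 0.10 and cluster size 20, the naive SE is roughly sqrt(2.9) times too small,
# so test statistics are inflated and rejection rates (Type I error and power) rise.
```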
Higher ICC and CS lead to higher detection rates, in terms of both Type I error and power. Could the detection method itself be the cause of this pattern? I could not find a description of the iterative process used to test invariance in this article; perhaps such an iterative procedure could be added.
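One generic form such an iterative procedure might take (a sketch under my own assumptions, not the authors' algorithm) is a sequential loop that frees one constrained loading at a time based on chi-square difference tests; `fit_chisq` below is a hypothetical stand-in for a real model-fitting call, returning toy values for illustration only:

```python
from scipy.stats import chi2

# Hypothetical stand-in for an actual fitting routine: returns the chi-square
# of a model in which the given set of loadings is constrained to equality.
def fit_chisq(constrained: frozenset) -> float:
    # Toy misfit contributions; loading 2 is "truly" noninvariant in this example.
    penalty = {0: 0.5, 1: 0.8, 2: 12.0, 3: 0.3}
    return 50.0 + sum(penalty[i] for i in constrained)

def iterative_invariance_search(n_loadings=4, alpha=0.05, df_diff=1):
    """Free one constrained loading at a time until no release significantly improves fit."""
    constrained = set(range(n_loadings))
    freed = []
    while constrained:
        base = fit_chisq(frozenset(constrained))
        # Chi-square drop obtained by releasing each remaining constraint.
        drops = {i: base - fit_chisq(frozenset(constrained - {i})) for i in constrained}
        best = max(drops, key=drops.get)
        if drops[best] <= chi2.ppf(1 - alpha, df_diff):  # no significant improvement: stop
            break
        constrained.remove(best)                         # flag this loading as noninvariant
        freed.append(best)
    return freed

print("loadings flagged as noninvariant:", iterative_invariance_search())
```

Reporting the stopping rule and the order in which constraints are tested would make it easier to judge whether the ICC and CS effects come from the data or from the search procedure itself.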