Using State Assessments to Impute Achievement of Students Absent from NAEP: An Empirical Study in Four States
In preparing state-level estimates, NAEP has employed standard procedures for dealing with missing data for absent and excluded students. To adjust estimates of achievement for absent students, a basis is needed for imputing plausible achievement scores for those students. Ideally, one would use scores on a parallel test; in the absence of such scores, NAEP has used demographic proxies for achievement. Demographic proxies are clearly less accurate than actual achievement scores on a related test, but until now, achievement test scores have not been systematically available for students selected for NAEP.
The purpose of this study is to estimate the extent to which state assessment scores can improve the adjustment of NAEP data to remove the biases due to absences. An earlier simulation study explored the potential of state assessment scores to improve adjustments for nonparticipation (McLaughlin, Gallagher, and Stancavage, 2004) and found that state assessment scores could be more effective than demographic information in removing absence-related bias. The present study extends that simulation by empirically assessing the potential of state assessment scores for imputing achievement of students absent from NAEP in four states. In these four states, state assessment scores were acquired for students selected to participate in the 2003 NAEP reading and mathematics assessments.
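The contrast between the two adjustment approaches can be illustrated with a toy calculation. The sketch below uses entirely hypothetical data (not from the study) and a deliberately simple setup: each absent student is imputed either with the mean NAEP score of present students in the same demographic group, or with the prediction from a least-squares regression of NAEP scores on state assessment scores among present students. The data are constructed so that absentees score lower than present students within each demographic group, which is the situation in which a demographic proxy leaves residual bias.

```python
# Hypothetical illustration: two ways of imputing NAEP scores for absentees.
# All numbers are invented. True NAEP scores of absent students are carried
# along here only so the bias of each estimate can be measured.

present = [  # (state score, NAEP score, demographic group)
    (20.0, 200.0, "A"), (22.0, 220.0, "A"), (24.0, 240.0, "A"),
    (26.0, 260.0, "B"), (28.0, 280.0, "B"),
]
absent = [  # (state score, demographic group, true NAEP score -- unknown in practice)
    (18.0, "A", 180.0), (23.0, "B", 230.0),
]

# (a) Demographic adjustment: impute each absentee with the mean NAEP score
#     of present students in the same demographic group.
group_means = {}
for g in {grp for _, _, grp in present}:
    scores = [y for _, y, grp in present if grp == g]
    group_means[g] = sum(scores) / len(scores)
demo_imputed = [group_means[g] for _, g, _ in absent]

# (b) State-test adjustment: impute from a least-squares regression of NAEP
#     scores on state assessment scores among present students.
xs = [x for x, _, _ in present]
ys = [y for _, y, _ in present]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
reg_imputed = [intercept + slope * x for x, _, _ in absent]

# Compare each adjusted mean with the true mean over all sampled students.
total = n + len(absent)
true_mean = (sum(ys) + sum(t for _, _, t in absent)) / total
demo_est = (sum(ys) + sum(demo_imputed)) / total
reg_est = (sum(ys) + sum(reg_imputed)) / total
print(round(true_mean, 2), round(demo_est, 2), round(reg_est, 2))
```

Because the toy state scores track NAEP scores exactly and absentees are lower-scoring within each group, the regression-based adjustment recovers the true mean (230) while the group-mean adjustment overstates it (about 241.4). In real data the correlation between the two tests is imperfect, so the regression removes only part of the bias; the empirical question addressed by this study is how much.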
Four research questions guided this study:
- How well do state assessment scores cover absent students?
- Do state assessment scores follow the patterns of NAEP scores?
- How do results of adjustments for absences based on state test data compare to current demographic adjustments for absences?
- Is the use of state assessment data for this purpose feasible?