An Investigation of Why Students Do Not Respond to Questions
Over the past decade, developers of the National Assessment of Educational Progress (NAEP) have substantially changed the mix of item types on the NAEP assessments, decreasing the number of multiple-choice questions and increasing the number of questions requiring short or extended constructed responses. These changes have been motivated largely by efforts to encompass the more complex learning outcomes codified in new curriculum and assessment standards across a number of subject areas. That is, NAEP has attempted to align with widely endorsed recommendations for a greater instructional focus on the development and use of higher-order thinking skills, as well as for assessments that better allow students to demonstrate such skills.
With the inclusion of short and extended constructed-response questions on the NAEP assessments, however, researchers have begun to notice unacceptably high rates of student nonresponse. As a result, NAEP reports, analyses, and the conclusions drawn from them may be confounded by the fact that large numbers of students are not answering some of the questions. Moreover, nonresponse rates appear to vary with student characteristics such as gender and race, which may further threaten the validity of NAEP conclusions.
In this study, we explored potential reasons why students omit responses to assessment questions. Understanding why students fail to answer certain questions may help inform the proper treatment of missing data when estimating item parameters and achievement distributions. It may also help test developers identify strategies for increasing response rates for particular types of questions or for particular groups of students.
The study was exploratory, small in scope, and qualitative in nature. The general approach was to visit schools where the 1998 eighth-grade national NAEP assessments in reading and civics were being conducted and, following the assessment sessions, to interview samples of students about their test-taking behaviors and their reasons for not answering particular questions. In the interviews we also attempted to determine whether students could have correctly answered the questions they had left blank. This design was chosen over designs in which students would take the assessment under more laboratory-like conditions in order to preserve the demand characteristics of a typical NAEP assessment. In this way we hoped to obscure as little as possible the contribution of motivation to NAEP nonresponse.