As longitudinal data from large-scale assessments of academic achievement have become more readily available in the United States, educational policies at the state and federal levels have increasingly required that such data be used as a basis for holding schools and teachers accountable for student learning. If students' test scores are to be used, at least in part, to evaluate teachers or schools, a key question is how. Comparing teachers or schools on the average test-score levels of their students would surely be unfair, because student test performance is strongly associated with variables beyond the control of teachers and schools: prior learning, the social capital of a child's family, and so on. Value-added modeling aims to level the playing field in such comparisons by holding teachers and schools accountable only for the part of student learning that they could plausibly influence.
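To make the idea concrete, here is a minimal sketch of one simple form of value-added estimation, on simulated data. All names, the data-generating process, and the single covariate (prior-year score) are illustrative assumptions, not the method of any particular accountability system: current scores are regressed on prior scores, and the residuals, the part of achievement not explained by prior learning, are averaged by teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 200 students taught by 4 teachers (hypothetical example).
n = 200
teacher = rng.integers(0, 4, size=n)
prior = rng.normal(50, 10, size=n)              # prior-year test score
true_effect = np.array([2.0, -1.0, 0.5, -1.5])  # assumed teacher effects
current = 5 + 0.9 * prior + true_effect[teacher] + rng.normal(0, 5, size=n)

# Step 1: regress current scores on prior scores (a covariate beyond the
# teacher's control) by ordinary least squares.
X = np.column_stack([np.ones(n), prior])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)

# Step 2: the residual is the part of achievement not explained by prior
# learning; its average within each teacher's class is a crude
# value-added estimate for that teacher.
residual = current - X @ beta
value_added = np.array([residual[teacher == t].mean() for t in range(4)])
print(value_added)
```

Real value-added models are considerably richer (multiple years of scores, student and school covariates, shrinkage of noisy classroom means), but they share this core logic: adjust for what teachers cannot control, then attribute the remainder.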