William Sanders (statistician)
William L. Sanders is an American statistician and a senior research fellow with the University of North Carolina at Chapel Hill. He developed the Tennessee Value-Added Assessment System (TVAAS), also known as the Educational Value-Added Assessment System (EVAAS), a method for measuring a teacher's effect on student performance by tracking each student's progress against his or her own prior results over the course of the student's school career, as the student is assigned to different teachers' classes. The system has been used in Tennessee since 1993 and has been adopted by a number of other school districts across the United States. Sanders' approach has been used to support the theory that teacher quality is central to educational achievement. The Pennsylvania and New Hampshire Departments of Education sponsor pilot programs, the Iowa School Board Association sponsors his value-added work in that state, and Battelle for Kids provides training in the interpretation and use of the SAS EVAAS services for participating districts in Ohio.
"Using mixed model equations, TVAAS uses the covariance matrix from this multivariate, longitudinal data set to evaluate the impact of the educational system on student progress in comparison to national norms, with data reports at the district, school, and teacher levels."[1] The model focuses on academic gains rather than raw achievement scores.
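A simplified sketch of a "layered" model of this kind (an illustrative form consistent with the description above, not Sanders' exact published specification) writes student i's test score in year t as a yearly mean plus the accumulated random effects of the teachers that student has had through year t:

    y_{it} = \mu_t + \sum_{k=1}^{t} \theta_{k,\, j(i,k)} + \varepsilon_{it}

Here \theta_{k,j} is the random effect of teacher j in year k, j(i,k) indexes the teacher to whom student i was assigned in year k, and the errors \varepsilon_{i1}, \ldots, \varepsilon_{iT} for a given student are allowed to be correlated across years; that within-student covariance matrix is what the mixed model equations exploit, so a teacher's estimated effect shows up in year-to-year gains rather than in raw score levels.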
Criticism
Dr. Ballou, in Lissitz (Ed.), 2005, "Value Added Models in Education: Theory and Applications," analyzed the TVAAS and determined that value-added assessments of teachers are fallible estimates of a teacher's contribution to student learning, noting that the standard errors of value-added estimates are large. He concludes that value-added models are merely one useful tool that should be used as one of many assessments in a comprehensive system of evaluation.
Researchers from the RAND Corporation studied Dr. Sanders' method and determined that his approach does not satisfactorily account for bias, cautioning that non-educational effects may be mistakenly attributed to teachers, with no way of effectively determining the magnitude of the error.[2] Ballou (2002) and Kupermintz (2003) reach similar conclusions, finding that non-educational factors have a noticeable impact on the evaluation of teachers despite efforts to account for them in the model.[3]
The use of merit pay based on value-added modeling (VAM) has been criticized in articles by Dan Pink, and merit pay more generally as a business practice has been questioned in the Harvard Business Review. The accuracy of VAM for evaluating individual teachers has been further challenged by the Economic Policy Institute and by the mathematician John Ewing.
References
- Harvard Business Review, on merit pay
- Review of strengths and weaknesses of various value-added assessment systems
- Dan Pink, on merit pay
- Economic Policy Institute, "Problems with the Use of Student Test Scores to Evaluate Teachers"
- John Ewing, "Mathematical Intimidation: Driven by the Data"