No doubt, this is a meaningful discussion to have. After all, for states that have been working to incorporate student growth into teacher evaluations, revisiting the weight seems reasonable, so long as the intent behind including a measure of student academic growth remains. The point is that effective teachers clearly have a positive impact on student learning and achievement.
Unfortunately, the singular focus on 50 percent versus 30 percent draws attention away from equally important issues. More critical than pinpointing the precise weight for student growth is asking whether all the evaluation components together add up to a full picture of teacher performance. If the weight of student growth is to be lowered, what will be measured in its place? The current proposal offers only vague answers.
Some participants' sole focus on the weight given to testing has created two camps in the teacher evaluation debate: those who still argue for doing away with measuring student growth altogether, and those who believe we should no longer be asking, "Should student growth be part of teacher evaluations?" but rather, "How can student growth best be incorporated into teacher evaluations?"
For those who continue to challenge the use of student growth, the perfect should not be the enemy of the good. Researchers have never claimed that statistical tools for measuring student growth, such as the student growth percentile model (used in Georgia) or value-added models (VAM), are perfect. But the information these models add to a multiple-measure system can strengthen the feedback teacher evaluations provide.
Notably, following a report by the American Statistical Association critical of the use of VAM, economists Raj Chetty, John Friedman and Jonah Rockoff drafted a point-by-point response.
Chetty, Friedman and Rockoff point to studies that have shown these models directly measure teacher contributions toward student outcomes. Students of teachers with high value-added estimates were more likely to experience several positive outcomes in adulthood, like attending college and earning higher wages. Further, other studies found that when the model controls for students' prior test scores, it does indeed capture teachers' impacts on students.
In response to the argument that VAM scores can change substantially when a different model or test is used, Chetty and his research partners argue that when models account for students' prior achievement, they produce similar results. The researchers readily admit that value-added measures are not perfectly reliable, but they also make the point that no measure is. When used alongside other measures, value-added estimates are far more reliable than traditional, far more subjective components of teacher evaluations, such as observations.
As Georgia debates the nuances of a 50 percent versus 30 percent weight on student growth, my hope is that the Legislature also takes up these other, equally important questions, and that the conversation, in Georgia and nationally, further shifts from "should" to "how" when it comes to ensuring that student growth and achievement remain an integral part of the teacher evaluation process.