"But VAMs have fatal shortcomings. The chief complaint: they are statistically flawed. VAMs are unreliable, producing a wide range of ratings for the same teacher. VAMs do not provide any information about what instructional practices lead to particular results. This complicates efforts to improve teacher quality; many teachers and administrators are left wondering how and why their performance shifted so drastically, yet their teaching methods remained the same." Mark Paige, Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher EvaluationMark Paige's book is a quick, simple view regarding the problems with using value-added models as a part of teacher evaluations. As he points out, the statistical flaws are a fatal shortcoming to using them to definitively settle the questions regarding whether a teacher is effective. In his book, he points to two examples of teachers where those ratings fluctuated widely. When you have a teacher who rates "most effective" to "not effective" within a single year, especially when that teacher used the same methods with similar students, there should be a pause of question and interrogation.
Now, VAM proponents would immediately diagnose the situation thus: "It is rather obvious that the teacher did not meet the needs of students where they are." What is wrong with the logic of this argument? On the surface, arguing that the teacher failed to "differentiate" makes sense. But if there exist "universal teaching methods and strategies" that foster student learning no matter the context, then what would explain the difference? The real danger of using VAMs in this manner is that the logic of "differentiation" invalidates the idea that there are universal, research-based practices to which teachers can turn to improve student outcomes. What's worse, teaching becomes a game of pursuit every single year, in which the teacher seeks out not necessarily the best methods for producing learning of value, but becomes, in effect, a chaser of test results. Ultimately, the school becomes a place where teachers are simply production workers whose job is to produce acceptable test results, in this case, acceptable VAM results.
The American Statistical Association has made it clear: VAMs do not establish causation; they capture correlation. To conclude that "what the teacher did" is the sole cause of test results is to ignore a whole world of other possibilities and factors that have a hand in producing those results. Administrators should be open to the possibility that VAMs do not definitively determine a teacher's effectiveness.
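The correlation point can also be sketched numerically. In the toy simulation below, every teacher is identically effective by construction; an unmeasured classroom-level factor (peer composition, resources, scheduling, pick your confound) plus ordinary sampling noise still spreads a naive classroom-average VAM apart, and a fifth of these identical teachers end up flagged as "ineffective." The factor and its magnitude are assumptions for illustration, not features of any real evaluation system.

```python
import numpy as np

rng = np.random.default_rng(1)

n_teachers = 500
class_size = 25

# Every teacher is equally effective: the true teacher effect is zero.
true_teacher_effect = np.zeros(n_teachers)

# An unmeasured classroom-level factor that differs across teachers
# (assumed magnitude, for illustration only).
classroom_factor = rng.normal(0.0, 0.15, n_teachers)

# Observed average score gain per classroom: true effect (zero) plus
# the confounding factor plus student-level sampling noise.
avg_gain = (true_teacher_effect
            + classroom_factor
            + rng.normal(0.0, 1.0 / np.sqrt(class_size), n_teachers))

# A naive VAM attributes each classroom's average gain to its teacher.
naive_vam = avg_gain

cutoff = np.quantile(naive_vam, 0.2)
print(f"Spread of naive VAM ratings (SD): {naive_vam.std():.2f}")
print(f"Teachers flagged 'ineffective': {(naive_vam < cutoff).mean():.0%}, "
      "even though every teacher has the same true effect")
```

Nothing in the output distinguishes the confound from the teacher, which is precisely why correlation alone cannot settle the question of effectiveness.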
If we continue down the path of using test score results to determine the validity and effectiveness of every practice, every policy, and everything we do in our buildings, we will turn our schools into factories whose sole purpose is to produce test scores. I certainly hope we are prepared to accept, along with that, the lifetime consequences of such decisions.
NOTE: This post is part of a continuing series about the practice of using value-added measures to determine teacher effectiveness, based on my recently completed dissertation research. I make no effort to hide the fact that I think using VAMs to determine the effectiveness of schools, teachers, and educators is poor, misinformed practice. There is enough research out there to indicate that VAMs are flawed, and that their application in evaluation systems has serious consequences.