Due to Race to the Top and the No Child Left Behind waivers, 41 states have now elected to use value-added measures (VAMs) as part of teacher evaluations. This has been done without regard for the limitations of these statistical models and without any supporting research showing that doing so will increase student achievement. What are those limitations? The authors of Vamboozled recently published a post entitled "Top Ten Bits of VAMmunition" that educators can use to defend themselves, with research-based data, against this massive non-research-based shift toward a model of teacher evaluation that will most likely do more damage to education than No Child Left Behind or any other education "reform" of modern times.
I recently came across a journal article entitled "Sentinels Guarding the Grail: Value-Added Measurement and the Quest for Education Reform," which describes a rhetorical study by Rachel Gabriel and Jessica Nina Lester examining the discourse during meetings of the Tennessee Teacher Evaluation Advisory Committee (TEAC) from March 2010 through April 2011. TEAC was a 15-member panel appointed by the governor of Tennessee to develop a new teacher evaluation policy. The authors of the study examined the language used by those on the panel as they deliberated over the various components of a teacher evaluation policy.
What is interesting about this study is that the language employed by those in these meetings betrays some important assumptions and beliefs about teaching, learning, testing, and value-added measures that aren't entirely supported by research or common sense.
According to Gabriel and Lester, value-added measurement became a sort of "Sentinel of Trust" and "Holy Grail" for measuring teacher effectiveness during these meetings, in spite of all the research and literature that points to its limitations. Here are some of the assumptions that those in the TEAC meetings demonstrated through the language they used:
1) Value-added measures alone define effectiveness.
2) Value-added measures are the only "objective" option.
3) Concerns about value-added measures are minimal and not worthy of consideration.
As far as I can see, there is enormous danger when those making education policy buy into these three mistaken assumptions about value-added measures.
First of all, VAMs alone do not define effectiveness. They are based on imperfect tests, often a single score collected at one point in time. Tests can't possibly carry out the role of defining teacher effectiveness because no test is capable of capturing all that students learn. Of course, if you believe by faith that test scores alone equal student achievement, then sure, VAMs are the "objective salvation" you've been waiting for. However, those of us who have spent a great deal of time in schools and classrooms know tests hardly deserve such an exalted position.
Secondly, even value-added measures are not as objective as those who push them would like them to be. For example, the selection of which value-added model to use is riddled with subjective judgments. Which factors to include in and exclude from the model is a subjective judgment, too. Deciding how to rate teachers using these measures requires subjective judgment as well, not to mention that VAMs are not entirely based on "objective tests" either. All the decisions surrounding their development, implementation, and use require subjective judgments rooted in values and beliefs. There is nothing totally objective about VAMs. About the only objective number that results from value-added measures is the amount of money states pay consulting and data firms to generate them.
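To make concrete just how many judgment calls sit inside these models, here is a deliberately toy sketch, in Python, of the kind of regression that typically underlies a value-added score. Everything in it is hypothetical: the data are made up, and the choice of covariates, the way teacher effects are encoded, and the rating cut points are arbitrary choices for illustration, not those of any actual state model. Each numbered comment marks exactly the sort of subjective decision described above.

```python
# A simplified, hypothetical sketch of the kind of model behind a VAM:
# regress each student's current score on a prior score plus whatever
# "control" factors the modelers choose, with an indicator for each teacher.
# The estimated teacher coefficients become the "value-added" scores.
import numpy as np

rng = np.random.default_rng(0)

# Fake data: 300 students, 10 teachers (purely illustrative, not real data)
n_students, n_teachers = 300, 10
prior_score = rng.normal(50, 10, n_students)
poverty_flag = rng.integers(0, 2, n_students)        # one covariate someone chose to include
teacher_id = rng.integers(0, n_teachers, n_students)
true_effect = rng.normal(0, 2, n_teachers)            # invented "true" teacher effects
current_score = prior_score + true_effect[teacher_id] + rng.normal(0, 5, n_students)

# Subjective choice 1: which covariates go into the model (prior score only? poverty? attendance?)
covariates = np.column_stack([np.ones(n_students), prior_score, poverty_flag])

# Subjective choice 2: how teacher effects are encoded (one dummy per teacher here)
teacher_dummies = np.eye(n_teachers)[teacher_id]
X = np.column_stack([covariates, teacher_dummies[:, 1:]])   # drop one dummy as the baseline

coef, *_ = np.linalg.lstsq(X, current_score, rcond=None)
teacher_effects = np.concatenate([[0.0], coef[covariates.shape[1]:]])  # baseline teacher = 0

# Subjective choice 3: how numbers become ratings (the cut points are a value judgment)
ratings = np.digitize(teacher_effects, bins=[-2.0, 2.0])    # arbitrary low / average / high cuts
print(teacher_effects.round(2), ratings)
```

Swap in a different set of covariates or different cut points and the same teachers can plausibly land in different rating categories, which is the point: the claimed "objectivity" ends where these choices begin.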
Finally, those who support value-added measures often just dismiss concerns about the measures as not being a real problem. They argue that VAMs, as flawed as they are, are the "best measures" we've currently got. Now that's some kind of argument! Suppose I were your surgeon and used "tapping on your head" to decide whether to operate on a brain tumor, because "tapping" was the best tool I had? The whole "it's-the-best-we-have" argument does not negate the many flaws and issues, or the potential harm, of using value-added measures. Instead of dismissing the issues and concerns about VAMs, those who advocate for their use in teacher evaluations need to address every concern. They need to be willing to acknowledge the limitations, not simply discard them.
I offer one final, major caution to my fellow teachers and school leaders: it is time to begin asking the tough questions about the use of VAMs in evaluations. I strongly suggest that we learn all we can about the methodology. If anyone uses the phrase "Well, it's too difficult to explain," we need to demand that they explain anyway. Just because something looks complicated does not mean it's effective. Sometimes we as educators are too easily dazzled by the "complicated" anyway. The burden is on those who support these measures to adequately explain them and to support their use with peer-reviewed research, not company white papers and studies by those who developed the measures in the first place.