Friday, November 10, 2017

What Happens When Schools and School Districts Use VAMs to Make Decisions about Teachers?

Many school administrators are using value-added measures (VAMs) to make decisions about teachers as if these statistical measures represented the latest, settled, unquestionable science. Those who do this are making a grave error. Despite the marketing of companies such as SAS, which peddles its EVAAS data system as the salvation of public education, the science behind VAMs is not settled. There is enough doubt about them that the American Statistical Association issued a strong statement in 2014 against their use in decision-making about teachers. In that statement, the ASA reminds educators that:
VAMs typically measure correlation, not causation: Effects---positive or negative---attributed to a teacher may actually be caused by other factors that are not captured in the model. (ASA Statement on VAMs)
Yet administrators still use VAMs to infer that the teacher causes those scores. SAS, which owns the EVAAS model that North Carolina pays millions of dollars for each year, arrogantly claims that its model accounts for all the factors that drive student performance on test scores, even though psychometric experts caution that this isn't possible.

In addition, administrators who use VAMs to make decisions about teachers should know better than to confuse correlation with causation, but any time they base decisions about a teacher's status on VAMs, they are automatically assuming that the teacher caused the test results. If teachers operated in a lab where they controlled all the conditions of learning, and all the characteristics of their learners, then perhaps one could better justify this inference.
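The confounding problem the ASA describes can be made concrete with a toy simulation. The sketch below is purely illustrative (it is not the EVAAS model, and the numbers are hypothetical): two teachers with identical true effectiveness teach classes that differ in an unmeasured factor, such as out-of-school support, and a naive "value-added" comparison of average score gains credits that factor to the teachers.

```python
import random

random.seed(0)

# Hypothetical units: both teachers add the same true amount to student gains.
TRUE_TEACHER_EFFECT = 5.0

def simulate_class(n, support):
    """Average test-score gain for one class of n students.
    `support` is an unmeasured confounder (e.g., out-of-school help)
    that also raises gains but is invisible to the model."""
    gains = [TRUE_TEACHER_EFFECT + support + random.gauss(0, 2)
             for _ in range(n)]
    return sum(gains) / n

# Teacher A's students happen to have more outside support than Teacher B's.
naive_vam_a = simulate_class(30, support=3.0)
naive_vam_b = simulate_class(30, support=0.0)

print(f"Naive value-added, Teacher A: {naive_vam_a:.1f}")
print(f"Naive value-added, Teacher B: {naive_vam_b:.1f}")
# Teacher A looks "more effective" even though both teachers are identical.
```

A model that cannot observe `support` has no way to separate it from the teacher's contribution, which is exactly the correlation-versus-causation gap the ASA warns about.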

But there are other concerns about VAMs too. A recent study by Shen, Simon, and Kelcey (2016) found that "using value-added teacher evaluations to inform high-stakes decision-making may not make for a good teacher." Using VAMs to decide a teacher's status may not have the long-term impact administrators desire. These researchers also recommend that VAMs not be used "to inform disincentive high stakes decisions," that is, any decision regarding a teacher's professional status.

Ultimately, though, I can't help but wonder whether those who are sold on using VAMs in administrative decision-making aren't caught up in chasing short-term gains on a measure that lacks long-term meaning. VAMs aren't settled science, yet administrators use the data as if they were. Any decision made using these data should be balanced with other evidence.

Shen, Z., Simon, C., & Kelcey, B. (2016). The potential consequence of using value-added models to evaluate teachers. eJournal of Education Policy, Fall 2016.

NOTE: My just-completed dissertation was on the practice of using value-added measures to determine teacher effectiveness. Over the next several weeks and months, I plan to share my own insights and personal thoughts on this practice. This is the first of many posts I plan to share on this topic.
