Thursday, January 2, 2014

Using Statistical Models to Predict Future Effectiveness of Teacher Candidates: A New Snake Oil Approach

It was bound to happen sooner or later. Two companies, TeacherMatch and Hanover Research, have found a way to use statistics and test scores to develop a model that predicts how effective teacher candidates might be at raising test scores. These kinds of business ventures make one wonder when the absurdity of this endless pursuit of test scores is going to end.

In an Education Week article entitled "Companies Offer Big Data Tools to Predict Teacher Candidates' Impact," Benjamin Herold describes how these two companies have developed "new algorithm-driven teacher selection tools" that can predict the impact that teacher candidates will have on student test scores. The companies, according to Herold, have already signed up nearly two dozen districts and charter organizations. There are definitely plenty of suckers out there. But Peter Dodge, founder and CEO of Hanover Research, stated clearly what is at the heart of this absurd business idea:
"Public schools are slowly being dragged into a more business-like state of mind."
Dodge concedes that he would like to enlist a couple hundred districts and obtain about $10 million in annual business. Never mind whether or not what he's selling is really good for education.
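To be clear about what is actually being sold here: strip away the marketing language and a "teacher-candidate prediction tool" is, at bottom, a statistical model fit to historical hiring and test-score data. The sketch below is purely illustrative and is not TeacherMatch's or Hanover Research's actual algorithm; the candidate features, the "value-added" outcome, and the data are all invented, simply to show the generic shape of such a product: a regression that tries to predict a future test-score effect from screening data.

```python
# Hypothetical sketch only -- NOT either company's actual model.
# It shows the generic form of an "algorithm-driven" hiring tool:
# regress a later "value-added" test-score metric on candidate
# screening data gathered at hiring time.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Invented screening features: licensure exam score, GPA, years of
# prior experience, and a screening-survey score (all standardized).
n_candidates = 500
X = rng.normal(size=(n_candidates, 4))

# Invented outcome: a noisy "value-added" score, i.e. the candidate's
# later estimated effect on student test scores. Most of the variation
# here is noise, by construction.
y = 0.2 * X[:, 0] + 0.1 * X[:, 3] + rng.normal(scale=1.0, size=n_candidates)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the "prediction tool" and report how well it does on unseen candidates.
model = LinearRegression().fit(X_train, y_train)
print("Held-out R^2:", r2_score(y_test, model.predict(X_test)))
```

Even on this made-up data, where the "true" relationship is known, the held-out fit is weak because the outcome is mostly noise. That is exactly the kind of result a glossy sales deck will not show a district, and exactly why independent evidence should come before any contract is signed.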

From my perspective as a veteran educator, there is so much wrong with this kind of venture that I don't know where to begin. At the heart of it are some faulty assumptions that make me dismiss TeacherMatch and Hanover Research's "new product" as snake oil and poison that should be ignored outright.

  • Assumption that test scores are proxies for student achievement. The tests we currently use are imperfect and do not represent all that students should learn and be able to do. In fact, I am not of the faith when it comes to tests: I do not believe it is possible to develop such tests. This "get-rich" scheme dreamed up by these two companies falsely assumes that test scores are the final say on student achievement.
  • Assumption that the ability to increase test scores equals good teaching. This is the simplistic view of teaching held by business leaders and economists. Teaching and learning are much too complex to be reduced to this simple equation. It doesn't take much to raise test scores if you teach to the test. High test scores are not always an indicator of good teaching, unless you view teaching as primarily a matter of producing high test scores.
  • Assumption that schools are like businesses. This statistical money-making scheme dreamed up by TeacherMatch and Hanover Research reeks of corporate education reform at its worst. It assumes that schools are businesses, which they never have been. Schools don't operate like businesses on so many levels that this kind of thinking is actually detrimental.
Ultimately, we have the Obama administration and Secretary of Education Arne Duncan to thank for this kind of educational quackery. Because of Race to the Top, every economics and business quack has come forward peddling this kind of snake oil. Our federal education policy has enabled these schemes. What's worse, we as educators let their fancy statistical arguments and savvy marketing persuade us that what they're selling really doesn't smell bad.

We should know better. We should demand that companies like Hanover Research and TeacherMatch demonstrate the effectiveness of their products through independent, peer-reviewed research. We should ignore the marketing claims made by these snake oil salesmen and examine the morality of using data in these ways as well.
