
Thursday, April 10, 2014

Let the VAM Lawsuits Begin: Issues and Concerns with Their High-Stakes Use

Lawsuits against states that use value-added models in teacher evaluation decisions have begun in earnest. There are now three lawsuits underway challenging the use of this controversial statistical methodology and the use of test scores to determine teacher effectiveness. This increase in litigation is an indication both of how rapidly states have adopted the practice and of how many issues and concerns these same states failed to address before putting VAMs to this use.

Two lawsuits have now been filed in Tennessee against the use of value-added assessment, known there as TVAAS, as part of teacher evaluation. The first was filed against Knox County Schools by the Tennessee Education Association on behalf of an alternative school teacher who was denied a bonus because of her TVAAS ratings. (See "Tennessee Education Association Sues Knox County Schools Over Bonus Plan.") In this case, the teacher was told she would receive system-wide TVAAS estimates because of her position at an alternative school, but scores from 10 of her students were used anyway to calculate her own TVAAS rating, resulting in a lower rating and no bonus. This lawsuit contests the arbitrariness of TVAAS estimates that use only a small number of a teacher's students to determine overall effectiveness.

In the second lawsuit, filed against Knox County Schools along with Tennessee Governor Bill Haslam, state Commissioner of Education Kevin Huffman, and the Knox County Board of Education, an eighth-grade science teacher claims he, too, was unfairly denied a bonus after his TVAAS value-added rating was based on only 22 of his 142 students. (See "TEA Files Second Lawsuit Against KCS, Adds Haslam and Huffman as Defendents.") Again, the lawsuit points to the arbitrariness of the TVAAS ratings.

A third lawsuit has been filed in Rochester, New York, by the Rochester Teachers Association, alleging that officials in that state "failed to adequately account for the effects of severe poverty, and as a result, unfairly penalized Rochester teachers on their Annual Professional Performance Review," or yearly teacher evaluations. (See "State Failed to Account for Poverty in Evaluations.") While it appears that the Rochester suit disputes the use of growth-score models rather than value-added models, it also challenges the whole assumption, a recent fad pushed by politicians and policymakers, that test scores can be used to evaluate teachers.

North Carolina jumped on the value-added bandwagon in response to US Department of Education coercion, and the state now uses a version of TVAAS called EVAAS, the Education Value-Added Assessment System, as part of teacher and principal evaluations. Fortunately, no districts have yet had to make high-stakes decisions using the disputed measures, so the lawsuit floodgate hasn't opened in our state, but I am sure that once EVAAS is used to make employment decisions, the lawsuits will begin. When they do, the American Statistical Association has already outlined the likely areas of contention in its ASA Statement on Using Value-Added Models for Educational Assessment. Here are some points from that position statement that clearly outline the questions about using this highly questionable statistical methodology in teacher evaluations.
  • "VAMs (Value-added models) are complex statistical models, and high-level statistical expertise is needed to develop the models and interpret their results." States choosing to use these models are trusting third-party vendors to develop them and provide the ratings, while expecting educators to interpret the results effectively. Because so much can go wrong in interpreting VAM results, the ASA is warning that people with the expertise to interpret them are needed. I wonder how many of the states that have implemented these models have spent the time and money to train teachers and administrators to interpret these results, beyond subjecting educators to one-time webinars or "sit-n-gets"?
  • "Estimates of VAM should always be accompanied by measures of precision and a discussion of the assumptions and possible limitations of the model. THESE LIMITATIONS ARE PARTICULARLY RELEVANT IF VAMS ARE USED FOR HIGH STAKES PURPOSES (Emphasis Mine)." I can't speak for other states, but in North Carolina there has been little to no disclosure or discussion of the limitations of value-added data. (The toy simulation after this list shows just how large those limitations can be.) There has been more public relations, advertising, and promotion of the methodology as a new way of evaluating educators; the state even has SAS promoting the methodology for it, and the Obama administration has done the same. The attitude in North Carolina seems to be, "We're gonna evaluate teachers this way, so deal with it." There needs to be discussion and disclosure about SAS's EVAAS model and the whole process of using tests to evaluate teachers in North Carolina. Sadly, that's missing, and I can bet it's the same in other states too.
  • "VAMs are generally based on standardized test scores, and do not directly measure potential teacher contributions toward other student outcomes." In other words, VAMs only tell you how students do on standardized tests. They can't tell you about the many, many other ways teachers contribute to students' lives. The main underlying assumption in using VAMs for teacher evaluations is that only test scores matter, regardless of what supportive policymakers say. While it's true that the North Carolina evaluation model does include other standards, how long will it take administrators and policymakers to ignore those standards and zero in on test scores as the ones that matter most? The adage "What gets tested, gets taught!" is true, and "What gets emphasized the most through media and promotion matters the most" is equally true. When standard 6 or 8 is the only standard on the educator evaluation where an educator is "In Need of Improvement," you can bet test scores will suddenly matter more than anything else.
  • "VAMs typically measure correlation, not causation: Effects (positive or negative) attributed to a teacher may actually be caused by other factors that are not captured in the model." There are certainly many, many things (poverty, lack of breakfast, runny noses) that can contribute to a student's test score, yet those pushing VAMs in teacher evaluations cling to the belief that a teacher directly causes a test score to happen. Their biggest assumption is that the teacher's job, in whole or in part, is the production of test scores. In reality, teaching is far more complex than that, and those who reduce it to a test score have probably not spent much time teaching themselves.
  • "Most VAM studies find that teachers account for about 1% to 14% of the variability in test scores, and that the majority of the opportunities for quality improvement are found in system-level conditions." Yet in most states, educational improvement falls almost entirely on the backs of educators in the form of VAM-powered teacher evaluations. There is little effort to improve the system itself: no effort to improve classroom working conditions, fund professional development, or provide adequate materials and resources. Instead of looking at how the system prevents excellence and innovation with its top-down mandates and other ineffective measures, many states, including North Carolina, and the Obama administration place accountability squarely on the backs of educators in classrooms and schools. If the education system is broken, you don't tinker with the parts; you improve the whole.
  • "Ranking teachers by their VAM scores can have unintended consequences that reduce quality." If all important learning could be reduced to a one-time, bubble-sheet test, then all would be well for VAM and the ranking of teachers. But every educator knows that tests measure only a minuscule portion of important learning; many important learning experiences can't be measured by tests at all. Yet if you elevate tests in a high-stakes manner, their results become the most important outcome of the school and the classroom. The end result is teaching to the test and test prep, where the test becomes the curriculum and high scores become the goal of teaching. If that's the goal of teaching, who would want to be a teacher? Elevating test scores through VAM will only escalate the exit of teachers from the profession and discourage others from entering it, because there's nothing fulfilling about raising test scores. We didn't become educators to raise test scores; we became educators because we wanted to teach kids.
  • "The measure of student achievement is typically a score on a standardized test, and VAMs are only as good as the data fed into them." Ultimately, VAMs are only as good as the tests that provide the data feeding the model. If the tests don't adequately measure the content, or if they are not standardized or otherwise of high quality, then the VAM estimates are of equally dubious quality. North Carolina scrambled to create multiple tests on the fly in many high school, middle school, and elementary subjects just to have data to feed its EVAAS model. Yet those tests, the process of their creation and field testing, and even how they're administered make them questionable candidates for serious VAM use. VAMs require high-quality data to produce high-quality estimates; the idea that any old test will do is anathema to them.
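
To make the ASA's points about precision and causation concrete, here is a minimal simulation. It is my own toy sketch with invented numbers, not TVAAS, EVAAS, or any state's actual model. Every teacher in it is equally effective, classes are small (like the 10 students in the Knox County case), and a classroom-level poverty effect is left out of the model, yet the "value-added" estimates spread widely and track poverty:

    # A toy simulation (NOT any vendor's actual model). All numbers are invented
    # assumptions chosen only to illustrate the ASA's precision and causation points.
    import numpy as np

    rng = np.random.default_rng(42)
    n_teachers = 100
    students_per_teacher = 10   # cf. the Knox County teacher rated on 10 students

    # Assume every teacher is equally effective: the true teacher effect is 0 for all.
    poverty = rng.uniform(0, 1, n_teachers)   # classroom poverty rate (omitted factor)

    estimates, std_errs = [], []
    for t in range(n_teachers):
        # Student score growth = omitted poverty effect + individual student noise.
        growth = -5.0 * poverty[t] + rng.normal(0, 15, students_per_teacher)
        estimates.append(growth.mean())   # a naive "value-added" estimate
        std_errs.append(growth.std(ddof=1) / np.sqrt(students_per_teacher))

    estimates = np.array(estimates)
    print(f"Spread of 'teacher effects' (all truly zero): sd = {estimates.std():.1f}")
    print(f"Typical standard error with only 10 students: {np.mean(std_errs):.1f}")
    print(f"Correlation of rating with classroom poverty: "
          f"{np.corrcoef(estimates, poverty)[0, 1]:.2f}")

Under these made-up assumptions, teachers who are all identical still receive ratings that differ by many points, the uncertainty on each rating is as large as the differences being rewarded or punished, and the ratings correlate with poverty, a factor the naive model never sees. Real models are more sophisticated, but the ASA's warning is precisely that these problems shrink only when sample sizes are large and the model's assumptions actually hold.
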
The American Statistical Association's position statement also makes some supportive statements about value-added models. They can be used effectively as part of the data teachers rely on to adjust classroom teaching. But when a state does not return scores until October or later, three months into the school year, it's impossible to use that data to inform instruction, and a bare rating by itself does little to inform teaching anyway. Testing gives policymakers an opportunity to supply teachers with valuable data for improving instruction; sadly, the data currently provided is too little, too late.

As the VAM-fed teacher evaluation fad continues to grow, it is important for all educators to inform themselves about this controversial statistical practice. It is not a methodology without issues, despite what the Obama administration and state education leaders say. Being knowledgeable about it means understanding its limitations as well as how to properly interpret and use the data it produces. Don't wait for states and the federal government to provide that information: they are too busy promoting its use. The points made in the American Statistical Association's Statement on Using Value-Added Models for Educational Assessment are excellent points of entry for learning more.

Saturday, January 4, 2014

More About Using Voodoo Value-Added Measures to Determine Teacher Candidate Quality

Two days ago, I posted about Teacher Match and Hanover Research, two companies that are now using value-added statistical modeling to predict prospective teachers' effectiveness at raising test scores. ("Using Statistical Models to Predict Future Effectiveness of Teacher Candidates: A Snake Oil Approach") As I pointed out, there are some major flaws, especially in the assumptions about teaching, in using this approach as even a part of a new-teacher selection process. Here are some more thoughts on this heinous practice:

1. It elevates standardized testing even higher in schools' decision-making processes. It uses imperfect assessments to make decisions about whether a new teacher can raise test scores. States haven't done the validation studies to prove that the inferences they make from scores are valid; they develop tests on the cheap, or they purchase ready-made tests of questionable validity that were not designed for the purposes for which they're being used. Tests do not deserve this level of emphasis. This practice, by default, treats raising test scores as the goal of good teaching.

2. It makes the hiring processes of schools and districts even more mysterious. In one district where Teacher Match is used, a source inside the district reported a major decrease in applicants because candidates were being asked to submit to this mysterious process before being hired. I suspect this would be a problem with any product of this kind. Besides, who wants to go into teaching to become the best test-score raiser in the business? These voodoo products will only make it harder to find teaching candidates, not easier, and they give teachers the wrong message up front: your primary job is to raise test scores.

3. It is just another expensive drain on already scarce educational resources. One district contract with Teacher Match showed the district paying well over $30,000 per year for the service. In tight budgetary times, when teachers are spending hundreds of dollars of their own money on school supplies, it amazes me that a district could morally justify spending this kind of money on a statistical gimmick. Districts are throwing more and more money into these statistical quackery schemes when there are so many other pressing needs.

4. School districts, as I have witnessed many, many times in my 24 years as an educator, are purchasing products like Teacher Match based entirely on the companies' promises and marketing. Instead of accepting a company's word that its product will do what it claims, districts need to demand independent, peer-reviewed research. If a company can't produce those studies, tell it to come back when it can. And because I am no firm believer that high test scores equal good teaching, companies need to use measures other than test scores to prove their products are effective.

5. The fact that companies like Teacher Match and Hanover Research even exist in the education industry is due to the Obama administration's insistence on elevating the importance of test scores in everything a school does. This legacy will leave public education in worse shape than George W. Bush's No Child Left Behind did. Arne Duncan and his Department of Education believe that data is data and that any old data will do as long as it is "objective." This shows immediately that he and his cohorts do not have a clue about education. When non-educators like Duncan, and half his Department of Education, are in charge, you get these kinds of detrimental approaches to education.

6. A major assumption behind Teacher Match and other statistical quackery products like it is that schools can be operated like businesses whose business is churning out high test scores. This assumption about public education is wrong. Because of current federal policy, public schools are increasingly viewed as exactly that kind of business, which might be acceptable if the goal of your education system were to produce "high-quality test takers." What the education policy of President Obama and Arne Duncan is doing is destroying the culture of public education, test score by test score.

Teacher Match's Educator's Professional Inventory and Hanover Research's Paragon K12 are the latest value-added voodoo products to be peddled to school districts. They will only serve to elevate the importance of test scores even higher than it already is. Districts even thinking about purchasing this snake oil should be ashamed of wasting limited education money on such products. There comes a time when you have to realize that statistics aren't going to tell you everything you really need to know; not everything can be reduced to numbers subject to statistical analysis. My fear is that some administrators who see test scores as the sole goal of their school are going to use this data as the only basis for hiring someone. Can you imagine a profession where whether you can produce high test scores determines your entry, and whether you keep producing them determines whether you can stay? That, folks, is a factory model of educational delivery if I have ever heard of one!

Wednesday, December 25, 2013

D.C. Teachers Suffer Faulty Evaluations at Hand of Value-Added Measures: Is NC on Same Path?

I have made it known that I am no fan of using value-added measures in teacher evaluations. There is just too much room for error, and too many things can go wrong, from the test itself to the calculations. Value-added calculations are done in a mysterious black box, with too little oversight and too few protective measures in place to ensure that the data is error-free. As the Washington Post reports in "Errors Found in D.C. Teacher Evaluations," more than 40 teachers received incorrect evaluations for the 2012-2013 school year. One teacher was even fired due to the miscalculations. That is totally unacceptable and should never happen.

Many states, including my own, have adopted the value-added measure fad without piloting or studying it at all, beyond listening to the sales pitches and lobbying of the companies peddling the methodology. In North Carolina, there is currently no recourse for challenging the scores either: if teachers suspect their ratings are incorrect, there is no way to independently validate them. But if your goal is to implement corporate reform measures, miscalculations and faulty teacher ratings are apparently acceptable, as long as the reform gets implemented. According to an additional post on the Washington Post Web site, "D.C. Schools Gave 44 Teachers Mistaken Job Evaluations," faulty calculations "of the value that D.C. teachers added to student achievement in the last school year resulted in erroneous performance evaluations for 44 teachers, including one who was fired because of a low rating."

This incident illustrates clearly that value-added measures used in teacher evaluations are too error-prone and should be discarded. When education policy gets too caught up in numbers and statistics, people, whether teachers or students, don't matter as much to the number-crunchers. The Obama administration should be ashamed of mandating this mistaken education policy to states in the first place. States that have implemented these measures need to discard this statistical fad immediately, because it will ultimately do more to harm education than to help it. North Carolina needs to drop the fad too and begin moving its educational system into the 21st century. Sadly, our state leaders are so blinded by the numbers that they just can't let go.

Friday, December 20, 2013

Is EVAAS a 'Clear Path to Global Ed Excellence' or Product of Grandiose Marketing?

According to a recent post by Audrey Amrein-Beardsley on her blog VAMBoozled!, "VAMs (Value-added measures) have been used in Tennessee for more than 20 years," and they are the brainchild of William Sanders, who was an agricultural statistician and adjunct professor at the University of Tennessee, Knoxville, when he introduced them. Sanders simply thought, according to Amrein-Beardsley, "that educators struggling with student achievement in the state could simply use more advanced statistics, similar to those used when modeling genetic reproductive trends among cattle, to measure growth, hold teachers accountable for that growth, and solve educational measurement woes facing the state at that time."

Sanders went on to develop TVAAS (the Tennessee Value-Added Assessment System), which later became EVAAS (the Education Value-Added Assessment System), now owned and marketed by SAS Institute in North Carolina. Today, SAS EVAAS is the "most widely adopted and used, and likely the most controversial VAM in the country," according to Amrein-Beardsley. According to her post "What's Happening in Tennessee?", these are some of the lesser-known and controversial aspects of SAS's EVAAS:

  • "It is a proprietary model (costly and used/marketed under the exclusive legal rights of the inventors/operators.)" EVAAS is the property of a private company whose responsibility is to profits, not necessarily to what's good for kids or teachers. Four states, Tennessee, North Carolina, Ohio, and Pennsylvania, pay millions for the ability to use this Value-added model.
  • EVAAS is "akin to a 'black box' model. It is protected by SAS with a great deal of secrecy and total lack of transparency. This model has not been independently validated, and Sanders has never allowed access for others to independently validate the model.
  • "The SAS EVAAS web site developers continue to make grandiose marketing claims without much caution or any research evidence to support these claims. 
  • "VAMs have been pushed  on American public schools by the Obama Administration and Race to the Top."
  • SAS makes this marketing claim on their web site: "Effectively implemented, SAS EVAAS for K-12 allows educators to recognize progress and growth over time, and provides a clear path to achieve the US goal to lead the world in college completion by the year 2020."
There is no doubt that EVAAS, or some other VAM product, has been foisted on states and school districts by direct mandate from the Obama administration. One could also argue that EVAAS is a "black box" model: it hasn't been independently studied, the inferences our state makes using it have not been independently validated, and SAS keeps the model hidden behind claims of proprietary ownership.

Finally, are the marketing claims grandiose, as Amrein-Beardsley indicates? I would have to agree that the claim that EVAAS "provides a clear path to achieve the US goal to lead the world in college completion by the year 2020" is pretty far out there. On what research do they base that claim? What studies have they used to validate it? None are provided. The SAS web site employs a number of statements that offer no supporting research. Then again, its purpose is marketing a product, not making a case for the product's validity. The problem is, SAS does not provide those research-based validations anywhere else either.

But set the concerns about the technical aspects of the model aside. For me, the whole problem with EVAAS is that it elevates test scores to a level they do not deserve. North Carolina's state testing system is haphazardly assembled and far from trustworthy enough to base any kind of high-stakes decision upon. I also find something fundamentally inequitable in using EVAAS to determine any kind of rating for educators. Educators deserve to understand how those ratings are derived, down to the decimal points and computations. If the formula can't be explained so that educators can understand all aspects of it, it has no place in evaluations.

But it seems issues are surfacing in the birthplace of EVAAS. Interestingly, Amrein-Beardsley points out that Tennessee is having trouble with its use: school boards across the state are increasingly opposing the use of TVAAS in high-stakes decisions. Some of the reasons, according to Amrein-Beardsley:
  • TVAAS is too complex to understand.
  • Teachers' scores are highly and unacceptably inconsistent from one year to the next, which undermines their validity (the sketch after this list shows why this happens whenever noise dominates).
  • Teachers are being held accountable for things that are out of their control, such as what happens to students outside the school building.
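
That year-to-year inconsistency is exactly what statistics predicts when classroom-level noise swamps real differences between teachers. Here is a minimal sketch with invented numbers of my own (this is not TVAAS), showing how little two years of ratings for the same teachers agree when most of each rating is noise:

    # A toy sketch with invented assumptions (NOT TVAAS). It shows why ratings
    # bounce between years when noise is large relative to true teacher differences.
    import numpy as np

    rng = np.random.default_rng(0)
    n_teachers = 500
    true_effect = rng.normal(0, 3, n_teachers)  # assume modest real differences (sd = 3)
    noise_sd = 6                                # assume classroom noise twice as large

    # Each year's rating = the same true effect + fresh noise from that year's class.
    year1 = true_effect + rng.normal(0, noise_sd, n_teachers)
    year2 = true_effect + rng.normal(0, noise_sd, n_teachers)

    print(f"Year-to-year correlation of ratings: {np.corrcoef(year1, year2)[0, 1]:.2f}")
    # Expected correlation = var(true) / (var(true) + var(noise)) = 9 / 45 = 0.2,
    # so a teacher rated "top fifth" one year can easily land mid-pack the next.

Under these assumptions the correlation comes out around 0.2, in the neighborhood of the instability critics report. The point is not the exact number but that low year-to-year agreement is the mathematically expected outcome whenever the noise dwarfs the signal.
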
North Carolina has jumped on the VAM bandwagon and is holding on for dear life. To make the whole system work, our state has implemented the largest number of state tests in its history. Let's just hope all this emphasis on test scores doesn't destroy our schools. I certainly hope we don't have to live with this for 20 years!

Wednesday, November 27, 2013

Misplaced Faith in Value-Added Measures for Teacher Evaluations

Due to Race to the Top and the No Child Left Behind waivers, 41 states have now elected to use value-added measures, or VAMs, as a part of teacher evaluations. This has been done without regard for the limitations of these statistical models and without any supporting research showing that doing so will increase student achievement. What are those limitations? In a recent post entitled "Top Ten Bits of VAMmunition," the authors of VAMBoozled provide research-based points educators can use to defend themselves against this massive, non-research-based shift toward a model of teacher evaluation that will most likely do more to damage education than No Child Left Behind or any other education "reform" of modern times.

I recently uncovered a journal article entitled "Sentinels Guarding the Grail: Value-Added Measurement and the Quest for Education Reform," which describes a rhetorical study by Rachel Gabriel and Jessica Nina Lester examining the discourse of the Tennessee Teacher Evaluation Advisory Committee, or TEAC, from March 2010 through April 2011. TEAC was a 15-member panel appointed by the governor of Tennessee to develop a new teacher evaluation policy. The authors examined the language used by those on the panel as they deliberated over the various components of that policy.

What is interesting about this study is that the language employed by those in the meetings betrays some important assumptions and beliefs about teaching, learning, testing, and value-added measures that aren't entirely supported by research or common sense.

According to Gabriel and Lester, value-added measurement became a sort of "sentinel of trust" and "holy grail" for measuring teacher effectiveness during these meetings, in spite of all the research and literature pointing to its limitations. According to the authors, here are some of the assumptions those in the TEAC meetings revealed through the language they used:

1) Value-added measures alone define effectiveness.
2) Value-added measures are the only "objective" option.
3) Concerns about value-added measures are minimal and not worthy of consideration.

As far as I can see, there is enormous danger when those making education policy buy into these three mistaken assumptions about value-added measures.

First of all, VAMs alone do not define effectiveness. They are based on imperfect tests, often a single score collected at one point in time. Tests cannot possibly carry the role of defining teacher effectiveness, because no test is capable of capturing all that students learn. Of course, if you believe on faith that test scores alone equal student achievement, then sure, VAMs are the "objective salvation" you've been waiting for. But those of us who have spent a great deal of time in schools and classrooms know tests hardly deserve such an exalted position.

Secondly, value-added measures are not even as objective as those who push them would like them to be. The selection of which value-added model to use is riddled with subjective judgments, as is the choice of which factors to include in or exclude from the model. Decisions about how to rate teachers using the results require subjective judgment as well, and the "objective tests" feeding the models involve judgment calls of their own. Every decision surrounding their development, implementation, and use rests on values and beliefs; there is nothing totally objective about VAMs. About the only objective number that results from value-added measures is the amount of money states pay consulting and data firms to generate them. The sketch below shows how much just one of these subjective choices can matter.
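
Here is a minimal illustration of that subjectivity, using invented data and two invented model specifications (neither is any vendor's actual model): rate the same teachers with and without an adjustment for classroom poverty, and watch the rankings shuffle.

    # Toy data and two toy model specifications, invented for illustration only.
    # The point: a single "subjective" modeling choice reshuffles teacher rankings.
    import numpy as np

    rng = np.random.default_rng(1)
    n_teachers, n_students = 50, 25
    teacher_ids = np.repeat(np.arange(n_teachers), n_students)
    poverty = np.repeat(rng.uniform(0, 1, n_teachers), n_students)     # classroom poverty
    true_effect = np.repeat(rng.normal(0, 2, n_teachers), n_students)  # real differences
    growth = true_effect - 4 * poverty + rng.normal(0, 10, teacher_ids.size)

    def rank_teachers(adjust_for_poverty: bool) -> np.ndarray:
        adjusted = growth
        if adjust_for_poverty:
            # One modeling choice: regress growth on poverty, keep the residual.
            slope, intercept = np.polyfit(poverty, growth, 1)
            adjusted = growth - (slope * poverty + intercept)
        means = np.array([adjusted[teacher_ids == t].mean() for t in range(n_teachers)])
        return means.argsort().argsort()   # each teacher's rank, 0 = lowest

    rank_plain = rank_teachers(adjust_for_poverty=False)
    rank_adjusted = rank_teachers(adjust_for_poverty=True)
    changed = (rank_plain != rank_adjusted).sum()
    print(f"Teachers whose rank changed with one modeling choice: {changed} of {n_teachers}")

With these made-up numbers, most teachers' ranks move when the poverty adjustment is toggled, and teachers in high-poverty classrooms move the most. Neither specification is the objective one; someone chose.
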

Finally, those who support value-added measures often simply dismiss concerns about them as not a real problem, arguing that VAMs are the "best measures" we currently have, flawed as they are. Now that's some kind of argument! Suppose I were your surgeon and decided whether to operate on a brain tumor by "tapping on your head," because tapping was the best tool I had? The whole it's-the-best-we-have argument does not negate the many flaws of value-added measures or the potential harm of using them. Instead of brushing aside the issues and concerns about VAMs, those who advocate for their use in teacher evaluations need to address every concern. They need to be willing to acknowledge the limitations, not simply dismiss them.

I offer one final caution to my fellow teachers and school leaders: it is time to begin asking the tough questions about the use of VAMs in evaluations. I strongly suggest we learn all we can about the methodology. If anyone uses the phrase "Well, it's too difficult to explain," we need to demand an explanation anyway. Just because something looks complicated does not mean it's effective; sometimes we educators are too easily dazzled by the complicated. The burden is on those who support these measures to explain them adequately and to support their use with peer-reviewed research, not company white papers and studies by those who developed the measures in the first place.

Wednesday, June 5, 2013

5 Words Educators Need to Forget

You can tell a great deal about education by the words and phrases educational policymakers and educators are currently using. Not too long ago, I remember educators, administrators, and policymakers throwing around the terms "total quality management," "outcome-based education," and "site-based management." You don't hear those words as often now, for a variety of reasons: the ideas are no longer popular, or someone decided to repackage them and call them something else. Either way, the language in vogue says a great deal about the values of those determining education policy and reform. It is with this thought in mind, and with a mix of sincerity and lighthearted fun, that I give you my personal list of the most currently misused words in education.

1. Value-Added: This term, obviously commandeered from business and industry, is my personal number-one worst word in education for a reason. As it's used in education, it implies that our students, our kids, are things we make more valuable in some way through the processes and "education" we subject them to. By its nature, it implies that the object to which the value is added has no say or part in creating that value. At worst, it is demeaning, because it reduces the kids in our classrooms to objects or raw materials. This term has no place in education, unless of course you are educating widgets.

2. Technology Integration: The definition of integrate is "to form, coordinate, or blend into a functioning or unified whole." Educators have been talking for years about "technology integration" as if that's somehow going to change things and students will suddenly learn more. The problem is, if we blend technology into a classroom that is already dysfunctional, or into an education system that already fails too many students, we get a classroom where students use technology but still don't learn, and an education system that uses technology merely to streamline the process of failing too many students. The word "integration," when used with technology, implies that we can blend all these wonderful devices into the classrooms we have and, presto, we have successful teaching and learning. This naive view of technology and education has outlived its usefulness, and the term should no longer have a place in our discussions about technology's role in education.

3. Technology Infusion: I'm not sure this term is any better than "integration." Infuse means "to cause to be permeated with something (as a principle or quality) that alters usually for the better." Some in education have talked about "infusing technology in the classroom," but the problem with this word, like "integration," is that it simply takes what exists and adds technology to it. What if the existing pedagogy or educational practice is bad? Will giving it an "infusion" of technology somehow make it better? Perhaps, but only if that infusion addresses the underlying problems to begin with. "Tech infusion" and "tech integration" are terms educators need to jettison; both impart "salvation" abilities to technology that it simply doesn't have by itself.

4. Achievement: No, I do not advocate doing away with student achievement; after all, we're in the student achievement business. What I do advocate is dropping the word "achievement" when everybody knows we're talking about test scores. Why not just say "test scores"? We know that's what is meant when policymakers and politicians start talking about achievement. Let's keep in mind, though, that achievement and test scores are not interchangeable terms, because whether a single test score represents what a student has achieved is open for discussion and debate.

5. School Executive: What's wrong with "principal" or "administrator"? Does calling oneself an "executive" fundamentally change what we do and who we are? This trend of calling school administrators "executives" betrays a belief that executives somehow have more power or prestige. The truth is, you can change the name all you want, but unless something changes about the role or the job, it is still what it has always been.

There's a great lesson in the current verbiage used in education. If you really want to assess the current feeling of educational reform and policymaking, pay attention to the language. It always betrays what the people who are making the rules are really thinking and what their real agendas are.