So, a brief introduction to VAM:
- I'd suggest anyone not familiar with the basics of VAM start here: David Leonhardt in the NY Times:
A few months ago, a team of reporters at The Los Angeles Times and an education economist set out to create precisely such a consumer guide to education in Los Angeles. The reporters requested and received seven years of students’ English and math elementary-school test scores from the school district. The economist then used a statistical technique called value-added analysis to see how much progress students had made, from one year to the next, under different third- through fifth-grade teachers. The variation was striking. Under some of the roughly 6,000 teachers, students made great strides year after year. Under others, often at the same school, students did not. The newspaper named a few teachers — both stars and laggards — and announced that it would release the approximate rankings for all teachers, along with their names.

- Next, take a look at the LA Times project about VAM that Leonhardt references. But be warned: the LA Times is taking an advocacy position here:

What is "value-added" analysis? "Value-added" analysis is a statistical method that estimates the effectiveness of a teacher or school by looking at the standardized test scores of students -- in this instance, math and English scores on the California Standards Tests. Past scores are used to project each student's future performance. The difference between the child's actual and projected results is the estimated "value" that the teacher or school added (or subtracted) during the year.

- As I've written before, I think the LA Times is on dangerous ethical ground here; they are making money by publishing this data, so how can they be expected to report objectively on the controversy surrounding VAM?
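The "projected versus actual" logic in that definition is simple enough to sketch in a few lines. Here is a minimal toy version in Python, with made-up scores and hypothetical teacher IDs; the actual LA Times analysis used a far more elaborate regression model, so treat this only as an illustration of the arithmetic:

```python
# A minimal sketch of the "value-added" idea described above, using
# made-up data. Real VAM models (including the one behind the LA Times
# analysis) are far more elaborate; this only illustrates the
# "actual minus projected" logic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical students: prior-year score, current score, teacher ID.
n = 300
prior = rng.normal(500, 50, n)                       # last year's scaled scores
teacher = rng.integers(0, 10, n)                     # 10 hypothetical teachers
current = 0.8 * prior + 110 + rng.normal(0, 25, n)   # this year's scores

# Step 1: project each student's current score from the prior score,
# via ordinary least squares across all students.
slope, intercept = np.polyfit(prior, current, 1)
projected = slope * prior + intercept

# Step 2: the residual (actual minus projected) is the student-level
# "value added (or subtracted)" during the year.
residual = current - projected

# Step 3: a teacher's value-added estimate is the average residual
# of the students in that teacher's class.
for t in range(10):
    va = residual[teacher == t].mean()
    print(f"teacher {t}: estimated value-added = {va:+.1f} points")
```

Notice that everything a student brings to the classroom that the prior score doesn't capture ends up in the residual -- and therefore in the teacher's "score." That is the crux of most of the objections below.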
Yes, it is controversial. The Economic Policy Institute has published the most comprehensive set of objections to date:
A review of the technical evidence leads us to conclude that, although standardized test scores of students are one piece of information for school leaders to use to make judgments about teacher effectiveness, such scores should be only a part of an overall comprehensive evaluation. Some states are now considering plans that would give as much as 50% of the weight in teacher evaluation and compensation decisions to scores on existing tests of basic skills in math and reading. Based on the evidence, we consider this unwise.

The objections include:
- State standardized tests were never meant to measure teacher effectiveness, and are not good indicators of a teacher's abilities.
- State standardized tests have many documented problems: inconsistency year-to-year, inconsistency state-to-state, cheating, poor test design and grading standards, etc. It's inappropriate to make high-stakes decisions with such unreliable instruments.
- VAM would apply to only 10% to 20% of the teaching staff in a school district, because so many teachers do not teach students who take standardized tests (K-2 teachers, Grade 10-12 teachers), do not teach subjects that use standardized tests (music, art, library, PE, foreign language, etc.), or work in collaboration with classroom teachers (guidance, reading specialists, ESL, etc.).
- The technical issues with the statistics used in VAM and their application are well documented. In a striking report, Mathematica Policy Research found error rates of 25% to 35% when attempting to use VAM to compare a teacher's performance to the average; the simulation sketch after this list shows how easily that kind of noise arises.
- VAM assumes that children are randomly assigned to classrooms; every teacher knows nothing could be further from the truth.
- While some VAM models try to account for things like poverty, the data on individual students is so spotty, inconsistent, and poorly delineated that it simply can't account well for the life of the student outside of school. Also, there is a real danger from using racial classifications in teacher evaluations.
- VAM assumes many factors outside of school that influence a child's life don't change over time; again, every teacher knows nothing could be further from the truth (ever teach a child whose parents are going through a divorce?).
- VAM "scores" can easily lead parents and administrators to believe that VAM is far more accurate than even its champions claim. Frankly, it takes a level of sophistication about statistics to interpret VAM evaluations correctly that few people possess.
- VAM will inevitably lead to a huge legal mess: given its documented unreliability, hearings over teacher dismissals will spawn extended legal actions.
- Finally: as in any use of data to make high-stakes decisions, there are inevitably major screw-ups, like this teacher who was out of the country on a Fulbright scholarship last year and now wonders what data was used to rate her.
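On the error-rate point: a tiny Monte Carlo makes the problem concrete. The numbers below (class size, noise level, cutoff) are all arbitrary choices of mine, not Mathematica's actual model, but they show how a single year of data can misclassify a perfectly average teacher at rates in the same broad range the report describes:

```python
# A toy Monte Carlo illustrating why single-year VAM comparisons are
# noisy. All parameters here are made up for illustration; this is not
# Mathematica's model, just the basic sampling-error problem it studied.
import numpy as np

rng = np.random.default_rng(1)

true_va = 0.0        # an exactly average teacher
class_size = 25      # students contributing to the estimate
student_sd = 0.25    # student-level noise, in value-added units
trials = 100_000     # simulated school years

# Each year, the teacher's measured VA is the class-average residual:
# the (zero) true effect plus the mean of 25 students' worth of noise.
measured = true_va + rng.normal(0, student_sd, (trials, class_size)).mean(axis=1)

# Flag the teacher as "above/below average" whenever the one-year
# estimate strays past an (arbitrary) cutoff.
cutoff = 0.05
error_rate = np.mean(np.abs(measured) > cutoff)
print(f"average teacher misclassified in {error_rate:.0%} of simulated years")
```

With these made-up numbers, roughly a third of simulated years flag the average teacher as notably above or below average -- purely from sampling noise, before any of the other objections above even come into play.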
Many of the links above are to Rutgers professor Bruce Baker's School Finance 101 blog. Every teacher owes it to himself or herself to spend a few minutes looking at Bruce's writings on VAM.
For me, the bottom line is this: while the quality of a teacher may be the most important factor in the life of a student in school, there are many, many other factors that affect learning. To discount those factors and use one annual test to make decisions about which teachers are fired and which get paid more is, in a word, insane.
And make no mistake: this stuff is coming. It's already being used in Washington, DC, where hundreds of teachers were fired based on their VAM assessments.
As Bruce Baker points out:
I guess that one could try to dismiss those moral, ethical and legal concerns regarding wrongly dismissing teachers by arguing that if it’s better for the kids in the end, then wrongly firing 1 in 4 average teachers along the way is the price we have to pay. I suspect that’s what the pundits would argue – since it’s about fairness to the kids, not fairness to the teachers, right? Still, this seems like a heavy toll to pay, an unnecessary toll, and quite honestly, one that’s not even that likely to work even in the best of engineered circumstances.

Fellow teachers, heed my warning: this fight is coming to your classroom this year. If NJ had won its Race to the Top grant, nearly $50 million would have been spent to implement a computerized tracking system for student progress. What do you think NJDOE was going to do with the data?
We need to start informing each other about this issue now.
In The Know: Are Tests Biased Against Students Who Don't Give A Shit?
2 comments:
Well done. I'm glad to be able to help out in the future in any way I can. I do hope you keep up the blog. But I certainly understand how it can eat up time and cut into other productivity!
You know, I wrote that yesterday, and did three posts tonight! But when concert season starts...
I really can't say enough good things about your work, Bruce. Teachers have to get hip to this stuff, and I appreciate the way you take the complex and make it understandable to those who are willing to give a little effort. You and Paul Krugman share a gift (I hope you take that as a compliment!).
The little Superman logos on your last scatterplots were a riot!