Against the recommendations of many researchers, and in contradiction to the evidence, the Department of Education is insisting on basing 45% of a teacher's evaluation on test scores. Let's leave aside the completely unworkable timeline and lack of oversight of the tests themselves and look at something that is, admittedly, a little technical but critical to understand.
There are two ways to evaluate a teacher's impact on a student's test scores. First, we can try to adjust for factors that are beyond a teacher's control: everything that happens outside the classroom in a student's life, and the student's own characteristics. Value-Added Modeling (VAM) is an attempt to do just this. Essentially, we predict how well a student will do on a test by looking at that student's previous performance. We then compare the prediction to the actual score: if the student does better or worse than predicted, we attribute the difference to the teacher's efforts.
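To make that concrete, here's a toy sketch of the core arithmetic. The data, the single prior-score predictor, and the bare-bones regression are entirely my own illustration - real VAM models use many more covariates and far fancier statistics - but the basic move is the same:

```python
# Toy sketch of the value-added idea. All data are invented; real models
# are far more complicated, but the core logic is: predict, then blame
# (or credit) the teacher for whatever the prediction missed.
import numpy as np

# (prior_score, current_score, teacher) for a handful of hypothetical students
students = [
    (200, 215, "A"), (180, 190, "A"), (220, 225, "A"),
    (210, 205, "B"), (190, 210, "B"), (230, 245, "B"),
]

prior = np.array([s[0] for s in students], dtype=float)
current = np.array([s[1] for s in students], dtype=float)

# Step 1: predict this year's score from last year's (simple least squares).
slope, intercept = np.polyfit(prior, current, 1)
predicted = slope * prior + intercept

# Step 2: the residual (actual minus predicted) is what gets attributed to the teacher.
residuals = current - predicted

# Step 3: average each teacher's residuals to get a "value-added" estimate.
for teacher in ("A", "B"):
    idx = [i for i, s in enumerate(students) if s[2] == teacher]
    print(teacher, round(residuals[idx].mean(), 2))
```

Even in this toy version you can see the leap of faith: whatever the model fails to predict gets pinned on the teacher, whether the real cause was a move, an illness, or a rough year at home.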
It's a method with huge, huge problems - not the least of which is that it assumes that a student's life doesn't change year-to-year. And it doesn't take into account that students aren't randomly assigned to teachers. But at least it tries to account for factors outside of the school.
The other way we can evaluate a teacher's impact on scores is a simple measure of the student's growth while working under that teacher. A Student Growth Profile (SGP) is just a straightforward assessment of how much better the student tested after time with the teacher than before: post-test score (at the end of the year) minus pre-test score (at the beginning of the year). That's it: no sense at all that there are factors other than the teacher in play here.
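As described, that's nothing more than a subtraction. A toy sketch, with invented names and scores, just to show how little is going on:

```python
# Toy sketch of the growth measure as described above: post-test minus
# pre-test, with no adjustment for anything outside the classroom.
# Names and scores are invented for illustration only.
scores = {
    "student_1": {"pre": 200, "post": 215},
    "student_2": {"pre": 180, "post": 178},
    "student_3": {"pre": 220, "post": 240},
}

growth = {name: s["post"] - s["pre"] for name, s in scores.items()}
teacher_growth = sum(growth.values()) / len(growth)

print(growth)           # per-student gain
print(teacher_growth)   # the number attributed entirely to the teacher
```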
In other words: VAM is a bad attempt to account for things other than the teacher. But SGP doesn't even try.
Now, if I were running a true "pilot," I would try both methods, as well as a "control," so I could compare them all. I'd like to know how they stack up against each other; I'd like to know how they both compare with straight evaluations by superiors, which is what happens in schools now.
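For what it's worth, the comparison itself wouldn't be hard. Assuming you had all three ratings for the same set of teachers - a VAM estimate, a simple growth score, and a supervisor's evaluation - checking how well they agree is a few lines of arithmetic (the numbers below are invented, purely for illustration):

```python
# Sketch of the comparison a real pilot might run. All data are invented.
# statistics.correlation requires Python 3.10 or later.
from statistics import correlation

vam        = [0.3, -0.1, 0.8, -0.4, 0.2]   # hypothetical value-added estimates
growth     = [12,   3,   20,  -2,   7]     # hypothetical pre/post gains
supervisor = [3.5,  2.0, 4.5,  3.0, 3.5]   # hypothetical principal ratings (1-5)

# How well do the three measures agree with one another?
print(correlation(vam, growth))        # VAM vs. simple growth
print(correlation(vam, supervisor))    # VAM vs. supervisor evaluation
print(correlation(growth, supervisor)) # growth vs. supervisor evaluation
```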
Is that happening in the pilot program? Look here and here - the pilot's creators' own words - and try to convince me it is.
This pilot is a huge joke. It is an embarrassment to the state and to everyone involved in it. That it is being treated seriously by politicians is evidence as to how low our conversation about education has sunk.
NJ Teacher Evaluation Pilot: Lost somewhere over the Pacific...
ADDING: Damn you, Bruce! Why do you ALWAYS have to say what I say, only way, way better?!?
Both you and Bruce have nicely demonstrated just how insane this "pilot" is and will be. Why would NJ persist in this wrong-headed policy direction, which will ultimately engender endless litigation when districts get sued by teachers who are canned because of crap data and crap data analysis?
So, since we ARE talking about NJ politics, I have a gentle suggestion: Follow the money. NJDOE is chronically short-staffed and underfunded. Who has the subcontracts on all of this? Which organization and/or policy analysis shop will be conducting the evaluation studies? Beans to dollars says it will be organizations/people with the RIGHT political connections.
back to my own writing.....(too much at present).