
Monday, July 9, 2012

Arne Duncan Does Not Get It

Diane Ravitch points us to this conversation with Secretary of Education Arne Duncan about using test scores in teacher evaluations. Just watch the first minute or two of the clip to see Duncan, once again, prove he has no idea what he is talking about:
So again, let's sort of look at where we are as a country. For decades, zero percent of a teacher's evaluation was based upon whether their children were learning.
That's hogwash for two reasons. First, principals and supervisors have always used student learning in teacher evaluations. Art teachers are judged by their students' art; math teachers are judged by their students' scores on a variety of standardized and other tests. It's ridiculous and ignorant to assert otherwise simply as a justification for Race To The Top.

Second: using a Value-Added Model (VAM) or a Student Growth Percentile (SGP) based on standardized tests is not the only way to judge whether "children were learning." Duncan's blind faith in bubble tests as reliable indicators of student learning is bad enough; he must know that VAMs and SGPs have huge error rates when applied to judging individual teachers. If he doesn't, he is incompetent and needs to be fired immediately.
In fact we had states that had laws on the books, including, I think, New York, that actually prohibited the linking of student achievement in teacher evaluation. That makes no sense whatsoever. That's one extreme. The other extreme is 100% of a teacher's evaluation is based on a student test score, which also makes absolutely no sense. Go back to what I said before: multiple measures, looking at a set of different things. Student growth and gain, being what we said was a significant part, and whether it's 30 or 20 or 40 or 50. The honest answer is we as a country don't know what the right percentage is there.
Mr. Secretary, I know this is a little tricky, but I think you can get this: it doesn't matter if it's 30 or 20 or 40 or 50, or 90 or even 5. When a rigid, quantifiable, highly variable metric is added to an evaluation, it doesn't matter how much weight it's given: it becomes all of the decision.

I've used other analogies to make this point, but let's try something else. You play basketball, right, Mr. Secretary? Well, suppose we choose the winner of the NBA Celebrity All-Star MVP Award based on the following criteria:

  • Offense: scale from 1-4, 45% of evaluation
  • Defense: scale from 1-4, 45% of evaluation
  • Shooting percentage: exact ratio of shots made to shots taken, 10% of evaluation
Now, let's compare your performance to Kevin Hart's. We'll give you a 4 on offense, and a 3 on defense. Kevin gets a 3 on offense and a 4 on defense (it's a hypothetical, sir - relax). But let's say you went 5 for 11 from the field, and Kevin went 1 for 2. By our rubric, Kevin is the better ball player. 
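Don't believe me? Here's the arithmetic as a quick Python sketch; I'm assuming we combine the three pieces as a simple weighted sum, with shooting expressed as a fraction between 0 and 1:

```python
# Hypothetical scoring of the celebrity-game rubric as a weighted sum.
# Weights and scores come straight from the example above.
WEIGHTS = {"offense": 0.45, "defense": 0.45, "shooting": 0.10}

def rubric_score(offense, defense, made, taken):
    """Weighted sum: coarse 1-4 rubric scores plus exact shooting percentage."""
    shooting = made / taken
    return (WEIGHTS["offense"] * offense
            + WEIGHTS["defense"] * defense
            + WEIGHTS["shooting"] * shooting)

duncan = rubric_score(offense=4, defense=3, made=5, taken=11)  # ~3.195
hart   = rubric_score(offense=3, defense=4, made=1, taken=2)   #  3.200

print(f"Duncan: {duncan:.3f}")
print(f"Hart:   {hart:.3f}")  # Hart wins on the 10% shooting component alone
```

The 4 and the 3 on the coarse scales offset each other exactly, so the whole decision rides on 5-for-11 versus 1-for-2.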

And we could adjust all of these percentages so that shooting counts for as little as 1% or as much as 99%; the way we've set things up, it is always the deciding factor. Arguing about the percentage is pointless: the rigid, quantifiable metric is all of the decision, no matter what percentage of the evaluation it makes up. Because shooting percentage is much more variable than our offense and defense rubric, it carries more importance.
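A quick sweep of the weights bears this out; again assuming, as above, that offense and defense split whatever weight shooting doesn't get:

```python
# Sweep the shooting weight from 1% to 99%; offense and defense split the rest.
for w in [0.01, 0.05, 0.10, 0.25, 0.50, 0.75, 0.99]:
    rubric_w = (1 - w) / 2
    duncan = rubric_w * (4 + 3) + w * (5 / 11)
    hart   = rubric_w * (3 + 4) + w * (1 / 2)
    print(f"shooting weight {w:4.0%}: Duncan {duncan:.4f}  Hart {hart:.4f}  "
          f"-> {'Hart' if hart > duncan else 'Duncan'} wins")
```

At every weight from 1% to 99%, Kevin comes out ahead: the coarse rubric scores cancel, and the only term that differs between the two of you is the shooting percentage.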

(It's also worth pointing out that shooting percentage can be accurately measured; a teacher's effect on student scores cannot.)

Now, I can imagine that Duncan might argue that he would never condone using such fine distinctions to make high-stakes decisions. I imagine him saying he just wants to identify the tail ends of the distribution: just the very best and the very worst teachers.

The problem is that any evaluation system that mandates the use of a quantifiable metric to divide teachers into levels of effectiveness - and that is precisely what Race To The Top does (p.11) - will always have cut-off points. And these metrics, with their phony precision, will place teachers into those categories with high-stakes consequences.

Everyone needs to get clear on this: when a reformer blathers on about how only part of the evaluation is based on standardized tests, it's a sure sign they haven't done their homework. No one would make this assertion if they had really thought these schemes through to the end. 

Our new Secretary of Education

1 comment:

  1. Thank you.

    Without a doubt, the main goal of any of these metrics is to scare teachers into silence.
