
Friday, March 8, 2013

NJ Teacher Evaluation: Based On a Mistruth

The new proposals for changes in teacher evaluation in New Jersey - code name: Operation Hindenberg - have been made public by Education Commissioner Chris Cerf. And you know what's really amazing?

The first sentence in Cerf's proposal - the very first sentence - is a mistruth, slickly packaged to give the NJDOE plausible deniability for misleading the public about the need for a revision in teacher evaluations.

Here it is:
In schools, teachers and leaders have the greatest influence on student learning.
Let me start by turning to the always excellent Matt DiCarlo at Shanker Blog:
Specific wordings vary, but if you follow education even casually, you hear some version of this argument with incredible frequency. In fact, most Americans are hearing it – I’d be surprised if many days pass when some approximation of it isn’t made in a newspaper, magazine, or high-traffic blog. It is the shorthand justification – the talking point, if you will – for the current efforts to base teachers’ hiring, firing, evaluation, and compensation on students’ test scores and other “performance” measures. 
Now, anyone outside of the education research/policy arena who reads the sentence above might very well walk away thinking that teachers are the silver bullet, more important than everything else, perhaps everything else combined. I cannot prove it, but I suspect that many Americans actually believe that. It is false. 
As is so often the case with this argument, the sentence is carefully worded with the qualifier “at every level of our education system” so as to be essentially in line with the research. This is critical because it signals (very poorly in this case) that teachers are the most influential schooling factor in student achievement (which the blueprint calls “success”). And, indeed, this is the current empirical consensus. It means teachers have a larger effect (far larger, actually) than principals, facilities, textbooks, class size, technology, and all other school-related factors than can be measured. 
But in the big picture, roughly 60 percent of achievement outcomes is explained by student and family background characteristics (most are unobserved, but likely pertain to income/poverty). Observable and unobservable schooling factors explain roughly 20 percent, most of this (10-15 percent) being teacher effects. The rest of the variation (about 20 percent) is unexplained (error). In other words, though precise estimates vary, the preponderance of evidence shows that achievement differences between students are overwhelmingly attributable to factors outside of schools and classrooms (see Hanushek et al. 1998; Rockoff 2003; Goldhaber et al. 1999; Rowan et al. 2002; Nye et al. 2004). [emphasis mine]
How about a picture?

[Pie chart from the original post: student and family background ~60%; teacher and classroom effects ~12%; other school factors ~8%; unexplained ~20%.]
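To make those shares concrete, here's a minimal simulation - my own illustrative sketch, not a model from any of the cited studies. It treats achievement as a sum of independent components whose variances match the rough percentages above, then recovers each share empirically. Real-world components are neither independent nor directly observable like this, so this is only arithmetic, not evidence:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated students

# Rough variance shares from the research summarized above
# (assumed for illustration only; the 8% is simply the ~20%
# schooling share minus the ~12% teacher/classroom share).
shares = {
    "student/family background": 0.60,
    "teacher/classroom effects": 0.12,
    "other schooling factors":   0.08,
    "unexplained (error)":       0.20,
}

# Draw each component with variance equal to its share, so total
# achievement variance is ~1 when the components are independent.
components = {k: rng.normal(0.0, np.sqrt(v), n) for k, v in shares.items()}
achievement = sum(components.values())

for name, comp in components.items():
    est = np.var(comp) / np.var(achievement)
    print(f"{name}: {est:.0%} of achievement variance")
```

The point of the exercise: even when teachers are the single largest *schooling* slice, the background slice dwarfs it by a factor of four or five.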
Now, some of you may have noticed that I cast things a little differently than DiCarlo. The first difference is that I'm changing "teacher effects" to "teacher and classroom effects." Why? Because I don't believe (and I'm not saying Matt does, but he doesn't broach the topic here) that teacher effects in a classroom can be disentangled from peer effects. In other words:  a teacher's "effectiveness" depends, in part, on the students who are assigned to her room. More on this later when we talk about SGPs.

The other difference, however, is more important: I believe that all of these things are "in-school" factors, including the student's background.

Students are in school, right? That makes them an "in-school" factor; you can't separate their effect from the school's effect, because they are in school. So the largest "in-school factor" affecting student achievement isn't the teacher - it's the student!

Yes, I'm being a little facetious, but only to make a point. DiCarlo makes it more seriously:
So, in my humble view, those who make the teacher effects argument to non-technical audiences should consider making it more clearly. Admittedly, the blueprint’s wording is unusually fuzzy. More commonly, the argument will use the phrase “schooling factor” or something similar. This is much clearer, and it is not misleading per se, but it is still probably insufficient for many people to get the distinction (a disturbing proportion of people making the argument don’t give any qualifier at all).
But I contend that this is exactly why people like Chris Cerf make the assertion in the first place. Cerf is not a stupid man; he knows exactly what he is saying, and exactly what he means to convey to his audience. I know some people get squeamish when folks like me ascribe ulterior motives, but there comes a point where you have to be honest with yourself about what's going on:

This mistruth is part of a deliberate attempt on the part of the NJDOE to convince the public that the biggest problem with New Jersey's schools is poorly performing teachers.

Keep this in mind as we continue to explore these proposals, which include the application of an evaluation tool that cannot be used, by design, to evaluate teachers. Yes, you heard me right.

NJ's New Teacher Evaluation System: Operation Hindenberg

7 comments:

  1. I'm curious how you came up with the graphic above - none of the referenced articles indicated the 12% variable you mention, and one of the articles gives an in-depth explanation describing how teacher effects vary significantly, with teacher effects being much stronger in certain grades, for example, and (more importantly) in lower-SES schools.

    There's certainly no question that individual factors, home variables, previous achievement, etc. also contribute to student achievement, but I'd hardly say (based on my experience, my own reading of research, and the studies you've cited) that teacher quality is of insignificant consequence.

    I'd also suggest deconstructing the 60% figure - student IQ, for example, is being lumped together with behavioral variables, mental health status, family poverty, community violence, etc. When deconstructed, it's quite possible that teacher effectiveness actually constitutes one of the most significant variables in the process of education.

  2. Ed, look at the paragraph above:

    "Observable and unobservable schooling factors explain roughly 20 percent, most of this (10-15 percent) being teacher effects."

    I split the difference between 10 and 15 percent to get roughly 12%.

    I've never said teacher effect is insignificant; quite the opposite. Read here:

    http://jerseyjazzman.blogspot.com/2013/02/pundits-listen-to-teachers.html

  4. Ah, I missed that you had quoted from another blog. I'd suggest going to those links/references - I'm not sure where Matt DiCarlo got that 20% figure. I checked his references and they aren't lining up, at least not as a blanket statement.

  5. Ed, some of Matt's sources are behind paywalls, but here's Goldhaber in another piece:

    http://educationnext.org/the-mystery-of-good-teaching/

    "More recently, researchers have sought to isolate teachers’ contribution to student performance and assess how much of their overall contribution can be associated with measurable teacher characteristics, such as experience and degree level. Economists Eric Hanushek, John Kain, and Steven Rivkin estimated that, at a minimum, variations in teacher quality account for 7.5 percent of the total variation in student achievement–a much larger share than any other school characteristic.

    This estimate is similar to what my colleagues and I found: that 8.5 percent of the variation in student achievement is due to teacher characteristics. We found that the vast majority (about 60 percent) of the differences in student test scores are explained by individual and family background characteristics. All the influences of a school, including school-, teacher-, and class-level variables, both measurable and immeasurable, were found to account for approximately 21 percent of the variation in student achievement. This 21 percent is composed mainly of characteristics that were not directly quantified in the analyses. Since we used statistical models that included many observable school-, teacher-, and class-level variables–such as school and class size, teachers’ levels of education and experience, and schools’ demographic makeup–it is clear that the things that make schools and teachers effective defy easy measurement."


    Again, I'd argue with the language here: students are an "in-school" variable. And I don't know that Hanushek et al. were really able to separate out teacher effect from classroom effect, including the effect of peers.

    Look, this really isn't a source of debate in the ed research community. Teacher variance contributes somewhere on the order of 10% to 15% to student achievement variance. Yes, that's important - no one would ever say otherwise. My point is that this use of language - "Teachers are the most important in-school factor" - keeps us from putting teacher effect in proper context.

    I say that's quite often done deliberately; it's certainly deliberate in the case of the NJDOE. We can have an argument about that, but there's no argument about the research consensus.

  6. Oops - I meant to say teacher effect is 10-15% at most. Goldhaber's numbers suggest it could be even lower.

  7. Thanks for that additional link - I'm definitely speaking outside my area of expertise when it comes to this particular line of research, so I appreciate you bringing some of that to my/our attention.

    Before moving back to your original post, I'd also say that I'm certainly speaking without the full context of the proposal you mention, so I'm just looking at the one sentence you've posted. If my comments below are out of line based on other information you have, feel free to call me out.

    So, here's my take on that statement, which says "In schools, teachers and leaders have the greatest influence on student learning." Basically, I separate variables into 2 categories - those which educators can have influence over (those "in school"), and those which we can't. I wonder if the original statement is referring to variables which educators have influence over. In other words, when comparing curriculum, school funding, opportunities to learn/time on task, academic engaged time, etc., teacher behaviors and admin behaviors are amongst the most important. If so, this would make sense to me, and would serve as an appropriate opening for a document proposing teacher accountability, however inappropriate the rest of the document might be in terms of evaluation/accountability procedures.

