Seriously?

After first proposing that scores would amount to 35 percent of a performance evaluation for math and language arts teachers in grades 4-8, Cerf yesterday said that total would be trimmed to 30 percent for next year. In addition, he said only the scores of students who had been enrolled with a given teacher 70 percent of the year would be applied to his or her rating. The previous level was 60 percent. [emphasis mine]
The notion that this little bit of mucking around with percentages makes up for the vast, serious problems with AchieveNJ, the new teacher evaluation system, is an act of enormous self-deception on the Commissioner's part. Does he really think this matters? Does he really think his critics will pipe down after being thrown this crumb?
As I've explained many times, it just doesn't matter if the test scores count for 50% or 10% or 90% of a teacher's evaluation: some of the evaluation, all of the decision. The NJDOE has tried to gloss over this inconvenient fact, but playing with numbers isn't going to make the problem go away. Cerf's silly proposals only serve to highlight, once again, that no one in charge of making New Jersey's education policies under Chris Christie seems to have any idea of what they are doing:
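The point that "some of the evaluation" can still mean "all of the decision" is, at bottom, arithmetic: when a composite rating sits against a hard cutoff, even a small test-score weight can flip the outcome for a teacher near the line. A minimal sketch, using entirely hypothetical numbers (the scale, cutoff, and scores below are illustrative assumptions, not AchieveNJ's actual figures):

```python
# Hypothetical 1-4 rating scale with a hard "effective" cutoff.
# All numbers here are invented for illustration only.
CUTOFF = 2.65

def summative(observation, test_component, weight):
    """Weighted composite of an observation score and a test-score component."""
    return (1 - weight) * observation + weight * test_component

observation = 2.7   # teacher clears the cutoff on observations alone
low_sgp = 1.0       # a poor test-score component

for weight in (0.10, 0.30, 0.50):
    total = summative(observation, low_sgp, weight)
    verdict = "effective" if total >= CUTOFF else "ineffective"
    print(f"weight {weight:.0%}: composite {total:.2f} -> {verdict}")
```

Under these assumed numbers, the teacher is rated "ineffective" at every weight tried, 10% included: for anyone close to the threshold, the test score decides the outcome no matter how little it nominally "counts."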
“There was a feeling that 35 percent was too much, and reducing it by 5 percent made sense,” said Arcelio Aponte, the board president, after the meeting. “Based on the many discussions we had on this, I think 30 percent is a fair number.” [emphasis mine]

Oh, I see: we're now making decisions based on our "feelings." Gosh, how nice. But maybe we should also consider the "feelings" of the teachers who will have to labor under this innumerate, illogical system of evaluation, huh? Or the students who will have their teachers' livelihoods in their small hands?
Of course, the NJDOE has a defense they use when caught making policy based on their "feelings"; they whip out the Gates MET study:
The only dissent on the board came from Ronald Butcher, who ended up abstaining in the otherwise unanimous vote. He said afterward that the research backing the use of student test scores in general and “student growth percentiles” in particular was inconclusive. Cerf and others on his staff have repeatedly pointed to a recently completed Measures of Effective Teaching (MET) Project that tracked more than 3,000 teachers and backed the use of test scores in measuring their practice, along with classroom observations and student surveys. But that project has its detractors as well, and Butcher after the meeting said he hopes there will be a further airing of the different points of view. “This is not just about New Jersey, but in other states, too, where there are concerns,” he said. “If you look at the research, there are some issues with it. The MET study is just one study, and there is a lot of research calling it into question.” [emphasis mine]

That's exactly right: it's ONE study. I'll spend some time soon looking at the critics of MET; however, even if it were universally well-regarded, ONE study is hardly enough evidence to justify wide-scale policy changes that force high-stakes personnel decisions on schools.
But there's an even bigger problem with the NJDOE constantly referring to MET: the study says nothing about Student Growth Percentiles (SGPs), the method used in AchieveNJ! The MET report looks at Value-Added Models (VAMs) in teacher evaluation, but we aren't using VAMs in New Jersey; we're using SGPs. Using MET to justify these policies is a sign of either gross ignorance or willful deception on the NJDOE's part.
I'm sorry to have to report this, but AchieveNJ is a disaster in the making. Of course, it was doomed from the start: appointing unqualified people to a task force on teacher evaluation while ignoring the voices of educators and researchers is a sure way to develop a system that lacks any credibility with the stakeholders. But this is what Chris Christie wanted, and Chris Cerf is going to give it to him. Cerf can make a feeble attempt to change the numbers and try to cover up the stench, but it won't help; the rot runs down to the core.
At some point, Cerf's NJDOE is going to have to deal with the pitiable truth that they are in way over their heads. We can only hope that the damage these sadly ill-informed bureaucrats do to the teaching profession isn't permanent.
Accountability begins at home.
Oh, where to start, where to start... I haven't seen much discussion on my totes FAVE parts about our shiny new teacher eval. It's been sooooo much fun being in one of the pilot districts!! It's a party every day!!
At first I thought my favorite part was the Danielson "Framework for Teaching" model we're using to judge effective practice. The trainer we had from Danielson was really quite excellent. But as a special ed teacher, I was most interested in the part when she told us that Charlotte Danielson is working on a framework for special ed teachers since her original Framework isn't appropriate for special ed practitioners. But hey--we're using it anyway! Party hardy.
But now that we're almost through our pilot, I think my absolute fave part is the computerized standardized test that gives us this awesome data to measure student growth, which determines our effectiveness and, in turn, our tenure and jobs. It was definitely a total party watching my special ed kids, many with ADHD, have sooooo much fun in the computer lab with 25 screens flashing away all around them, clicking away at all the colorful dots. They loved it so much they clicked through those test items before I was even able to read the questions (not out loud, of course)!! Some of them were just mesmerized by that clock that comes up to tick down the time just before the question times out! They loved clicking on it as they watched it tick away.
Yeah, I think all that happy clicking and data it provides--including that SGP that isn't designed to measure the cause of the students' growth (or lack thereof)--upon which soooo much is based, is my new fave part!
Am I the only one???
In Maryland, the state is requiring all districts (including the two that did not sign the RTTT application) to make the state NCLB exam (MSA) 20% of a teacher's evaluation.
I contacted the state employee in charge of this and asked how they determined that 20% was the best, most effective percentage for determining teacher value.
He said there was nothing to support that or any percentage. He did write back, though, that the MSEA originally proposed 30%.