The new reports include the usual information on state test results and high school graduation and dropout rates. But the first page provides what Education Commissioner Chris Cerf said is what most parents want to know: how their school's achievement compares with other “peer” schools, and whether the school is doing a good job preparing students for college and careers.
“All the action is on the front page,” Cerf said in a media webinar on the new reports Tuesday. [emphasis mine]

It certainly is: here's an example of the first table you'll see in your local school's report:
We'll get to each of the categories later this week, but let's start with this idea of a "peer group." What is that, exactly?
But the biggest bombshell may have been a new concept called "peer group ranking." For the first time, the state is comparing each school with about 30 others deemed to be demographically similar.
[...]
The peer group rankings replace the old "District Factor Groups" that educators have used since 1975, which placed all school districts in groups — labeled "A" through "J" — according to factors such as income, education level and occupational status of the adults. The state says the new peer groups, which are based on the percentage of kids who get free and reduced lunch, plus the percentage in special education and with limited English proficiency, are better because they more accurately reflect the students in the schools. They also measure data school-by-school, instead of district-by-district.
The new peer groups, which are unique for every school, can also be readjusted every year, while the former DFGs were recalculated about every 10 years, based on Census data.
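(A quick illustration for the technically inclined. This is not NJDOE's actual model, which the department describes in its white paper; it's just a toy sketch of the underlying idea: reduce each school to the three percentages above, and treat "demographically similar" as "close together" on those numbers. Every name and figure here is invented.)

```python
# Toy sketch, NOT NJDOE's actual model: reduce each school to the three
# percentages named above and call schools "similar" when those numbers
# are close. All school names and figures are invented.

# (free/reduced-price lunch %, special education %, limited English proficiency %)
schools = {
    "Gates Elementary":  (42.0, 15.0, 8.0),
    "Walton Elementary": (45.0, 14.0, 6.0),
    "Broad Elementary":  (38.0, 16.0, 12.0),
}

def demographic_distance(a, b):
    """Euclidean distance between two schools' demographic profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Smaller distance = more demographically similar.
print(demographic_distance(schools["Gates Elementary"],
                           schools["Walton Elementary"]))
```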
Schools in each group of 30 can be anywhere in the state, in places where parents may know little about the other schools, so it may make comparing difficult.

This is important, so let's go over it again:
Every school's peer group is unique to that school. Which means that Gates Elementary may have Walton Elementary in its peer group, but that doesn't mean Gates is in Walton's peer group. And both schools could have Broad Elementary in their peer groups, while Broad's own peer group includes neither of them.
If you're like me, you're probably thinking, "How does that make any sense?" Well, according to the NJDOE's white paper about peer grouping, this is a feature of a technique called "propensity score matching." Unlike the stable DFGs, every school gets its own peer group. Which means that your school might be most like another school, but that doesn't mean that school is most like you.
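Here's a toy demonstration of why that happens. Real propensity score matching fits a statistical model first, but the asymmetry comes from the simpler fact that each school takes its own nearest neighbors. The sketch below uses invented data, a single demographic variable instead of several, and peer groups of two instead of roughly 30:

```python
# Toy data: one demographic variable (say, free/reduced lunch %); the real
# model uses several. Peer groups of k=2 instead of roughly 30. All invented.
schools = {"Gates": 44.0, "Walton": 41.0, "Broad": 40.0,
           "Koch": 39.8, "Dell": 39.6}

def peer_group(name, k=2):
    """The k schools demographically closest to `name` (not necessarily symmetric)."""
    others = sorted((abs(schools[name] - pct), other)
                    for other, pct in schools.items() if other != name)
    return [other for _, other in others[:k]]

for name in ("Gates", "Walton", "Broad"):
    print(f"{name:6s} -> {peer_group(name)}")
# Gates  -> ['Walton', 'Broad']   (Gates has Walton as a peer...)
# Walton -> ['Broad', 'Koch']     (...but Walton doesn't have Gates)
# Broad  -> ['Koch', 'Dell']      (and Broad has neither of them)
```

Every school sits at the center of its own demographic neighborhood, so "A is a peer of B" and "B is a peer of A" are simply different questions.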
Again, this is - to me at least - counterintuitive. The point of the whole thing, I thought, was to compare schools that were similar to find out who is doing the best. We could then, supposedly, look at the "best of the peers" and learn something from them that would help other schools get even better. But if a school's peer group is unique to that school, that sort of defeats the purpose: how can you rise to the top of your peer group if that group is constantly shifting?
At the very least, any stakeholder in a school is going to want to see who in their peer group is doing best on the measures given. But here's the problem: each school's progress report lists its peer schools, but it doesn't rank those peers in relation to each other.
Let's suppose Gates Elementary knows that it's at the 50th percentile for "Student Achievement" (again, we'll get to what that means later). Broad and Walton are in its peer group. Gates knows some of its peers score higher than it does and some score lower. But there is no way, based on the reports themselves, to know which of a school's peers are above it, and which are below.
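The frustrating part is that the arithmetic itself is trivial; what's missing is the peers' scores. Here's a sketch, with invented numbers, of the percentile calculation anyone could do if the reports published them (percentile definitions vary; this one counts peers scoring at or below you):

```python
# Invented scores for a hypothetical peer group; the real reports publish
# your school's percentile but not the peer scores behind it.
peer_scores = {"Gates": 62.0, "Walton": 70.0, "Broad": 55.0,
               "Dell": 48.0, "Koch": 75.0, "Rhee": 60.0}

def percentile_rank(name):
    """Percent of the other peer schools scoring at or below `name`."""
    mine = peer_scores[name]
    below = sum(1 for other, score in peer_scores.items()
                if other != name and score <= mine)
    return 100.0 * below / (len(peer_scores) - 1)

print(percentile_rank("Gates"))  # 60.0: three of Gates's five peers score at or below it
```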
And because the peer groups change for every school, no one can go back and reverse-engineer the order of a school's peer group based on the other schools' reports. The entire system is shrouded in mystery.
So why do it? Why come up with a system that isn't transparent, is more complex, and hides the best practitioners from their peers? Let's ask NJDOE Commissioner Cerf: what's the point of all of this?
State Education Commissioner Chris Cerf – who acknowledged the reports have mistakes — has made overhauling their format a major project in his efforts to improve schools. While he intends to intervene aggressively in failing schools, he said parents, boards and superintendents elsewhere should use this data to find ways to address weaknesses in their districts.
Some schools may have to grapple with unpleasant surprises. This data “will make clear that there are a number of schools out there that perhaps are a little bit too satisfied with how they are doing when compared with how other schools serving similar populations are doing,” Cerf said. [emphasis mine]

Well, there you go: this is all about knocking you smug high-achievers down a peg. And NJDOE will take you down even if they have to issue an error-ridden report. What a lovely sentiment from the man who is in charge of the entire system to begin with...
Stand by: much more to come.
Hey, NJ public schools: I'm gonna put you in your place...
Great analysis. Another point worth making: If you want to sell across-the-board reforms that affect all schools--both troubled schools dealing with high levels of poverty and good schools in more affluent districts--you need to massage the data in order to obscure some obvious socioeconomic realities, and that's exactly what these reports do. They create the illusion of poor performance even in more affluent schools, and obscure the all-too-obvious actual relationship between poverty and poor performance. As you point out, by creating these peer groups, many good schools, including ones in affluent suburbs, look worse. But also, the fact that they've actually entirely removed the district factor groups from the report serves to obscure the in-your-face relationship between poverty and school performance. Now Cerf can run around advocating big (unproven) systemic changes instead of creating targeted solutions that address the elephant in the room: poverty.