
Sunday, June 23, 2013

JerseyCAN Can't

Some stuff in the reformy world is just so dumb it makes your jaw drop:
A nonprofit education advocacy group released report cards today for all of the state's public schools and ranked elementary, middle and high schools based on test scores and graduation rates, the group's executive director Janellen Duffy said.
Formed in March, JerseyCAN is an offshoot of a national nonprofit group called 50-CAN, which advocates for education policies the group believes can close the achievement gap between wealthy and low-income students. Former Gov. Tom Kean serves as co-chair of the group's board.
"We believe in using data to guide decision-making in education, whether it's decisions parents are making about schools or policy decisions that are being made at the state and local level," Duffy said. [emphasis mine]
So this has got to be some real high quality analysis, right? Some real in-depth breakdown of the data, designed to give us serious insight into New Jersey's public schools, yes?

As if:
The basics 
JerseyCAN’s 2013 school and district report cards look at data in five categories:
  • Student performance (Average percentage of students who are proficient or above across reading and math).
  • Subgroup performance (Average percentage of low-income, black and Latino students who are proficient or above in reading and math).
  • Achievement gap (Average difference between the percentage of low-income and minority students and percentage of non-low-income and white students who are proficient or above in reading and math).
  • Performance Gains (Average one-year change among a cohort of students who are proficient in reading and math).
  • Four-year cohort high school graduation rates.
Uh-huh. And on what research basis have you decided that all of these metrics should be equally weighted when determining a school's rank? Where is the rationale that "Student performance" is just as important as "Performance Gains"?

[chirp, chirp, chirp...]

Actually, I should admit that I'm assuming JerseyCAN weighted everything equally; I wouldn't know, because they don't tell us how they weight the categories in their methodology. In fact, the documentation is so light that I challenge anyone to look at it and replicate their findings, which is a basic test of research validity. (I'll sketch what I'm assuming they did right after the list below.) But even then, the metrics used here are severely flawed:

  • Proficiency rates are weak metrics; they are not the same as actual test scores, and can be quite different from other test-based metrics.
  • Averaging subgroup performance is a ridiculous no-no. By doing so, JerseyCAN is essentially equating students who are black, Hispanic, or qualify for free/reduced-price lunch. That's absurd on its face. (By the way, folks: NJDOE uses the term "Hispanic," not "Latino.")
  • It's impossible to judge the "achievement gap" of a school that is socioeconomically or ethnically homogeneous. New Jersey is a highly segregated state; how can JerseyCAN assign a score for the "gap" to a district that has, for example, no black students? (I probably shouldn't be so hard on JerseyCAN for this - after all, the NJDOE does essentially the same thing.)
  • JerseyCAN tries to make up for this by apparently excluding schools that don't have a certain percentage of students in a particular subgroup. I say "apparently" because, again, there's no documentation of the criteria they use for doing this.
  • The "performance gains" description is written so poorly I can't say for sure what it is; I think it refers to proficiency rates. See above.
This lazy, hack-junk approach leads to some bizarre outcomes. Take the high schools: here's a map, courtesy of NJSpotlight, that gives average SAT scores for high schools in New Jersey. Elizabeth High School, ranked #21 by JerseyCAN, has an average SAT score of 1308 (100% participation). Drive down Route 28 a few miles and you'll come to Westfield High School, ranked #59 by JerseyCAN; its average SAT score is 1741 (93% participation). Elizabeth's graduation rate is 86%; Westfield's is 98%.

I'm not going to say Westfield is a "better" high school than Elizabeth, because the schools' student populations are so different. But it's absurd to say Elizabeth has better "student performance" than Westfield.
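And percent-proficient invites exactly this kind of inversion. Here's a toy illustration, with scores I made up entirely (these are not Elizabeth's or Westfield's numbers): a school that bunches students just over a fixed cut score posts a higher proficiency rate than a school whose students score far higher overall.

    # Toy illustration (hypothetical scores, not real data): percent-proficient
    # only counts who clears a fixed cut score, so it can rank a school with
    # much lower scores ahead of one with much higher scores.
    CUT_SCORE = 200  # hypothetical proficiency threshold

    school_x = [201, 202, 203, 205, 199, 201, 204, 202, 198, 203]  # bunched at the bar
    school_y = [190, 260, 280, 195, 270, 265, 185, 275, 192, 280]  # spread out, far higher

    def pct_proficient(scores):
        return 100 * sum(s >= CUT_SCORE for s in scores) / len(scores)

    def mean(scores):
        return sum(scores) / len(scores)

    for name, scores in [("School X", school_x), ("School Y", school_y)]:
        print(f"{name}: {pct_proficient(scores):.0f}% proficient, mean score {mean(scores):.0f}")
    # School X: 80% proficient, mean score 202
    # School Y: 60% proficient, mean score 239

School X "wins" on proficiency while scoring nearly forty points lower on average. That's the sleight of hand baked into JerseyCAN's "student performance" category.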

You'll also notice a bunch of competitive admissions magnet schools clustered at the top of JerseyCAN's rankings. Gee, what a surprise...

What's the point of all this? What are we learning from this amateurish presentation of data that provides no context for understanding these largely arbitrary rankings? I keep going back to this quote from Education Commissioner Chris Cerf, made when he first introduced the state's new school performance "report cards":
This data “will make clear that there are a number of schools out there that perhaps are a little bit too satisfied with how they are doing when compared with how other schools serving similar populations are doing,” Cerf said.
This is all about using data in a lame attempt to sow seeds of doubt about New Jersey's outstanding public schools - especially the suburban schools that have, so far, rejected the reformy prescriptions of education officials like Cerf and corporate reform supporters like JerseyCAN.

Sorry, JerseyCAN, but you're going to have to raise your game if that's the goal. This presentation is so embarrassingly inept that the only doubts you've raised are about your own abilities.

ADDING:

This is more than a little snarky, but I think it speaks volumes about JerseyCAN in particular and reformy "advocates" in general. Here's a screen capture of JerseyCAN's front page:

[Screen capture: JerseyCAN's front page, featuring a large photo of Executive Director Janellen Duffy]
Seriously: what kind of an Executive Director puts a big picture of herself on the front page of her organization's website?

As Derrell Bradford of B4K infamously said:

"Growing your movement is about advancing the people that advance reforms, not the reforms themselves."


Says it all, doesn't it?

2 comments:

  1. In addition, as Daniel Koretz (2008) and many others point out, you CANNOT accurately assess the achievement gap or achievement gains using percent passing/proficient. The results are incorrect. Since those two measures are incorrectly calculated, the entire report card is severely flawed beyond any usefulness.

    Further, as the College Board correctly points out, SAT scores should never be used to compare schools or states because so much of the variance in scores is explained by participation rates.

    Pretty much every metric in the study violates basic standards of data use. But that did not stop them, obviously.

  2. I think that you are missing the point here, Mark. The methodology may be flawed, sure, but there is a lot of use in compiling these kinds of lists. Most of the "top schools" reports that come out year after year name the same group of schools. But, across the state, that kind of single list is very restrictive. Poor families cannot move to West Windsor, or other districts, for the schools. The top-ten lists of schools are especially useful because they say which schools are doing better in different sets of circumstances. Also, it's a nice way to give kudos to schools that don't normally get them for doing some things right, like, say, Riletta T. Cream school in Camden. It is a traditional public school, they made the list, and I say "good for them."

