I will protect your pensions. Nothing about your pension is going to change when I am governor. - Chris Christie, "An Open Letter to the Teachers of NJ" October, 2009

Monday, February 15, 2016

Only You Can Stop Data Abuse: Red Bank, NJ Edition

Data abuse in education policy has got to stop.

I freely admit that there can be different, equally valid ways of interpreting data. I am the last person to say a data analysis is always "proof" of a particular point someone wants to make. I am leery of the use of overly complicated statistical methods when simpler descriptive methods may be better.

But there is no excuse when a school "leader" misuses data to justify his or her practices and policies -- particularly when it's done at the expense of others. 

I'm cynical enough to know politicians and media pundits can and will do this all the time; however, when someone in a position of authority at a school or a district or a state agency or even the USDOE abuses data to make themselves look better and others look worse, it's completely unacceptable.

We are told over and over again by those in positions of influence that we have entered an era of "data-driven instruction." Well, if that's true, a leader is just as derelict in his duty when he misuses data as he is when he misapplies educational practices within the bounds of his authority.

This particularly irks me because we are seeing more "alternative" programs like Relay Graduate School of Education or the Broad Superintendent's Academy or SUPES that appear, to me, to have little interest in developing future educational leaders as good consumers of research and data. I'm not saying university-based programs don't often have flaws when it comes to training their students how to use data. I'm not saying that good training is a guarantee of future best practices.

But you can't expect America's students to engage in "rigorous" thinking when the leaders of their schools and their districts and their educational agencies brazenly misuse data. We have an obligation to model the sort of thinking we expect from our students; too often, we don't.

For example:

I've already written about the Red Bank Charter School in New Jersey. RBCS is a segregated school compared to its host district, which is a shame because the area, in my opinion, is ripe for a type of school consolidation which could lead to a truly integrated school district.

RBCS is applying for an expansion in its number of seats; supporters of the local school district are concerned the expansion will cause real harm to their district. But the charter justifies its application, in part, by claiming it is "outperforming" the Red Bank Borough Public Schools. RedBankGreen, the area's hyperlocal (which has done an excellent job covering the controversy), quotes the charter's officials:
The charter school officials reiterated their contention that the school outshines the district academically. “We are required to outperform” by state law, [RBCS Principal and Superintendent Meredith] Pennotti said. “If we don’t, we’re put on probation or closed.”
Pennotti said the recently released PARCC test results demonstrate that the charter school eclipses the district schools, even taking into account the added challenges the district faces because a significant percent of its students haven’t learned English at home. She declined to immediately offer specifics, however, and said details would be included in a letter being sent to school parents Wednesday.(UPDATE: here’s the letter: RBCS_ParentNotice_PARCCScores) [emphasis mine]
Now, I don't know what the NJDOE told Pennotti, but I can assure you there is nothing in the state's charter school law that says a charter must outperform its host district. Yes, the charter's and district schools' test-based outcomes are compared in the charter's "performance framework." But even the state acknowledges something everyone in education knows (or should know):

You can't compare school outcomes without controlling for student population differences.

Even the NJDOE, which has engaged in some seriously awful data abuse over the years (things do seem to have improved somewhat lately), knows it makes no sense to compare a school with very few children in poverty to a school with many children in poverty. That's why they have peer groups (faulty as they are) for their school performance reports. That's why they developed SGPs (even if those don't fully control for those student differences).

Again, I don't know what the NJDOE told Pennotti, but she, as a school leader, should understand that simply comparing test-based outcomes without acknowledging student differences is just not warranted. In fact, she herself acknowledges this in her quote: she admits it's not fair to compare RBCS with RBBPS without "taking into account" student differences.

But then what does she do? If you click the link to the charter's letter to its parents, you'll find this:

[image from the RBCS parent letter: a side-by-side comparison of raw PARCC scores, with no adjustment for differences in student populations]
I'm sorry if this is coming across like I'm picking on a mom-and-pop charter school. As I said before: I have no doubt that RBCS is full of dedicated teachers and staff, working hard on behalf of their deserving children. In the absence of proof to the contrary, I happily accept that RBCS is doing terrific things for its students.

But when a charter that serves a fundamentally different student population compared to its host district then beats up on that host for having lower test scores, that's just unacceptable.

When I compared the test-based outcomes of RBCS to RBBPS in my last post, I used a simple regression model to adjust for student characteristics. We can argue about the validity of the model, and that's fine; I'd never claim that one school was "better" than another based on test scores anyway, even if they are adjusted. What I will contend, however, is that if you don't even try to adjust for student differences, you're not playing fair.

When I did my original post, the latest test results from the PARCC had not been released. Here are some results from a simple regression model* using this new data and the entire state as a sample. We'll start with the most generous comparison for RBCS, Grade 4 English Language Arts (ELA):

[chart: Grade 4 ELA adjusted PARCC scores plotted against FRPL percentage, with RBCS and RBBPS marked]
Again: this is the most generous comparison out of the 11 that I've run. Does it look like RBCS is substantially outperforming RBBPS to you?

Here's the least generous comparison -- Grade 8 ELA:

[chart: Grade 8 ELA adjusted PARCC scores plotted against FRPL percentage, with RBCS and RBBPS marked]
Again: this in no way "proves" that RBBPS does a "better" job with its Grade 8 students than RBCS. What it does show, however, is that when controlling for student differences, there is no evidence RBCS "outperforms" its host district.

I've put the rest of these below with some explanation. And let me be clear about something else:

I don't expect RBCS to run this sort of analysis (although I do expect it from NJDOE when they make a decision about the expansion -- that's their job). I don't expect RBCS not to tout its successes. I don't expect any charter not to be proud of what it does. It's perfectly fine to be proud of your test scores and proficiency rates -- within proper context.

But when you use these test scores to pump yourself up at the expense of schools that are educating a fundamentally different population than your school, I lose all patience. It's an abuse of data and, frankly, an abuse of authority. It shows a lack of rigor and a disregard for the hard work of your colleagues in the public district schools. It's self-promotion at the expense of people who are doing a job you don't do.

Stop it. Now.



*ADDING: Here are the other regressions. The model uses PARCC mean scale scores as the dependent variable with two predictors: the school's percentage of free and reduced-price lunch students, and a three-year average of its special education percentage (I used that because I don't have 2014-15 special education data).
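For readers who want to see what a model like this looks like in practice, here's a minimal sketch in Python on made-up data. The structure follows the description above -- mean scale score regressed on FRPL percentage and a special education percentage -- but all the numbers (the coefficients, the noise, the sample size) are invented for illustration; this is not the author's actual dataset or output.

```python
# Sketch of an OLS model like the one described above, on synthetic data.
# Dependent variable: school mean PARCC scale score.
# Predictors: % free/reduced-price lunch, 3-year avg % special education.
import numpy as np

rng = np.random.default_rng(0)
n_schools = 500

frpl = rng.uniform(0, 100, n_schools)   # % FRPL (hypothetical)
sped = rng.uniform(5, 25, n_schools)    # 3-yr avg % special ed (hypothetical)

# Hypothetical "true" relationship: scores fall as FRPL and sped % rise.
score = 760 - 0.4 * frpl - 0.8 * sped + rng.normal(0, 5, n_schools)

# Ordinary least squares: score ~ intercept + frpl + sped
X = np.column_stack([np.ones(n_schools), frpl, sped])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ beta

print(beta)  # roughly recovers [760, -0.4, -0.8] on this synthetic sample
```

The fitted line gives each school an *expected* score given its demographics; the interesting question is then whether a school sits above or below that expectation, not whether its raw score beats another school's.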

I didn't include Algebra 1 or Grade 8 math scores because 8th graders could take either test, depending on where they are placed. The number of RBCS Algebra 1 test takers was so small it was suppressed in the data.

I plot the adjusted score against FRPL percentage to give the chart a little more context. I did run a model with the percentage of Limited English Proficient students at the school, but it was not a consistently significant predictor, so I left it out; that would likely change if we adjusted the sampling frame or the test outcome.

Again: the point here isn't to "prove" one school is better than another. The real question is: does RBCS consistently "outperform" RBBPS when "taking into account" differences in student populations? Clearly, the answer is: "no."

Caveat regressor.
