I will protect your pensions. Nothing about your pension is going to change when I am governor. - Chris Christie, "An Open Letter to the Teachers of NJ" October, 2009

Saturday, June 1, 2019

NJ's Student Growth Measures (SGPs): Still Biased, Still Used Inappropriately

What follows is yet another year's worth of data analysis on New Jersey's student "growth" measure: Student Growth Percentiles (SGPs).

And yet another year of my growing frustration and annoyance with the policymakers and pundits who insist on using these measures inappropriately, despite all the evidence.

Because it's not like we haven't looked at this evidence before. Bruce Baker started back in 2013, when SGPs were first being used in making high-stakes decisions about schools and in teacher evaluations; he followed up with a more formal report later that year.

Gerald Goldin, a professor emeritus at Rutgers, expressed his concerns back in 2016. I wrote about the problems with SGPs in 2018, including an open letter to members of the NJ Legislature.

But here we are in 2019, and SGPs continue to be employed in assessments of school quality and in teacher evaluations. Yes, the weight of SGPs in a teacher's overall evaluation has been cut back significantly, down to 5 percent -- but it's still part of the overall score. And SGPs are still a big part of the NJDOE's School Performance Reports.

So SGPs still matter, even though they have a clear and substantial flaw -- one acknowledged by their creator himself --  that renders them invalid for use in evaluating schools and teachers. As I wrote last year:
It's a well-known statistical precept that variables measured with error tend to bias positive estimates in a regression model downward, thanks to something called attenuation bias. Plain English translation: Because test scores are prone to error, the SGPs of higher-scoring students tend to be higher, and the SGPs of lower-scoring students tend to be lower.

Again: I'm not saying this; Betebenner -- the guy who invented SGPs -- and his coauthors are:
It follows that the SGPs derived from linear QR will also be biased, and the bias is positively correlated with students’ prior achievement, which raises serious fairness concerns.... 
The positive correlation between SGP error and latent prior score means that students with higher X [prior score] tend to have an overestimated SGP, while those with lower X [prior score] tend to have an underestimated SGP. (Shang et al., 2015)
Here's an animation I found on Twitter this year* that illustrates the issue:


Test scores are always -- always -- measured with error. So if you try to estimate the relationship between last year's scores and this year's -- and that's what SGPs do -- you're going to have a problem, because last year's scores weren't the "real" scores: they were measured with error. That error causes the ability of last year's scores to predict this year's scores to be underestimated. That's why the regression line in this animation flattens out: as more error is added to last year's scores, the correlation between last year and this year is estimated as smaller than it actually is.**
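Here's a minimal simulation of that attenuation effect (my own sketch, not NJDOE's model): two years of scores related with a true slope of 1.0, with increasing amounts of measurement error added to the prior-year scores before fitting the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# "True" prior-year scores, and this year's scores related to them with slope 1.0
true_prior = rng.normal(500, 50, n)
this_year = true_prior + rng.normal(0, 20, n)

def fitted_slope(error_sd):
    """OLS slope of this year's scores on error-contaminated prior-year scores."""
    observed_prior = true_prior + rng.normal(0, error_sd, n)
    slope, _intercept = np.polyfit(observed_prior, this_year, 1)
    return slope

# As measurement error grows, the estimated slope shrinks toward zero
# (roughly 1.0, 0.8, 0.5 for these settings)
for sd in (0, 25, 50):
    print(f"error SD = {sd:2d}: estimated slope ≈ {fitted_slope(sd):.2f}")
```

The shrinkage factor is just the classic attenuation ratio: the variance of the true scores divided by the variance of the observed (error-laden) scores.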

Again: The guy who invented SGPs is saying they are biased, not just me. He goes on to propose a way to reduce that bias which is highly complex and, by his own admission, never fully addresses all the biases inherent in SGPs. But we have no idea if NJDOE is using this method.

What I can do instead, once again, is use NJDOE's latest data to show these measures remain biased: Higher-scoring schools are more likely to show high "growth," and lower-scoring schools "low" growth, simply because of the problem of test scores being measured with error.

Here's an example:


There is a clear correlation between a school's average score on the Grade 5 math test and its Grade 5 math mSGP***. The bias is such that if a school has an average (mean) test score 10 points higher than another, it will have, on average, an SGP 4.7 points higher as well.

What happens if we compare this year's SGP to last year's score?


The bias is smaller but still statistically significant and still practically substantial: a 10-point jump in test scores yields a 2-point jump in mSGPs.

Now, we know test scores are correlated with student characteristics. Those characteristics are reported at the school level, so we have to compare them to school-level SGPs. How does that look? Let's start with race.


Schools with larger percentages of African American students tend to have lower SGPs.


But schools with larger percentages of Asian students tend to have higher SGPs. Remember: when SGPs were being sold to us, we were told by NJDOE leadership at the time that these measures would "fully take into account socio-economic status." Is that true?


There is a clear and negative correlation between SGPs and the percentage of a school's population that is economically disadvantaged.

Let me be clear: in my own research work, I have used SGPs to assess the efficacy of certain policies. But I always acknowledge their inherent limitations, and I always, as best as I can, try to mitigate their inherent bias. I think this is a reasonable use of these measures.

What is not reasonable, in my opinion, is to continue to use them as they are reported to make judgments about school or school district quality. And it's certainly unfair to make SGPs a part of a teacher accountability system where high-stakes decisions are compelled by them.

I put more graphs below; they show the bias in SGPs varies somewhat depending on grade and student characteristics. But it's striking just how pervasive this bias is. The fact that this can be shown year after year should be enough for policymakers to, at the very least, greatly limit the use of SGPs in high-stakes decisions.

I am concerned, however, that around this time next year I'll be back with a new slew of scatterplots showing exactly the same problem. Unless and until someone in authority is willing to acknowledge this issue, we will continue to be saddled with a measure of school and teacher quality that is inherently flawed.


* Sorry to whoever made it, but I can't find an attribution.

** By the way: it really doesn't matter that this year's scores are measured with error; the problem is last year's scores are.

*** The "m" in "mSGP" stands for "median." Half of the school's individual student SGPs are above this median; half are below.
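In other words, a school's mSGP is just the median of its students' individual SGPs. A quick illustration (the student values below are made up):

```python
from statistics import median

# Hypothetical individual student SGPs (percentile ranks) for one school
student_sgps = [12, 35, 47, 52, 58, 63, 71, 80, 88]

msgp = median(student_sgps)
print(msgp)  # 58: four students above, four below
```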


ADDITIONAL GRAPHS: Here is the full set of scatterplots showing the correlations between test scores and SGPs. SGPs begin in Grade 4, because the first year of testing is Grade 3, which becomes the baseline for "growth."

For each grade, I show the correlations between this year's test score and this year's SGP in math and English Language Arts (ELA). I also show the correlation between this year's SGP and last year's test score. In general, the correlation with last year's score is not as strong, but still statistically significant.

Here's Grade 4:





Grade 5:





Grade 6:





Grade 7:





I only include ELA for Grade 8, as many of those students take the Algebra I test instead of the Grade 8 Math test.



Here are correlations on race for math and ELA, starting with Hispanic students:



White students:




Asian students:



African American students:



Here are the correlations with students who qualify for free or reduced-price lunch:



Finally, students with learning disabilities:



This last graph shows the only correlation that is not statistically significant; in other words, the percentage of students with disabilities (SWDs) in a school does not predict that school's SGP for math.

Sunday, May 26, 2019

NJ Public Workers' Health Benefits Are NOT Overly Generous: Some More Evidence

Warning: This one's going to get wonky...

The anti-public worker political wing in New Jersey -- which includes nearly all the state's Republicans and the machine Democrats -- spends a good part of its time trying to convince the state's citizens that health care benefits for teachers, cops, and other public workers are way too generous.

You'll hear lots of bemoaning of the "fact" that public worker benefits often fall into the "platinum" level instead of "gold," with no real explanation of what those terms actually mean.* You'll hear that public workers are enjoying a huge advantage in health care while private sector workers get worse care that costs them more.

These analyses leave out a few important details. First, there is a well-documented wage gap for New Jersey teachers and other public workers: when controlling for education, age, time worked, and other factors, these workers earn less than comparable workers in the private sector. Better benefits are an attempt to make up for that wage gap -- an attempt that generally fails, but an attempt nonetheless.

Second, when comparing health benefits, we should take into account three things:
  1. What is covered.
  2. Where enrollees can seek services.
  3. How much they contribute to pay for their premiums.
Balancing these three factors to determine the total generosity of a health care plan is tricky, and the data we have on private health care plans isn't great. Further, we must make an appropriate comparison: comparing the benefits of a teacher with a master's degree to those of a part-time worker with only a high school diploma is not valid, because the workers aren't being drawn from the same labor pool.

This said, it is instructive to look at how one of the factors above -- employee contributions toward health care -- compares between public workers and workers in the private sector. This is, in fact, what Mark Magyar did back in 2014 in an analysis for NJ Spotlight:
Today, however, while the cost of New Jersey public employee health insurance coverage remains the third-highest in the nation, most New Jersey public employees are paying more than the national average for state government workers toward their health insurance costs, an NJ Spotlight analysis shows. 
In fact, the average New Jersey government employee is paying more for individual health insurance coverage than government workers in any other state and the 10th-highest average premium for family coverage in the country.  
Further, state and local government workers are paying a much higher percentage of the cost of their individual health insurance policies than private-sector employees in New Jersey have been paying, and not much less than the percentage paid by the state’s private-sector workers for family coverage. [emphasis mine]
Magyar continues:
A comparison of the federal data for New Jersey private-sector employees with Pew’s state government report shows that private-sector employees paid an average of $374 per month for family health policies costing an average of $1,450, while government workers paid $328 toward policies averaging $1,561 per month.
The real difference in premium and cost share was in individual coverage, where New Jersey public-sector employees pay twice as much toward policies that cost almost one-and-a-half times as much. The average New Jersey private-sector employee last year paid $105 per month toward a policy that cost an average of $517, while state workers would have been paying $220 out of an estimated $758 monthly premium as of July 1, 2014. 
Let's break this down by percentages:

  • Public worker, family coverage: 21.0%
  • Private worker, family coverage: 25.8%
  • Public worker, individual coverage: 29.0%
  • Private worker, individual coverage: 20.3%
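Those percentages follow directly from Magyar's monthly dollar figures (rounded to one decimal place):

```python
# Monthly contribution / monthly premium, from Magyar's 2014 NJ Spotlight analysis
plans = {
    "Public worker, family":      (328, 1561),
    "Private worker, family":     (374, 1450),
    "Public worker, individual":  (220, 758),
    "Private worker, individual": (105, 517),
}

for label, (contribution, premium) in plans.items():
    print(f"{label}: {100 * contribution / premium:.1f}%")
```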
So the average NJ public worker with family coverage paid a somewhat smaller share for that coverage -- not enough to make up for the wage gap, but enough that my back-of-the-envelope calculation comes to an advantage of $552 a year.

An average NJ public worker who gets individual coverage, however, is paying a penalty compared to a private sector worker: $1,380 a year. 

Keep in mind: we're not necessarily comparing equal health benefits in terms of their coverage or their ability to allow enrollees to go out-of-network. In addition, we're not comparing similarly educated and experienced workers. But Magyar's analysis showed that there was little reason to believe New Jersey's private workers were paying a much greater share of the cost of their health care compared to public workers.

I thought it would be useful to take another look at this a few years down the road. My analysis is a bit different from Magyar's: my data source looks at all workers in New Jersey, not just the private sector, so the comparison is not "public-private" but "public-everyone."

I use the Chapter 78 schedule to determine the amount teachers and other public workers pay for health care. Granted, Chapter 78 has lapsed; however, the NJ School Boards Association has declared that Chapter 78 is now the "status quo" for negotiations, so the use of the schedules is valid. Chapter 78 requires different contributions from teachers depending on their salaries. I choose three different levels of teacher pay: the 25th, 50th, and 75th percentile (the 50th percentile means half of teachers are above that salary, and half are below; the 25th means 1/4 are below that salary, and 3/4 are above).
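To illustrate what those percentile cut points mean, here's a sketch with a made-up salary list (chosen so the quartiles land on the figures used below; these are not actual NJ salary data):

```python
import numpy as np

# Hypothetical teacher salaries -- illustrative only, chosen so the 25th and
# 75th percentiles land on $57,200 and $84,400
salaries = [45_000, 50_000, 57_200, 63_000, 70_800,
            78_000, 84_400, 90_000, 96_000]

p25, p50, p75 = np.percentile(salaries, [25, 50, 75])
print(f"25th: ${p25:,.0f}   50th: ${p50:,.0f}   75th: ${p75:,.0f}")
```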

How does this all play out?


Let's start with the family plans. If you are a teacher at the 25th percentile, you're making $57,200 a year. Compared to all workers, your contribution to your health care is relatively small: 14 percent, compared to over 26 percent for all workers (of course, you're also trying to raise a family in one of the costliest states in the nation on less than $60K a year).

If you're a teacher in the 75th percentile -- $84,400 a year -- you're paying substantially more of your premium on a family plan: the difference between you and all other workers is less than 3 percentage points.

The individual plans are another story: here, the lower-paid teacher is paying slightly more of their premium as a percentage than other workers. But higher-paid teachers are paying way more: 34 percent, versus 22.5 percent.

It's very difficult to compare the dollar expenses here, mostly because we don't know how the health insurance the teachers receive differs from the insurance all workers receive. But let's take the data from our source on premium costs and compare it to a plan offered by the state: in this case, NJDIRECT15. This appears to be a fairly generous plan, at least based on its cost: $36,084 for a family plan, and $12,617 for an individual plan. Contrast that to $20,669 for a family plan, and $7,074 for an individual plan, as the average for all workers in the state.

Again, it's hard to say for sure how the plans differ. I would assume the teachers' plan here is more generous based on the cost -- but who knows? Maybe it's just more expensive. In any case, here's how the contributions break down in dollars:


There are a lot of caveats here, starting with this: I couldn't find the 2017 premium costs for NJ teachers, so these figures are high compared to the figures for all workers just because of health care inflation over two years.

But even allowing for some of the difference to be taken up by inflation: in both the family and individual plans, New Jersey's teachers are contributing substantially more for their health care benefits compared to all state workers. Again, I'm using an example with a teacher plan that costs substantially more: it's a safe assumption it's more generous, but we just don't know. But even if it is, the increased cost is being borne, in no small part, by the teacher. In other words: yes, the teacher is probably getting better insurance, but she's paying for at least part of it out of her own pocket.

Let me be very clear: in no way does what I show here comprehensively prove that New Jersey's public workers, including teachers, are paying similar amounts for similar health care compared to similar workers. What I am showing is that, post-Chapter 78, public employees are paying significant amounts for their benefits.

To recap:
  • We know public worker salaries lag behind similarly educated workers.
  • We know that good analysis shows any benefits advantages to public workers don't make up for that wage gap.
  • We know that public workers, post-Chapter 78, are paying significant amounts for their health insurance.
  • We know that the contributions of public workers toward their health insurance are not out of line with the average contribution of all workers.
Given all this: There is very little reason to believe any fiscal woe New Jersey is experiencing is due to public workers -- including teachers -- not paying their fair share for their health care.

I am all for bringing the costs of public employee health care down; after all, I am a taxpayer. We should be looking for savings through a variety of mechanisms, including tough negotiating with drug companies, hospitals, and insurance brokers.

But given the amount public employees are contributing to their own health insurance after Chapter 78, the state shouldn't be demanding more of them as its primary strategy for bringing down costs to the taxpayer. New Jersey's public employees are already doing more than their fair share to get the state's finances under control.


ADDING: There's an important point I should add about the comparison between similar workers' benefits:

For better or for worse, the United States has an employer-based system of health insurance. People with higher levels of educational attainment have a higher likelihood of having health insurance (see Table 3 here). It is, therefore, reasonable to compare the levels of health care benefits for similar workers as a way of determining whether some workers get better coverage than others.

Making this comparison, however, is not an endorsement of the current system -- at least, it isn't for me. I think it's wrong that some people have good health insurance, some have lousy insurance, and some have no insurance at all. I think it's wrong that income or education is a determinant of anyone's access to health care.

Everyone should have access to high quality health care at a price they can afford. I'm not at all a health care expert, and I won't weigh in on the merits of different systems. I'll only say the current system is obviously inadequate, and tying health care coverage to employment is probably one of the reasons why.


* You'll also hear we need to drop down the level of benefits for public workers or those benefits will be subject to a "Cadillac tax." Except that's not true: the tax has been delayed until 2022. Given its bipartisan lack of support, I doubt we'll ever see it implemented.

Sunday, May 19, 2019

Stuff Journalists Should Know About Charter Schools

I can't say I'm surprised, but it looks like Bernie Sanders' latest policy speech on education – where, among other things, he calls for a ban on for-profit charter schools and other charter school reforms -- has generated a lot of fair to poor journalism that purports to explain what charters are and how they perform.

Predictably, the worst of the bunch is from Jon Chait, who cheerleads for charters often without adhering to basic standards of transparency. Chait's latest piece is so overblown that even a casual reader with no background in charter schools will recognize it for the screed that it is, so I won't waste time rebutting it.

There are, however, plenty of other pieces about Sanders' proposals that take a much more measured tone... and yet still get some charter school basics wrong. I'm going to hold off on citing specific examples and instead hope (against hope) that maybe I can get through to some of the journalists who want to get the story of charters right.

Here are some things a journalist should understand before attempting to write about charter schools:

1) The CREDO studies are severely limited, and their reporting of effect sizes in "days of learning" is not warranted.

It seems that the CREDO studies of charter school effects continue to stand as the go-to source for journalists looking to find if charters "work." It should go without saying that relying on one methodology to make sweeping statements about the efficacy of a particular policy is highly problematic -- especially when the methods used in the CREDO studies have been so poorly documented.

The 2015 urban charter study seems to be the one cited most often in the press -- and yet journalists often fail to understand that it is not a national study. It is, instead, a study of multiple regions, picked by the CREDO team for... reasons? Here in New Jersey, for example, the CREDO study of the state found that gains were confined to Newark -- the only city from New Jersey included in the 2015 report. Why omit Jersey City and other districts? They don't say.

The CREDO studies essentially match charter students to a "virtual" public school student. That match, as Andrew Maul and Bruce Baker point out, is only as good as the data -- and the data isn't good. Students are classified as "poor" or "not-poor"; "student with disability" or "student without disability." When you have data this crude, you need to approach the results with great caution.

But the CREDO team does exactly the opposite: they translate the effects into a "days of learning" measure that is unvalidated. Time and again, I've seen journalists write that these studies show charter students demonstrate X number of days of extra learning. When you follow the citations in the reports, however, you find there is no basis for ever making that claim.

Instead, the aggregate effect sizes in the CREDO studies show a small gain: statistically significant largely because the sample sizes in these studies are large, but of little practical significance. I made this chart a few years ago:


The CREDO effects are a tiny fraction of the effect of family income on student achievement (as Sean Reardon notes, that income gap is getting wider). So any notion that charters are "closing the opportunity gap" is just not held up by CREDO's evidence.
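The "statistically significant but practically trivial" point is easy to demonstrate with a simulation (mine, not CREDO's data): give two very large groups a true difference of just 0.03 standard deviations, and the test statistic is still enormous.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two simulated groups whose true difference is tiny: 0.03 standard deviations
n = 500_000
control = rng.normal(0.00, 1.0, n)
treated = rng.normal(0.03, 1.0, n)

diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
z = diff / se

# With samples this large, even a trivial difference is wildly "significant"
print(f"effect ≈ {diff:.3f} SD, z ≈ {z:.1f}")
```

Statistical significance tells you the difference probably isn't zero; it tells you nothing about whether the difference is big enough to matter.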

2) Charter schools are publicly funded, but that doesn't automatically make them "public" schools.

We keep having this argument, but getting charter school supporters to see the obvious differences between charters and public district schools has become an exercise in futility. Inevitably, they define "public" as whatever they think it should be: "This charter school is funded with public money, and it's regulated by the state, and it's open to all, so it must be public!"

On its face, that statement is way too facile. Charter schools are often not open to all: they pick their grade levels, cap their enrollments, and sometimes have onerous admission requirements. A charter school can close at any time; when it does, a public school district must provide seats for the charter's former students.

Aside from this obvious difference, however, we continue to see differences between charters and public school districts in student rights, taxpayer rights, transparency, and organization: see here, here, here, here, here, and here for starters.

The issue of whether charter schools are public is complex. The courts are, in many ways, just starting to address the issue. Simple blanket statements that charter schools are public schools overlook this reality.

3) There is some evidence that charter school expansion affects public district school finances, although context matters a lot.

My own research shows that charter schools in New Jersey impose fiscal pressures on schools through enrollment declines. But context is critical: not all states see the same effects. A new paper by Paul Bruno, for example, shows markedly different effects in California.

I don't think we'll ever find a simple answer to the question: "Do charters drain money from traditional public schools?" My research suggests in some states charter expansion increases spending per pupil in public district schools. Whether that spending helps students or is simply an increase in inefficiency is an open question.

There is, however, reason to believe that many charter schools are inefficiently small. And in some cases it appears that charter schools do not make locational decisions that maximize system-wide efficiency. Again, this is a complex issue. The simple claim that "the money follows the child," and therefore fiscal consequences are irrelevant, ignores some important realities.

4) The "best" charter sectors get their gains through increased resources, peer effects, and a test prep curriculum -- and not through "charteriness."

Writers like Chait are certain that "good" charter schools get their effects because they are free from unions and bureaucracy, and because they've figured out some curricular and instructional magic tricks. But there is very little evidence to support the claim. What the data show instead is that higher-performing charter schools have a formula for "success" that is rather simple, but difficult to bring to scale.

I'll use Newark as an example because it is often lauded, along with Boston, as a city that is expanding charter schools the "right" way. I've spent a lot of time looking at the data, and the explanation for Newark's charter sector gains -- which, by the way, do not close the opportunity gap -- starts with this table, taken from my latest report on New Jersey's charter schools:

Newark's charter schools employ teachers who have far less experience than teachers in the public schools. Consequently, their costs for salaries are much lower. This allows them to pay their teachers more than comparably experienced teachers in the Newark Public Schools; for that extra pay, the charters offer a longer school day (and year).

In the absence of other learning opportunities for Newark's children, extra time in school is a good thing. The question, however, is whether this model can be sustained without the charters having the ability to free-ride on NPS's teacher wages; in other words, would the charters still be able to recruit their constantly churning staffs if the teacher candidates didn't know they could eventually transfer out into better paying jobs with shorter hours/years?

Next -- and we've gone over this repeatedly, but some remain in denial -- there is plenty of evidence students self-select to remain in charters with "no excuses" disciplinary policies:


And no, the students don't just leave when they transfer to private or competitive admission high schools; if they did, all the attrition would be between grades 8 and 9.

Finally, as Daniel Koretz notes, "no excuses" charters freely admit they engage in a curriculum heavy on test prep. Bruce Baker and I have found evidence that Newark's charters do not deploy nearly as many resources into non-tested subjects as the public district schools. Focusing on the test will inevitably raise test scores, but there's a question as to whether those gains reflect "real" learning.

So the secret of "no excuses" charters is no secret: more time in school, peer effects, and a focus on test prep. Can this be scaled up? Even the recent study about scaling up Boston's charter sector shows only 17 percent of middle schoolers are enrolled in charters after expansion; that's not a lot, at least compared to places like Newark. What happens if charters get up to 40 percent? 50? 70? Will qualified teacher candidates still be easy to find? Somehow, I doubt it.

5) "Non-profit" charter schools can and do engage in behaviors that are not in the public interest.

Sanders' proposal calls for abolishing for-profit charter schools. Many charter advocates are getting behind this idea, as the outrageous behaviors of some of these operators are hard to justify. But what's missing from the conversation is the fact that nonprofit charters are hardly squeaky clean in their operations.

In a stellar bit of investigative journalism, The North Jersey Record lays out the issue: charters can organize as nonprofit "shells," and then contract out large parts of their operations to for-profit charter management organizations (CMOs). Often these charters make deals for their facilities that are not in the taxpayers' best interests; essentially, the public pays for buildings -- which, in some cases, it used to own! -- only to see them move into private hands.

New Jersey is considered to be one of the better states when it comes to charter regulation. And yet, because of poor planning on the part of the legislators who wrote the original charter school law, charters have been forced into facilities deals that are highly questionable. Unfortunately, it appears at least some charter operators saw opportunity in this situation, and have enriched themselves at taxpayers' expense.

It's also worth noting that some executives in the largest nonprofit charters have done very well for themselves:


Do public district school officials engage in self-serving behaviors? Sure -- but that makes the point. Operating a nonprofit charter school does not automatically mean the government shouldn't be closely monitoring your actions.


Journalists, I know this stuff is complicated, and you're on a deadline, and your readers aren't going to wade through all this (although... are you really sure about that?). I'm just asking you to stop for a couple of minutes before you jump into charter school policy and consider a few things. You can do better than Jon Chait -- much better.


ADDING: Sweet Christmas, the comments on Chait's piece are a dumpster fire on top of a superfund site. There appear to be people whose only function in life is to defend Chait from not revealing his conflicts of interest.

And I thought nj.com's comments were the worst. Is it always this bad at New York?

Saturday, May 4, 2019

How NOT To Evaluate Education Policy: A Newark Example

One thing I've learned from years of writing on education policy: the more convoluted the talking point, the more likely it's twisting the facts.

For example [all emphases below are mine]:
Today, African-American students in Newark are four times more likely to go to a quality school than they were in 2006. 
That's from a piece by Kyle Rosenkrans, who is launching something called the New Jersey Children's Foundation (because what we really need around here is another education "advocacy" group...).

I've seen variations of this talking point in other places:
“There is not a city in America that has experienced a greater expansion of educational opportunity than Newark over the last decade,” said Ryan Hill, CEO of KIPP New Jersey charter schools. He points to test score results showing African American kids in Newark are now four times as likely to enroll in a school where students outperform the state average than they were in 2006.
At least Hill is specific, unlike Rosenkrans's use of the watery term "quality school." But where did this data nugget come from? I can't trace the source of the "four times" claim, but the same methodology was used back in 2015, when Andrew Martin, who also worked for KIPP-NJ, wrote a piece in The 74:
The percentage of black Newark students attending a school that beat the state proficiency average has tripled in the past 10 years, and this increase can be attributed almost entirely to the growth of the charter sector.
Martin's method was updated in 2017 by Jesse Margolis in a report on Newark's schools:
Since 2014, the share of Black students in grades 3-8 in Newark attending a school that beat the state proficiency average has risen dramatically, reaching 27% by 2017. Black students in Newark are now three times more likely to attend a school that performs at or above the state average than they were in 2009, prior to the reforms.
So this talking point has been passed around and updated for four years... but, so far as I know, no one has stopped and asked whether this is a valid way to assess improvements in Newark's schools.

In my view, it isn't.

Let's set aside the very real problem of making any comparison between two groups on proficiency, simply noting for now that proficiency is a binary measure (you're either proficient or you're not) that can mask student growth or decline that does not cross the proficiency threshold. We'll also set aside the problem of comparing two groups -- here, black students in Newark and all other students in the state -- without making adjustments for socioeconomic status, special education needs, limited English proficiency, prior test score achievement, resources, and a raft of other things that affect test scores.

Instead, let's ask a simple question: Can a district improve the likelihood of students attending a "good" school without actually improving student achievement?

Yep:


Here's a hypothetical school district with 7 schools, each with 100 students (I'm keeping the numbers simple to make this easy to follow). Let's say a school whose proficiency rate is 50 percent or higher is "successful." In our example, only one school meets this mark; its 100 students are enrolled in a "successful" school, regardless of their own proficiency.

In each of the other six schools, only 20 students out of 100 are proficient, so those schools are not "successful." The probability of attending a "successful" school in this district is 100 out of 700, or 14 percent.


Now a new school comes along and draws 10 students from each "failing" school: 5 who are proficient, and 5 who aren't. Again, no individual student has changed their proficiency status. What happens?


60 more students are now in a "successful" school: the new school enrolls 60 students, 30 of them proficient, so it hits the 50 percent mark -- even though no individual student has moved from non-proficient to proficient. The district now has 160 students in "successful" schools; the probability rises to 160 out of 700, or 23 percent -- all without changing any individual student's proficiency!

But the district keeps hearing how great it's doing, so it keeps shuffling students around:


The new school draws another 10 students from each "failing" school: 5 proficient, 5 not. By now, the district realizes its schools are inefficiently small, so it closes one of the "failing" schools and distributes the students to the other "failures."


Again: no individual student's proficiency has changed. But the probability of attending a "successful" school has shot up!


220 students are now in "successful" schools; the probability is now 220 out of 700, or 31 percent -- more than double where we started.

Any district can easily raise the chance of attending a "successful" school simply by shuffling kids around.
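The arithmetic above is easy to verify with a quick simulation. Here's a minimal sketch of the hypothetical district -- the `successful_share` helper and the enrollment tuples are my own illustration, not anyone's actual data:

```python
# Each school is a tuple: (enrollment, number proficient).
# We shuffle students between schools without changing any student's
# proficiency status, and watch the "probability of attending a
# successful school" rise anyway.

def successful_share(schools, threshold=0.50):
    """Fraction of all students enrolled in a school at or above the threshold."""
    total = sum(n for n, _ in schools)
    in_good = sum(n for n, prof in schools if prof / n >= threshold)
    return in_good / total

# Start: 1 "successful" school, 6 "failing" schools.
start = [(100, 50)] + [(100, 20)] * 6
print(round(successful_share(start) * 100))   # -> 14

# A new school draws 5 proficient + 5 non-proficient students from each
# failing school. It enrolls 60 students, 30 proficient: exactly 50%.
step1 = [(100, 50)] + [(90, 15)] * 6 + [(60, 30)]
print(round(successful_share(step1) * 100))   # -> 23

# Same draw again; then one failing school closes and its students are
# split among the five remaining "failures."
step2 = [(100, 50)] + [(96, 12)] * 5 + [(120, 60)]
print(round(successful_share(step2) * 100))   # -> 31
```

The overall district proficiency rate never moves -- 220 proficient students out of 700 at every step -- but the headline statistic more than doubles.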

This appears to be what happened in Newark. We know that charter school students tend to have higher prior achievement than NPS students; the Harvard report on Newark reforms said so. Simply clustering more "proficient" kids into charter schools will raise the probability of students being in a "successful" school.

So, no, this is not a valid way to judge whether Newark's schools have improved. Any measure of school effectiveness that can be increased without improving student outcomes isn't something we should be talking about.

So let's not.

Saturday, March 30, 2019

Jersey Jazzman: Year 9

I keep a list of stuff I'm supposed to blog about, and right now that list is long. I've yet to give my thoughts on Bruce Baker's excellent report on New Jersey school funding, which you really need to read. There's also the tax incentive nonsense happening here in Jersey, which is a cautionary tale for the rest of the country. There are the great reports about charter school abuses in New Jersey and California we need to examine.

I still haven't discussed my own work on how charter school growth negatively affects the finances of public school districts. I want to talk about my field trip to Kansas City to party/parry with labor economists. And it's time to go back and look again at all the problems with measuring student growth and attributing that growth to teachers and schools.

But before I get to all that, let me take a minute and talk about this blog on its ninth anniversary.

When I told Mrs. Jazzman last night I've been at this for nine years, she didn't believe it. But in my mind, it makes perfect sense: I started this thing in response to the nonsense coming from the Chris Christie administration way back in the spring of 2010. Now Christie's gone, after wreaking havoc on the state for two terms. Nine years seems just right.

At the time, Christie was running around the state demanding teachers take a pay freeze and pay more toward their health care. He claimed this would make up for an $800 million shortfall -- a shortfall largely caused by his own refusal to renew a millionaire's tax that had been in place for years. Of course, his claim wasn't true, and he forgot that freezing teacher pay would bring down income tax revenues for the state.

What frustrated me at the time was that no one in the press appeared interested in asking about the specifics of Christie's plan -- and yet the specifics were what made the plan viable or not. If there's a running theme for this blog over the last nine years, it's exactly that: In public policy, the details matter, because that's how we determine whether policies will be effective and whether they will have unintended consequences.

Take merit pay for teachers, which is getting a new look thanks to Kamala Harris's recent proposal to increase educators' pay. It sounds like a good idea... until you get to the details. How are you going to determine who gets merit pay? Through biased, noisy "growth" measures? What about the majority of teachers who don't teach tested subjects? Will you assess their eligibility for merit pay based on observation rubrics that have scant evidence to support them and are used in innumerate ways? Or will you use measures of growth not linked to tests -- measures for which we have absolutely no evidence of validity or reliability?

How will you distribute the teachers who do get merit pay? Will you force them to take more "difficult" assignments? What will you say to the parents of students who don't get a merit pay teacher? Will you be taking away money for merit pay from less "meritorious" teachers (the answer is inevitably: "Yes")? What happens to the pool of teacher candidates when you do that?

Or take school choice. It sounds like a good idea... but as we've seen, there are all kinds of unintended consequences that come from injecting market forces into public institutions. Same with high-stakes testing, or implementing new standards, or changing how school revenues are allocated, or any number of other education policies.

Questioning policies like these isn't nit-picking; it's doing the work. And if this little blog has helped inform the discussion, and put bad policies under the microscope, it's been worth the effort.

One of the nicest outcomes of writing this blog has been meeting so many dedicated stakeholders: parents, students, teachers, policymakers, analysts, and others who care enough about education to enter the conversation and defend their positions publicly. My blogroll on the left (which I try to keep current, though it isn't always) has links to many of these folks. If you care about education, get to know them -- it'll be worth your time.

If you'll indulge me, a few more personal notes:

- I just finished a PhD in Education Theory, Organization, and Policy this fall. This has led me to become involved in several different education policy projects, even as I continue teaching in the classroom. I do think it's important to have working teachers -- or, at least, people who have significant prior experience working in schools -- involved in the education policy world.

But I still intend to keep this blog active, no matter what else I have going on. Sometimes education policy issues are best addressed through an objective policy brief or academic paper; other times, however, call for a little snark.

- The laziest critique of any analyst is to claim that they are not "objective." I'm all for being clear about positionality, but if the best you can lob at someone is where they get their funding or who they hang with, you're not doing the work.

A good analyst comes at an issue with an open mind, but not an empty one. I arrive at my positions based on study and practice. I'll have a good-faith debate with anyone, and I'll change my mind if you've got a good point -- I've done it plenty of times before. But I've largely given up sparring with the indolent. Life is short.

- Over the years, I've spent a lot of time writing blog posts. I don't know if my family considers that a sacrifice -- it has kept me out of their hair -- but Mrs. Jazzman and the Jazzboys have been very patient and supportive. Thanks, guys.

And so on to Year 10...

The Merit Pay Fairy says: "After nine years, dat Jazzman still ain't objective!"


Thursday, March 14, 2019

Only You Can Prevent Bad Tax Policy Discussions

When I started this blog up again earlier this year, I told myself I wasn't going to waste a lot of time debunking nonsense in the local media. Life is short and there's a lot to write about.

But some stuff I come across is so bad, I just can't let it go:
Q. Gov. Phil Murphy wants to raise the tax on incomes over $1 million, but Legislative leaders say they oppose that. Polls show overwhelming support among voters, so what gives on the politics? 
DuHaime: New Jersey families are overtaxed, and everyone knows if Trenton is talking about higher taxes, they’re eventually coming for you, too. Our elected leaders are supposed to do what’s right, not just what works in class warfare polling. From a policy perspective, New Jersey is too heavily reliant on our top earners. 1% of the taxpayers pay nearly 40% of the income taxes; 10% of the taxpayers pay nearly 70% of the income taxes. Look what happened when the financial markets nosedived a decade ago. The treasury of New Jersey took a huge hit because we rely so heavily on the highest earners. Finally, those with means can and do move to states, and they take their jobs, their spending, their philanthropy and their families with them. [emphasis mine]
We'll leave aside the myth of wealth migration and focus instead on the claim that the state is too reliant on the wealthy for tax revenue. There are at least three major problems with the statement above:

1) You must account for local taxes as well as state taxes in any meaningful analysis of tax burdens.

States vary significantly in what revenues for governmental services are provided by the state or by localities. That's why nearly every credible analysis of tax burdens by state combines local and state taxes.

2) Income taxes are only one source of revenue for the state.

States and localities have a variety of ways to collect revenues: income taxes, sales taxes, property taxes, gas taxes, fees, tolls, etc. Isolating income taxes, which tend to be less regressive, will give a false picture of the overall tax burden in a state. (Having a broad mix of taxes, by the way, is a strategy to address the issue of revenue instability due to economic changes.)

The good folks at the Institute on Taxation and Economic Policy have what I believe is the most credible way of comparing total tax burdens across income distributions. Here's their analysis of New Jersey:

The top 1 percent actually have a lower overall tax burden than middle-income taxpayers.

3) The top 1 percent pay a big slice of the total income tax revenues because they earn a big slice of the total income!

This one really drives me bananas. According to the Economic Policy Institute, the top 1 percent of earners in New Jersey took 19.7 percent of the total income in 2015. Of course they paid more in taxes -- they made more of the money!

And again: it isn't meaningful to compare the 1-percenters' share of a single state tax to that of taxpayers at other income levels. You have to compare the total state and local tax burden to get an analysis that's useful.
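A toy example makes the point. These are entirely hypothetical households and tax bills I've made up for illustration -- not ITEP's figures or actual New Jersey data:

```python
# Two hypothetical households in a state with a progressive income tax
# plus other taxes (sales, property, fees) that fall harder, as a share
# of income, on the lower earner. All numbers are invented.

households = {
    "middle": {"income": 60_000,  "income_tax": 1_200,  "other_taxes": 5_000},
    "top":    {"income": 600_000, "income_tax": 40_000, "other_taxes": 20_000},
}

total_income_tax = sum(h["income_tax"] for h in households.values())
top = households["top"]

# Looking at the income tax alone, the top earner pays ~97% of the revenue...
print(round(top["income_tax"] / total_income_tax * 100))   # -> 97

# ...but the *total* tax burden as a share of income tells another story:
for name, h in households.items():
    burden = (h["income_tax"] + h["other_taxes"]) / h["income"]
    print(name, round(burden * 100, 1))   # middle 10.3, top 10.0
```

The top earner pays nearly all of the income tax yet carries a slightly lower total burden -- which is why quoting income-tax shares in isolation paints a false picture of who pays.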

Here's a crazy idea: instead of giving valuable media space to "political insiders," why don't media outlets instead give the space to people who actually study this stuff carefully and can help citizens understand public policy issues?

Crazy thought, I know...

Jersey Jazzman (artist's conception)

ADDING: Sweet mercy, just make it stop:
Fewer still mention that the top 20 percent of households will pay 87 percent of the 2018 taxes — up from 84 percent in 2017. The bottom 60 percent of households will also pay no net federal income tax for 2018.
Say it with me: the wealthy pay more in income taxes because they make more of the money! And again, income tax is just one part of the total tax burden for an individual.