Monday, March 19, 2018

The Facts About NJ Charter Schools, Part III: Segregation By English Proficiency

This week, I'm going into detail about a new report on New Jersey's charter schools I wrote with Julia Sass Rubin. In the last post, I showed conclusively that the charters enroll proportionally fewer special education students. In addition, the classified students charters do enroll tend to have less costly learning disabilities. This puts both fiscal and educational pressure on public district schools, which are forced to subsidize charters at the same time they must provide an education to students with special needs.

One of my more tenacious commenters keeps trying to make the case that the reason charters don't enroll as many special needs students is that they declassify special education students at higher rates than public schools do. But there is no empirical evidence I am aware of to support this claim. Further, as I've shown before, NJ public district schools spend much more than charters on the support services special education students need. In addition, there are more support staff per pupil in the public schools than in the charters. All the evidence suggests the student populations of charters and public district schools are different.

I don't know why anyone would be surprised by this. The entire theory of charter schools is that they will enroll students who are a good "fit." Why, then, would we be surprised that the charter student populations aren't like the public school populations? Isn't that the entire point?

Keep this in mind as we now look at the differences in English language proficiency between public and charter school students.


Year after year, New Jersey's public district schools enroll many more Limited English Proficient (LEP) students proportionally than the charter schools.

Again, you can try to make the case that this is because the charters remove students' LEP classifications at higher rates than public district schools do. But there's no evidence to back up that claim. Further, there is a significant incentive for charters to have students retain their LEP classification: charter schools receive more funding if they enroll more LEP students.

The idea that charters are so much better than public district schools at teaching LEP students that they can immediately remove their status -- even though doing so works against their own financial interest -- flies in the face of logic. It's also contradicted by one of the other arguments charter cheerleaders often try to advance: that the difference in LEP classification in some cities is due to the location of the charters.

It is certainly true that the charters tend to cluster in neighborhoods with smaller Hispanic populations -- that is likely the explanation for the difference in LEP populations in Newark. But so what? The charters chose to locate in those neighborhoods -- now the district has to pay the costs of educating a concentrated LEP population. Considering that a district like Newark has been underfunded for years while the charters are "held harmless," this remains a serious problem.

Finally, let's consider individual communities, and how their charter sectors differ from public school districts:


As I've noted before, the racial profile of Red Bank Boro -- where the disparity in LEP percentage is the greatest in the state -- is very different than the profile of the area's charter schools:



The idea that the huge disparity in LEP rates between Red Bank Boro and the students attending charters* can be explained by either LEP declassification or the location of the school is very hard to defend when it's clear that far more white students proportionally attend the local charter school. The much more plausible explanation is that "choice" has led similar families to "choose" the same schools. This lines up with a growing body of evidence showing that parents rely on their social networks to navigate a "choice" system.

All this said, look at some of the districts at the bottom of the table. In North Plainfield and New Brunswick -- communities with high rates of LEP classification -- the charter schools, as a group, actually enroll more LEP students.

As with special education classification rates, the data here show that the charter sector could be enrolling more LEP students. But why doesn't it? If charters are serving more LEP students in New Brunswick and North Plainfield, why aren't they serving at least a similar rate of LEP students in Jersey City or Morris or Passaic or Trenton?

Again, the data suggest it's at least possible for charters to enroll more LEP students than they do. Where, then, has the state been during the last decade? Why isn't it demanding better from the entire sector?

I'll talk about disparities between NJ charters and public district schools in socio-economic status next.


* To be clear: the disparity chart does not only include students who attend the local charter school; it counts all students who reside in the district but attend a charter anywhere in the state. So the "Charter LEP %" figure will not be the same as the local charter school(s) percentage.

Saturday, March 17, 2018

The Facts About NJ Charter Schools, Part II: Segregation By Special Education Need

In this series of posts, I'm breaking down a new report by Julia Sass Rubin and me on New Jersey's charter schools. State data shows one incontrovertible truth:

New Jersey's charter schools enroll far fewer students proportionally who have learning disabilities, or who are Limited English Proficient, when compared to their hosting districts.

Here's a graph that shows this quite clearly:

Oh, sorry -- this graph isn't from our research. This graph is from a report published by the New Jersey Charter Schools Association, the state's biggest charter advocacy group.

Let me clean it up a bit for you...

It is, of course, completely inappropriate to use the same scale for measures that are as different as racial composition and special education classification. I would make my grad students resubmit their work if they ever tried to pull a stunt like this.

Still, you can clearly see that, according to the state's biggest charter cheerleaders, NJ charter schools enroll far fewer students proportionally who are classified with a learning disability, or who are English Language Learners.

Let's look at this in a more appropriate way. This graph is from our report (for real this time):


In our report, we compare all of the charter school students residing in a school district to the resident students who attend the public district schools. This method allows us to compare a community's charter students to its district students -- no matter where the charter students attend school. (I'll discuss this method in more detail later in this series.)
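For readers who want to see the mechanics, here's a minimal sketch of that comparison in Python. The column names and counts are hypothetical -- the report works from NJDOE enrollment files, not this toy table -- but the computation is the same: a classification rate for each community's resident charter and district populations, and the gap between them.

    import pandas as pd

    # Toy data: one row per (resident community, sector). All counts are
    # illustrative only; the report uses actual NJDOE enrollment files.
    enrollment = pd.DataFrame({
        "community":  ["New Brunswick", "New Brunswick",
                       "North Plainfield", "North Plainfield"],
        "sector":     ["district", "charter", "district", "charter"],
        "total":      [9500, 1400, 7000, 250],
        "classified": [1500,  230, 1295,   0],
    })

    # Classification rate for each (community, sector) pair.
    enrollment["pct_classified"] = (
        100 * enrollment["classified"] / enrollment["total"]
    )

    # Put each community's district and charter rates side by side,
    # then compute the disparity (district rate minus charter rate).
    rates = enrollment.pivot(index="community", columns="sector",
                             values="pct_classified")
    rates["disparity"] = rates["district"] - rates["charter"]
    print(rates.sort_values("disparity", ascending=False))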

These findings are beyond question -- and they raise some serious issues. Even Chris Christie acknowledged that it costs more to educate a child with a learning disability; this particular fiscal burden falls hard on public district schools when charterization concentrates classified children within them. It also makes comparisons between the academic outcomes of charters and district schools meaningless unless this disparity is accounted for.

The problem with most attempts to do this -- like the NJ CREDO study, which was commissioned by the state -- is that the statistical models employed use data wholly inadequate to the task. These data divide students into two groups: those with a learning disability, and those without. The problem is that classified students can have very different disabilities, and, consequently, very different educational needs.

As I've noted before, some disabilities, such as speech or "specific learning disabilities" (SLDs), are relatively low-cost. Others have a much higher cost. Guess which students are more likely to enroll in the charters?



The special needs students who are enrolled in NJ charters tend to have lower-cost disabilities than those in district schools. This analysis differs somewhat from the one above (see the report for details), but it matches our previous work. We're using 2016 data here; in that year, the state did not suppress data as it has done before.*

For a long time, charter cheerleaders have claimed -- with no empirical evidence -- that the reason their special education rates are lower is that their superior instruction and organization make special education classification unnecessary. The chart above directly refutes this. It's much less difficult to change the classification of a student with a speech or SLD disability than one with a traumatic brain injury, or blindness, or autism. If charters dissolve classified students' individualized education programs (IEPs) at higher rates than public district schools, it's only because the classified charter students have, on average, less profound disabilities than district students.

I've heard some make the case that school districts often place special needs students in specialized, out-of-district private schools, and that this is functionally no different than allowing students to enroll in charters. But that's a ridiculous argument on its face. When a school board makes a decision about an out-of-district placement, they make the decision, and they figure out how to pay for it. Charter school enrollments, on the other hand, are foisted upon school districts by the state with no ability for the district to approve or regulate the enrollment.

In other words: the state makes the decision to approve a charter school, but the district has to pay for it. Worse, if the district isn't where the charter is located, they don't even have the right to appeal the decision. If students in your town want to enroll in a charter school 20 miles away, you don't get any say in the matter -- your town has to pay for it, no matter the fiscal or educational harm.

And again: those students who enroll are less likely to have special education needs... most of the time:



This table shows the disparity between the charter population and the district population in the proportion of classified students for each population.** In North Plainfield, for example, 18.5 percent of the district's students are classified -- but none of the resident students who attend charters are listed as having a special education need. That disparity is the largest in the state.

But here's what's interesting: there are, in fact, districts where the charter and district student populations have similar proportions of special needs students. In fact, in New Brunswick, more classified students attend the charters, proportionally, than the public district schools. Keep in mind that, as we show above, the charter students in New Brunswick have less costly disabilities. This is a problem because the charter school funding formula treats all classified students, with the exception of students with a speech disability, the same in terms of the funds transferred to charters.

Still, New Brunswick shows that many of the other local charter school sectors could be enrolling more special needs students. So why don't they? Why are so many charter schools not stepping up and enrolling more special needs students -- even those with the least costly learning disabilities?

The charter sector has been promising for some time that it will start educating more children with special needs. Some charters do -- but many clearly do not. And why would they, when the state has refused for years to hold them to account? Why would they, when they could count on renewals and approvals for expansions even though it was obvious they were engaging in segregation by special education need?

During the Christie administration, the state turned a blind eye toward the segregation by special need that accompanies charter school expansion. Yet the data on this are so clear that not even the NJCSA disputes the truth. The Murphy administration, the NJ Legislature, and the NJDOE have got to start acknowledging this and come up with a plan to address it.

There's another student population NJ's charters have underserved, even more than special needs children: English language learners. We'll discuss that next.


* The data were suppressed in 2015 but not in 2014. I have no idea why. You can tell when the data are not suppressed because many cells have values between 1 and 9; in other years, cells with values under 10 were suppressed.

** In the report, we limit the districts studied to those enrolling at least 50 students.

Thursday, March 15, 2018

The Facts About NJ Charter Schools, Part I: Prelude

This is long overdue:
New Jersey's new governor will consider changes to the state's charter school law, potentially slowing the expansion of controversial, yet in-demand schools championed by former Gov. Chris Christie
The state on Friday announced a "comprehensive review" of its charter school law, fulfilling one of Gov. Phil Murphy's campaign promises after an era of rapid school choice growth.
The next week, Murphy clarified his position:
Gov. Phil Murphy's administration is about to scrutinize charter school law, but that doesn't mean he has it out for charter schools, Murphy said Monday. 
"I have never been nor will I be 'hell no' on charters," the Democratic governor said during a radio appearance on New Jersey 101.5-FM. "I just don't like the way we've done it." 
[...]
"If a school is high performing and kids are doing really well based on an objective set of facts, count me as all in," Murphy said. [emphasis mine]
So we need "an objective set of facts," huh? Well, Governor, I've got just the thing with which to start...

This week, Julia Sass Rubin, Professor at Rutgers University in the Bloustein School of Planning and Public Policy, and yours truly released a new report: New Jersey Charter Schools: a Data-Driven View, 2018 Update. The report was funded by The Daniel Tanner Foundation, which funded our 2014/2015 series of reports on New Jersey charter schools.

If the reaction to this latest report is anything like the reaction to the previous series, you're probably going to see some serious pushback to our work over the next few weeks. So I want to spend the next few posts here going over exactly what Julia and I did in this report, and why we both believe Governor Murphy is correct in wanting to give serious thought to overhauling New Jersey's charter school laws and regulations.

But let me start with an overview:

- A lot of money is transferred away from New Jersey's public district schools to its charter schools.



This graph didn't make it into the final report, but it's still instructive. Year after year, charter schools are taking a larger share of the state's total school funding. This is highly problematic, as charter schools create redundant systems of school administration. Yet the state has not bothered to take a serious look at what this means for the overall fiscal health of NJ's public school system.

- The effects of charter proliferation in New Jersey are much more widespread than commonly reported.

The discussions around New Jersey charter schools mostly focus on their impacts in places like Newark and Camden. Unquestionably, these are the communities that feel the effects of charter school growth the most -- but they aren't the only ones. There are charter schools in New Jersey that draw from over 40 different districts, which means the fiscal effects of charter growth are felt in public school districts all over the state.

- NJ charter schools do not enroll as many students with special education needs as public, district schools.



This data actually mirrors similar data presented by the New Jersey Charter Schools Association. It's a simple fact: the students in the charters are much less likely to be classified as having a learning disability compared to those in the public district schools. It amazes me that anyone would try to argue this point.

- NJ charter schools do not enroll as many students who are Limited English Proficient (LEP) as public, district schools.



Again, it's pointless to argue about this. This is the state's own data, and the pattern is very clear.

- There is wide variation in the differences in student socio-economic status between NJ's charter and district schools.



There are communities where the charter student population has close to the same proportion of free lunch-eligible students as the public school district. But there are many places where the charter population is very different from the district school population. In some places, the charters enroll many more FL students; in others, far fewer. Both of these situations are cause for concern.

It's also worth noting that free lunch eligibility may be an increasingly unreliable measure of student socio-economic status. If we care about the segregative effects of charter schools, we need to start collecting better data.

Again, I'll get into these individual points over the next few posts. But let me conclude this introductory post with this thought:

In New Jersey, a local community has no say in whether it has to pay for resident students to attend charter schools. This includes many towns where charters are not located. If a resident family in your school district wants their child to attend a charter school in a town miles away from yours, your town's taxpayers must still come up with the money to subsidize that "choice."

In other words: the power to approve, regulate, and expand charter schools is not aligned with the fiscal burdens of paying for those charters. This is a serious problem that must be addressed in any future legislative overhaul.

Much more to come -- stand by...

Tuesday, March 13, 2018

Betsy DeVos's Florida Fantasy

Betsy DeVos's incoherent mess of an interview with Lesley Stahl (who I thought did a good job) on 60 Minutes Sunday night has got to be one of the most embarrassing performances by a sitting cabinet member in modern times.

I'm not sure my stomach could take a complete debunking of all of DeVos's nonsense. But I would like to focus on one exchange:
Lesley Stahl: Why take away money from that school that's not working, to bring them up to a level where they are-- that school is working?
Betsy DeVos: Well, we should be funding and investing in students, not in school-- school buildings, not in institutions, not in systems.
Lesley Stahl: Okay. But what about the kids who are back at the school that's not working? What about those kids?
Betsy DeVos: Well, in places where there have been-- where there is-- a lot of choice that's been introduced-- Florida, for example, the-- studies show that when there's a large number of students that opt to go to a different school or different schools, the traditional public schools actually-- the results get better, as well. [emphasis mine]
Stahl goes on to ask about how choice has worked in Michigan -- as well she should. But I'd like to take a minute or two to examine what DeVos thinks she knows about Florida and school "choice."

Any time a policymaker talks about what "the studies show," watch out. Some of them are adept at reading and synthesizing research, but many are not; too often, they let their staffs, who tend to have cursory training in research methods (especially quantitative methods), assemble the evidence so that it matches their ideological predilections.

In DeVos's case, it's been clear for years that she supports school "choice," no matter what "the studies show." DeVos has publicly admitted that her advocacy for school vouchers is driven by her religious faith. In that way, she's no different from Milton Friedman, whose voucher advocacy was likewise based not on empirical evidence but on ideology.

The theory behind school "choice" has largely rested on the notion that competition forces improvements in schools; therefore, if we want to improve public, district schools, we should threaten them with losses of enrollments by introducing "choice" through a market-based system subsidized by taxpayers.

So, what do "the studies show" about school vouchers in Florida? Let's ask Patrick Wolf, writing here with Anna Egalite. Wolf is well-known within education policy circles as one of the foremost advocates for school "choice" in academia:
The competitive effects of Florida’s various voucher programs have been the subject of nine studies. All of them reported that the test scores of students who remained in public schools increased as a result of school choice competition. Although these positive effects of competition on public school achievement tended to be small, they were larger when school choice increased dramatically (Forster, 2008a). [emphasis mine]
I'm going to object to that last phrase. Forster's study -- which was not peer-reviewed, and has some serious methodological weaknesses (some of which are recounted in a review of a related study here) -- is hardly enough evidence to suggest that expanding school choice leads to better outcomes in public schools. Forster's study really does nothing to control for all kinds of things aside from vouchers that might explain rising test outcomes.

That said, Egalite and Wolf are right: the effects of competition on Florida public schools were small.

Really small.

Let's take, as our best available example, the latest study on Florida's vouchers they reference: a 2014 peer-reviewed paper by David Figlio and Cassandra Hart. It's a clever piece of econometric work; not airtight by the authors' own admission (what is?), but still well worth considering. The authors exploit the fact that competition from vouchers varied considerably across Florida during the period under study: some schools, for example, have a private school nearby, while others have one further away. Some areas have a variety of private schools; some have only one type (say, evangelical).

Figlio and Hart looked at how this variation in competition correlated with outcomes for public schools. What did they find? If you're not used to reading this sort of research, it might be hard to grasp, so let's break it down:
Every mile the nearest private school moves closer, public school student test score performance in the post-policy period increases by 0.015 of a standard deviation.
The study excluded schools that were more than five miles from a private school, and the average public school was about 1.3 miles from a private school. What this study found was that if you put a private school a mile closer to a public school that already had one within five miles, the test scores at the local public school would increase from the 50.0 percentile to the 50.6 percentile.

Not impressed? Try this:
Adding 10 nearby private schools (just shy of a standard deviation increase in this measure) increases test scores by 0.021 of a standard deviation.
The average public school had 15.4 private schools within five miles. Add 10 more and you'll move the school from the 50.0 percentile to 50.8.
Each additional type of nearby private school is associated with an increase of 0.008 of a standard deviation. Adding an additional 100 churches in a 5 mile radius (a nearly one standard deviation increase) is associated with a 0.02 standard deviation rise in scores, and adding an additional 300 slots in each grade level in a 5 mile radius (just over a 1 standard deviation increase in this measure) increases scores by 0.027 standard deviations. Overall, a 1 standard deviation increase in a given measure of competition is associated with an increase of approximately 0.015 to 0.027 standard deviations in test scores. [emphasis mine]
In other words: increasing the competition measures by what is a very substantial amount results in moving test outcomes from the 50.0 percentile to between the 50.6 and 51.1 percentile. This is the most generous interpretation using this conversion.
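If you want to check those conversions yourself, the arithmetic is nothing exotic: for a student starting at the mean, an effect of d standard deviations lands at the normal CDF of d. A quick sketch:

    from scipy.stats import norm

    # An effect of d standard deviations moves a student at the mean
    # (the 50.0 percentile) to the norm.cdf(d) percentile.
    for d in (0.015, 0.021, 0.027):
        print(f"{d} SD -> {100 * norm.cdf(d):.1f} percentile")

    # 0.015 SD -> 50.6 percentile
    # 0.021 SD -> 50.8 percentile
    # 0.027 SD -> 51.1 percentile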
While these estimated effects are modest in magnitude, they are precisely estimated and indicate a positive relationship between private school competition and student performance in the public schools even before any students leave the public sector to go to the private sector.
Well, yes, they are precisely estimated -- that's easy to do when you have a really big data set with over 9 million student-year observations.

But these results are not "modest" -- they are tiny. They represent no meaningful educational impact. To say, as DeVos does, that "the results get better" is just not accurate in any practical sense.

Now, voucher proponents could make the case, based on this study, that there is no evidence that the schools got worse. But I think that argument fails for at least a few reasons:

First, test score gains or losses are a very poor measure of whether a public school suffers fiscal stress due to the diversion of funds. Instruction in tested subjects would be the last thing a school district cuts if it's under competitive and accountability pressure. The question is what happens to instruction and programming not related to tests: extracurriculars, arts, history, science, student support services, etc. The truth is, we just don't know.

Second, if "choice" is introduced as a substitute for things like adequate and equitable funding, the overall progress of the system will be impeded. The sad fact is that the "Florida Miracle" has been grossly oversold; the state is a relatively poor performer compared to other states that make more of an investment in public education. Can that all be attributed to policy? No, of course not... but Florida is a state that makes little effort to fund its schools.

In any case, DeVos's contention that public, district schools see improvement when there is competitive pressure is just not borne out, in any practical sense, by research like this. As I said in my last post, the effect sizes of interventions like this are almost always small. In this case, the effect is exceptionally small; in practical terms, it's next to nothing.

The idea that we're going to make substantial educational progress by injecting competition into our public education system just doesn't have much evidence to support it. I wish I could say that conservatives like DeVos were the only ones who believe in this fallacy; unfortunately, that's just not the case. Too many people who really should know better have put their faith in "choice," rather than admitting that chronic childhood poverty, endemic racism, and inequitable and inadequate school funding are at the root of the problem.

As always: I'm not saying we can't and shouldn't improve our public schools right now as best as we can. But DeVos's policy of expanding Florida-style school choice as a way of improving public schools makes as little sense as her policy of arming teachers to improve school safety. Neither policy has any empirical support, because both are clearly illogical.

School "Choice's" Best Friends

Monday, March 5, 2018

Things Economists Should Start Saying About Education Research

If there's one thing I find helpful about Jonathan Chait's work, it's that every now and then he gives us a "State Of The Reformy" piece that serves as a useful encapsulation of the current arguments among the neoliberal set for education "reform."

Chait's piece this time is especially notable because it's an explicit attempt to distance the Obama administration's education policies from those being pushed by the conservatives who have been emboldened by Donald Trump's win and, subsequently, Secretary of Education Betsy DeVos's rise to power.

Desperately, Chait wants to convince us that the agenda pushed by Arne Duncan and John King, Barack Obama's SecEds, represented some sort of middle ground between the hard-right's dream of privatizing education, and the left's indifference to the "failure" of American schools (allegedly a direct result of the vast influence and vast perfidy of teachers unions).

Chait's political argument is so silly it's almost not worth addressing: thankfully, Peter Greene, once again, does most of the work so I don't have to. The fact is that Chait makes sweeping generalizations about the left (and, for that matter, the right) that are absurd [emphases mine]:
"Left-wing policy supports neighborhood-based public schools, opposes any methods to measure or differentiate the performance of teachers or schools, and argues instead for alternatives to school reform like increased anti-poverty spending or urging middle-class parents to enroll their children in high-poverty schools."
And:
"Unions that oppose subjecting their members to any form of measurement joined forces with anti-government activists on the right to protest Common Core and testing."
Chait, of course, gives no examples of unions not wanting to hold teachers accountable for their practice -- because the notion is nonsense. The unions have never -- never -- held the position that bad teachers should be allowed to continue to teach without any remediation or consequence. What they have insisted, quite correctly, is that there be due process in place as a check against abuses of power, so that the interests of students and taxpayers, as well as teachers, can be protected.

Now, as much fun as it might be to knock down all of Chait's straw men, I'd like to instead focus on something else from his piece. Because Chait, like all education policy dilettantes, likes to dress up his arguments with references to education research -- specifically, research conducted by economists. Throughout his piece Chait includes links to a variety of econometric-based research, all purporting to uphold his claims for the efficacy of reformy policies.

I have no problems with economists. Literally, some of my favorite people in the world are economists. And I enjoy a good regression as much as the next guy. Useful work has been done by economists in the education field. I can honestly say my thinking about things like charter schools and teacher evaluation has been shaped by my study of econometric research into those topics.

However...

It has been my observation over the years that economists working in education have not been as forthcoming as they should be about the limitations of their work. And this has led pundits and policymakers like Jonathan Chait -- and, for that matter, Arne Duncan and John King -- to draw conclusions about education "reform" that are largely unsupportable.

Chait's piece here is an excellent example of this problem. So allow me to take a pointed stick and poke it into the econometric beehive; here are some things everyone should understand about recent research on things like charter schools and teacher evaluation that too many economists never seem to get around to mentioning.

* * *

- Charter school lottery studies are not "perfect natural experiments." The economists who conduct these studies are often quite eager to tout them as "exactly the research we need" to make policy decisions about the effects of charter school proliferation. I am here to tell you in no uncertain terms: they are not.

The theory behind charter lottery studies is that the randomization of the lottery controls for all unobserved (better understood as unmeasured) differences between students that might account for differences in the effects being studied. In the case of charter schools, we might assume (quite correctly) that different parents approach enrolling students into charters in different ways.

Parents who care more about their child's outcomes on tests, for example, may be more likely to enroll their child in a charter school if their local public school has low test scores. These parents may be more diligent about making sure their child completes homework or attends school, which could lead to higher test scores.

The economists who conduct these studies are assuming, because assignment to charter schools is random in lotteries, that the differences in these unobserved characteristics of students and their families will be swept away by their experiment. There are some other assumptions built into this framework, but it's generally a reasonable theory...

Except it only applies to students who enter the lottery. If students who enter charter lotteries under one set of conditions differ from students who enter under another -- and there is plenty of reason to believe that they do -- we can't generalize the findings of a charter lottery study to a larger population. In other words: even if we find an effect for charter schools, we can't know that effect will be the same if we expand the system.

Further, we can only generalize the results of lottery studies to charter schools that are popular enough to be oversubscribed. In other words: if there's no lottery because student enrollment is low, we can't conduct the experiment. In addition, we are starting to get some evidence that charters, which have redundant systems of school administration and often can't achieve economies of scale, are putting fiscal pressure on their hosting public district schools.

While these points are sometimes mentioned in the academic, peer-reviewed papers based on these studies, I rarely see them acknowledged when economists discuss these studies in the popular press. Nor do I see any discussion of the fact that...

- Studies of education "reforms" often have very fuzzy definitions of the treatment. The treatment is, broadly, the policy intervention we care about. For charter schools, the treatment is taking a school out of public district control and putting it under the administration of a private, non-state entity. The problem is that there are often differences between charters and public district schools that are not what we care to study, and these differences often get in the way of the things we do care about.

Here's a completely hypothetical pie chart from an earlier post. Let's imagine all the differences we might see between a charter school and a counterfactual public district school.


We know charter school teachers generally have a lot less experience than public district school staff, which makes staffing costs cheaper. There is almost certainly a free-rider problem with this, leading to a fiscal disadvantage for public district schools. But that's good for the charters: they can extend their school days and still keep their per pupil costs lower than the public schools.

But is this a treatment we really want to study? Shouldn't we, in fact, control for this difference if what we want to measure is the effect of moving students to schools under private control? Shouldn't we control for peer effects and attrition and resource differences if what we really care about is "charteriness"?

The economists who conduct this research often refer to their treatment as "No Excuses." What they don't do, so far as I've ever seen, is document the contrast in the implementation of their treatment between charter schools and counterfactual public district schools. In other words: do we really know that "No Excuses" practice varies significantly between charters and public schools?

A lot of the research into charter school characteristics is, frankly, cursory. Self-reported survey answers with a few videos of a small number of charters in one city is not really enough qualitative research to give us a working definition of "No Excuses" -- especially when there's no data on the contrasting schools that supposedly don't adhere to the same practices.

So, no, we can't attribute the "success" of certain charter schools to their practices or organization -- at least, not based on these econometric studies. And we really need to step back and think about what we're using to define "success"...

- Test scores have inherent problems that limit their usefulness in econometric research. I keep a copy of Standards for Educational and Psychological Testing within arm's reach when I review education research that involves testing outcomes. And I have Standard 13.4 (p. 210) highlighted:
Evidence of validity, reliability, and fairness for each purpose for which a test is used in a program evaluation, policy study, or accountability system should be collected and made available.
This is a process that has been largely ignored in much of the econometric research presented as evidence for all sorts of policies. The truth is that standardized tests are, at best, noisy, biased measures of student learning. As Daniel Koretz points out in his excellent book, The Testing Charade, it is quite easy to improve test scores by giving students strategies that have little to do with meaningful mastery of a domain of learning.

Koretz also notes that multiple charter school leaders have explicitly said that improving test scores is the primary focus of their schools' instruction. As Bruce Baker and I note, there is at least some evidence that these improvements came at the expense of instruction in non-tested domains. That lines up with a body of evidence that suggests increased accountability, tied to test scores, has narrowed the curriculum in our schools.

I'm the last person to say we shouldn't use test scores to conduct research. But when the test score gains that econometric research shows are marginal, we should all stop and consider for a bit whether we're seeing gains that represent real educational progress. And many of these studies show gains that are quite marginal...

- Compared to the effects of student background characteristics -- especially socio-economic status -- the effect sizes of education "reforms" are almost always small. Student background characteristics are by far the best predictors of a test score. We know for a fact that poverty greatly affects a child's ability to learn.

The claim of reformers, however, is often that education can be a great equalizer, leading to more equitable outcomes in social mobility. Over and over, the charter sector has claimed it is "closing the achievement gap," implying the education it offers is equivalent to that of the leafy 'burbs and that its students, therefore, are overcoming the massive inequities built into our society.

I made this chart a while ago, which compares the effects of charter schools, as measured by the vaunted CREDO studies, to the 90/10 income achievement gap:


The income achievement gap has actually been growing over the years; it now roughly stands at 1.25 standard deviations. No educational intervention I have seen studied using econometric methods comes close to equaling this gap. As Stanley Pogrow notes, economists seem to be all too happy to have the effect sizes they find declared practically meaningful, when often there is little to no evidence to support that conclusion.
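Some quick, back-of-the-envelope arithmetic makes the point. Take the 0.07 SD effect from the Newark study discussed below as a typical intervention effect and compare it to the gap:

    # Comparing a typical intervention effect to the 90/10 gap.
    gap = 1.25     # 90/10 income achievement gap, in standard deviations
    effect = 0.07  # a typical intervention effect size (see below)

    print(f"share of the gap closed: {effect / gap:.1%}")       # 5.6%
    # A naive extrapolation -- which, as discussed next, you really
    # shouldn't make -- would need the effect to stack up for years:
    print(f"years of accumulation needed: {gap / effect:.0f}")  # 18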

One of the arguments made by some researchers is that these effects are cumulative: the intervention keeps adding more and more value to a student's test score growth, so that, eventually, Harlem and Scarsdale meet up. Except, as Matt DiCarlo points out in this post, you really shouldn't do that -- at least, you should only do so after pointing out that you're only making an extrapolation.

This is often where economists get into trouble: trying to translate their effects into more understandable terms...

- The interpretation of effect sizes into other measures, such as "days of learning," is often highly questionable. The CREDO studies have led the way in translating effect sizes into layman's terms -- with indefensible results. As I've pointed out previously, the use of "days of learning" in this case is wholly invalidated, if only because there is no evidence the tests used in the research have the properties necessary for conversion into a time scale. And the documentation of the validation of this method is slipshod -- it's really just a bunch of links to studies which in no way validate the conversion.
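To see what such a conversion entails, consider the general form these translations take. The sketch below uses deliberately made-up parameters, not CREDO's actual figures -- which is the point: the "days" you get depend entirely on an assumed, test- and grade-specific annual growth rate that the underlying tests haven't been shown to support.

    # A generic "days of learning" conversion. Both parameters are
    # illustrative assumptions, not CREDO's actual figures.
    ANNUAL_GROWTH_SD = 0.25  # assumed student growth per year, in SD
    SCHOOL_DAYS = 180        # school days in a year

    def days_of_learning(effect_sd):
        """Linearly rescale an effect size into 'days of learning'."""
        return effect_sd / ANNUAL_GROWTH_SD * SCHOOL_DAYS

    # The same 0.05 SD effect is "36 days" if annual growth is 0.25 SD,
    # but only "18 days" if it's 0.50 SD. The label is only as good as
    # the growth assumption behind it.
    print(days_of_learning(0.05))  # 36.0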

Recently, a study came out about interventions in Newark's schools and their effects on test scores. As I note in the review I did with Bruce Baker (p. 27), the effect size found of 0.07 SD was compared to "the impact of being assigned to an experienced versus novice teacher." But that comparison was based on a single study, by one of the authors, which compared teachers in Los Angeles who had no experience to those who had two years. This is hardly enough evidence to make such a sweeping statement.

In another interpretation, 0.07 SD moves test scores for a treatment group from the 50th to the 53rd percentile. Small moves like these are very common in education research...

- The influence of teachers on measurable student outcomes is practically small. I am a teacher, and I think what I do matters. I think I make a difference in the lives of my students in many ways, most of which can't be quantified. 

But in the aggregate, there is little evidence teachers have anywhere near the effect on student outcomes as out-of-school factors. As the American Statistical Association notes, teachers account for somewhere between 1 and 14 percent of the variation in test scores -- and we're not even sure how much of that is really attributable to the teacher.

One study that is cited again and again to show how much teachers matter is Chetty, Friedman, and Rockoff. It's a very clever piece of econometric work, but in no way does it show that having a "great" teacher will change your life. Its effects have been run through the Mountain-Out-Of-A-Molehill-Inator to make it appear that teacher quality can have a profound influence on students' income later in life. But what it really says is that you'll earn about $5 a week more in the NYC labor market at age 28 if you had a "great" teacher (the effect at age 30 is not statistically significant).

Am I the only one who is underwhelmed by this finding?

* * *

Again: I have no objection to using test scores as variables in quantitative research designs. I will be the first to say there is evidence that policy interventions like charter schools in Boston or teacher evaluation in Washington D.C.* show some modest gains in student outcomes. It's valuable to study this stuff and use it to inform policymaking -- in context.

But simply showing a statistically significant effect size for a certain policy is not enough to justify implementing it. Some economists, like Doug Harris in this interview, make a point of stating this clearly. In my opinion, however, what Harris did doesn't happen nearly enough -- which leads to pieces like Chait's, where he clearly has no idea about the many limitations of the work he cites.

The question is: Whose fault is that? Have the researchers who inform our punditocracy's view of education policy done enough to explain how those pundits should be interpreting their findings?

Chait and others like him have the final responsibility to get this stuff right. But economists also have a responsibility to make sure their work is being interpreted in valid ways. I respectfully suggest that it's time for them to start taking some ownership of the consequences of their research. Explaining its limits and cautioning against overly broad interpretations would go a long way toward having better conversations about education policy.



* What they don't show is that student learning improved after a new teacher evaluation system was put in place. More on this later...