Friedrichs' arguments against agency fees never made any sense, but she never, at least as far as I read, made up statistics to bolster her point of view. Alas, she's now spending her time promoting school vouchers, likely in anticipation of the DeVos privatization parade that's marching into Washington.*
And she's engaging in statistical malpractice of the worst order to sell school voucher schemes.
Here's a slick little video from the folks at PragerU featuring Friedrichs, where she explains that students who use school vouchers -- which allow families to use taxpayer monies to pay private school tuition, including religious schools -- get better academic results than if they attended their local public school.
According to researchers at the University of Arkansas -- in the most comprehensive study done to date -- students in school choice programs saw their reading and math scores improve by 27 percent and 15 percent, respectively.
Sounds like something we should get behind, doesn’t it? But for millions of families in my home state of California and in many others, school choice is not a choice. [emphasis mine]
These numbers are alternative facts -- Friedrichs and the people who made this video got this completely wrong.
How do I know? Simple: The UArk researchers admit their work was misrepresented.
I never got an answer back. Let me explain what's going on here:
When someone says "Test scores improved by X percent!" you have to understand that it doesn't mean a thing unless you know the range of scores and where the starting points were.
Think about a test with scores from 0 to 100. If you get a 10 on your first try and your score improves by 15%, you now get an 11.5, an improvement of 1.5 points. If you start with a 50, however, a 15% improvement raises you to a 57.5; that's an improvement of 7.5 points. Is that "more" improvement? Not necessarily; maybe it's really hard to break past a score of 10, and really easy to pick up points higher up the scale.
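(If you like to check this sort of thing yourself, here's a quick back-of-the-envelope in Python -- nothing from the study, just the arithmetic above.)

```python
# The same "15% improvement" buys very different point gains
# depending on where you start on a 0-to-100 test.
for start in (10, 50):
    improved = start * 1.15
    print(f"start at {start}: 15% improvement -> {improved:.1f} ({improved - start:+.1f} points)")

# start at 10: 15% improvement -> 11.5 (+1.5 points)
# start at 50: 15% improvement -> 57.5 (+7.5 points)
```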
The point is that a "15% improvement" doesn't mean anything without context. So what researchers usually do is state their effect sizes in standard deviations. This is exactly what the UArk folks did in their study: .27 SD in reading and .15 SD in math. So how do we interpret this?
One way is to think about a normal, "bell curve" distribution of scores and how it would be divided up into percentiles:
This graphic is a little complex (but it's free, so...). See the second row from the top that says "Standard Deviations"? That's what we're talking about here. If you started right at the middle of the curve and improved by a quarter of a standard deviation, you'd move like this:
Using one of the many handy SD-percentile calculators on the web, that .27 is like moving from the 50th to the 61st percentile. The .15 is from the 50th to the 56th percentile. That's an improvement -- but to a layperson, I'm guessing it's a lot less impressive than Friedrichs' figures. Friedrichs' video completely misstates the effect sizes of school vouchers, arguably in a way that makes them appear larger than they are.
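If you don't feel like trusting a web calculator, here's a rough sketch of that conversion in Python. The percentile_after_gain function is my own illustration, not anything from the UArk report; it just assumes test scores follow a normal distribution, which is the same assumption those calculators make.

```python
from scipy.stats import norm

def percentile_after_gain(effect_size_sd, start_percentile=50):
    """Where a student starting at `start_percentile` lands after gaining
    `effect_size_sd` standard deviations, assuming normally distributed scores."""
    start_z = norm.ppf(start_percentile / 100)        # percentile -> z-score
    return norm.cdf(start_z + effect_size_sd) * 100   # shift by the effect, convert back

print(round(percentile_after_gain(0.27)))  # ~61: the reading effect in the video
print(round(percentile_after_gain(0.15)))  # ~56: the math effect in the video
```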
Before I go any further, let me point out a big limitation of this study. You see, the UArk folks did what's called a "meta-analysis," which basically gathers up the results from a bunch of other studies and kinda-sorta averages them out to get one result. Their formal working paper goes into all the details.
I won't re-litigate all the methodological concerns here, as Christopher Lubienski has already reviewed the report and found some rather serious concerns. But I will point out this:
The effects the UArk study found mostly disappear when only considering voucher programs in the United States. From page 31 of the report:
Figure 1 presents the global ITT reading impacts. The offer of a voucher has a statistically significant and positive impact of about 0.17 standard deviations [95% CI: 0.15, 0.20]. This overall effect is driven by four programs that had positive effects with 95% confidence (one in the US and three outside of the US). Comparing the six US and three non-US programs that we had reading impacts for, we see that the US programs had an overall effect that was barely a null effect, but tended towards a positive effect [95% CI: -0.00, 0.08]. On the other hand, the programs outside of the US had a more definitive positive impact on reading scores of 0.24 standard deviations [95% CI: 0.21, 0.27]. [emphasis mine]

The effect of 0.04 given in the figure moves you from the 50th to the 52nd percentile. In math, the US effect size drops to 0.07: from the 50th to the 53rd percentile.
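Plug the US-only numbers into the same little conversion sketched above and you get the same tiny moves:

```python
print(round(percentile_after_gain(0.04)))  # ~52: the US reading effect
print(round(percentile_after_gain(0.07)))  # ~53: the US math effect
```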
Of course, what you can't determine in a study like this -- even when talking about these small practical effects -- is why you got the gains. Is it peer effects? Those are real, but hard to replicate: this isn't Lake Wobegon, where all the children are above average (see Bruce Baker for more on this).
But does it really matter? Even in a meta-analysis with some really questionable biases, the practical effects of voucher programs on test scores in the United States are very, very small.
Look, I'm willing to have a conversation about school vouchers. There might be some positive effects from private school enrollment that aren't found in test scores. But we have to weigh those against the harm voucher programs might do to public schools, particularly since there's at least some evidence that voucher students would have attended private schools even without taxpayer funds -- which means vouchers create an extra financial burden on the system. And that money's got to come from somewhere...
But we're never going to have a serious conversation about this so long as the voucher industry is willing to put folks like Rebecca Friedrichs in front of the public to spout a bunch of alternative facts. Instead, we'll get a lot of blather about "choice" from people who appear to have no idea what they are talking about.
Let's ask a little better of ourselves, shall we?
"Again I tell you, it is easier for a camel to go through the eye of a needle than for a rich person to enter the kingdom of God." - Matthew, 19:24.
ADDING: Hunt around a little on the PragerU site. Among other things, you can watch a piece that asks: "Is Islam a religion of peace?"
Seriously.
Are all of you "social justice reformers" OK with working with these kinds of folks?
* Assuming, of course, that DeVos actually gets confirmed. At this point, we've seen enough cowardice from Republicans in the face of Donald Trump's screaming incompetence and bigotry to assume she'll squeak in. I'd love to be wrong, though...