The fine folks at NCES look at the proficiency levels set on the various state tests across the nation, then map those levels onto the National Assessment of Educational Progress, or NAEP. Since a representative sample of students across the nation takes the NAEP, we can see what minimum score on the national test a student deemed "proficient" in one state would earn, and compare it to what a student in a different state would have to get to be "proficient" there.
Folks who complain about the "honesty gap" seem to think it's a big deal that states not "lie" to their students:
Parents deserve the truth. But unfortunately, in most states there is a significant gap between the NAEP scores and what states report as their proficiency rate. This “Honesty Gap” is not new and something many states acknowledged years ago.
We are on the right road to fixing this problem. Today, many states are mid-stream in taking the steps needed to address the Honesty Gap – mainly, the adoption of rigorous, comparable standards and high-quality assessments that give parents real information.
We can’t go backwards. Opponents of Common Core and high quality tests want to take states and the country backward. They offer no alternative plan to changing the trajectory of this data, the fact that parents don’t have the right information and that we are graduating kids that are not prepared for success in life.

Uh-huh. Gosh, thanks for not putting all the blame on teachers...
There is lots of blame to go around. Parents should not simply blame educators for the Honesty Gap – politicians have played a huge part in creating it. Both elected officials and some in the education establishment have not had the political courage to be honest and forthright with parents. And our kids have been the collateral damage.
The empirical question I'm asking here is simple: do states with higher proficiency levels get better test-based outcomes for their students? I suppose one could argue that it takes time for states to adjust, and that New York, for example, will have to wait a few years before any positive effects of jacking up its proficiency standards to absurdly high levels show up.
Still, it's reasonable to think that, were this reformy theory true, some sort of pattern would emerge, even with a few outliers. Most states with standards mapped to high scores on the NAEP should, in this construction, perform better on the NAEP. And states with low proficiency standards should, in theory, not perform as well.
But, as I showed last time, this is not the case (click to enlarge the graphs):
This is the graph of the mapped proficiency score (x-axis) against the actual mean scale score (y-axis) on the NAEP for Grade 8 reading. The NAEP also tests math, and in Grade 4; go back to the original post to see those graphs. What you'll see in every case is a "cloud" of points, which means there is no meaningful correlation between the mapped proficiency score a state sets and its actual performance on the test.
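For readers who want to check this themselves, here is a minimal sketch of the calculation behind that claim. The file name and column names below are mine, not NCES's: I'm assuming a hypothetical state-level CSV with each state's mapped proficiency cut score and its mean NAEP scale score, and the Pearson correlation between those two columns is what the scatterplot is showing.

```python
# A minimal sketch of the correlation check described above.
# Assumes a hypothetical state-level CSV ("state_naep_grade8_reading.csv")
# with one row per state and two made-up column names:
#   mapped_cut  - the state's "proficient" cut score mapped onto the NAEP scale
#   mean_score  - the state's actual mean NAEP scale score
import pandas as pd
from scipy import stats

df = pd.read_csv("state_naep_grade8_reading.csv")

r, p = stats.pearsonr(df["mapped_cut"], df["mean_score"])
print(f"Pearson r = {r:.2f} (p = {p:.3f})")

# A flat "cloud" of points corresponds to an r near zero and a large p-value:
# knowing where a state set its proficiency bar tells you essentially nothing
# about how its students actually score.
```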
Again: maybe it will take time for this correlation to emerge... except states have been setting different standards for "proficiency" for years. If it's so important to close the "honesty gap," why don't states with higher proficiency standards get higher scores?
Another rebuttal to my point might be this: the students who benefit from high standards are those who are traditionally "left behind," meaning economically disadvantaged students and members of traditionally underserved racial groups. OK...
That's easy enough to test; here's grade 4 reading for students who are eligible for the federal free or reduced-price lunch (FRPL) program, a proxy measure of economic disadvantage:
Again, a "cloud": there's no correlation between the mapped proficiency score and the actual score.
Now, a legitimate critique of what I've done here is that the populations of FRPL students differ from state to state. The average income of FRPL families may be lower in one state than in another, giving some states an advantage over others.
Luckily, I was able to borrow some data from you-know-who to address this. If we regress free-lunch (FL) scores on the average income for the FL population in a state, we should be able to account for these differences. So here's the same graph, but with adjusted FL scale scores in Grade 4 reading*:
Again, there's no correlation. Grade 8 reading:
Grade 4 math:
Grade 8 math:
As I said in the last post, the correlation with math is weak but statistically significant. However, most of it seems to be driven by the states at the bottom of the mapped proficiency range; remove a few data points and it pretty much disappears.
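For the curious, here is roughly how the adjustment behind these graphs works. This is a sketch under my own assumptions, not the actual analysis code, and the file and column names are hypothetical. Per the footnote, each state's free-lunch score is regressed on the average income of its free-lunch population, and the adjusted score is the residual expressed in standard deviations above or below the prediction.

```python
# A sketch of the income adjustment used for the graphs above: regress
# state-level free-lunch (FL) scale scores on the average income of the FL
# population, then express each state's score in standard deviations above
# or below its predicted value. File and column names here are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("state_fl_scores.csv")  # columns: fl_income, fl_score, mapped_cut

# Simple OLS: fl_score = a + b * fl_income
b, a = np.polyfit(df["fl_income"], df["fl_score"], deg=1)
predicted = a + b * df["fl_income"]

# Residuals in standard-deviation units (the "adjusted" scores plotted above)
residuals = df["fl_score"] - predicted
df["adjusted_score"] = residuals / residuals.std()

# The adjusted scores can then be plotted (or correlated) against the mapped
# proficiency cut scores, just as with the raw scores.
print(df[["mapped_cut", "adjusted_score"]].corr())
```

Re-running that correlation after dropping the handful of states with the lowest mapped cut scores is an easy way to see how much those few points drive the weak math result.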
What about racial categories? Here's math for black students in Grades 4 and 8:
Reading shows the same results, as do Hispanic students:
Let me say this again: I don't think setting reasonably high standards is a meaningless policy. We have to set them somewhere, so we may as well set them correctly. And there is a case to be made that some of the states with very low proficiency standards would do well to boost them, particularly in math.
But if we're really concerned about improving student achievement, setting proficiency standards should be a minor concern. There isn't any empirical evidence that supports the idea that the "honesty gap" is a major factor in determining test-based outcomes.
Can't we be honest about this?
Another child falls victim to the "honesty gap."
* Expressed in standard deviations above or below prediction.