The research on which the Los Angeles Times relied for its teacher effectiveness reporting was demonstrably inadequate to support the published rankings. Using the same L.A. Unified School District (LAUSD) data and the same methods as the Times, Derek Briggs and Ben Domingue of the University of Colorado at Boulder probed deeper and found the earlier research to have serious weaknesses that make the effectiveness ratings invalid and unreliable.
Last August, the Times published the results of a “value added” analysis of student test data, offering ratings of elementary schools and teachers in the LAUSD. The analysis was conducted by Richard Buddin, a senior economist at the RAND Corporation, as a project independent of RAND itself. He found significant variability in LAUSD teacher quality, as demonstrated by student performance on standardized tests in reading and math, and he concluded that differences between “high-performing” and “low-performing” teachers accounted for differences in student performance.
I'm sure the fine folks at the LA Times will be issuing both retractions of their incredibly incendiary and irresponsible reporting, and refunds to the advertisers who supported this dreck because the LAT convinced them that it would bring eyeballs to their website.

Yet, as Briggs and Domingue explain, simply finding that a value-added model yields different outcomes for different teachers does not tell us whether those outcomes are measuring what is important (teacher effectiveness) or something else, such as whether students benefit from other learning resources outside of school.
As for the teachers:
Seriously - the union should sue. This is a disgusting breach of journalistic ethics and the LAT should pay a heavy price for it.

Next, they developed an alternative, arguably stronger value-added model and compared the results to the L.A. Times model. In addition to the variables used in the Times’ approach, they controlled for (1) a longer history of a student’s test performance, (2) peer influence, and (3) school-level factors. If the L.A. Times model were perfectly accurate, there would be no difference in results between the two models. But this was not the case. For reading outcomes, the findings included the following:

• More than half (53.6%) of the teachers had a different effectiveness rating under the alternative model.

• Among those who changed effectiveness ratings, some moved only moderately, but 8.1% of those teachers identified as effective under the alternative model are identified as ineffective in the L.A. Times model, and 12.6% of those identified as ineffective under the alternative model are identified as effective by the L.A. Times model.
The math outcomes weren’t quite as troubling, but the findings included the following:

• Only 60.8% of teachers would retain the same effectiveness rating under both models.

• Among those who did change effectiveness ratings, some moved only moderately, but 1.4% of those teachers identified as effective under the alternative model are identified as ineffective in the L.A. Times model, and 2.7% would go from a rating of ineffective under the alternative model to effective under the L.A. Times model.
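The kind of comparison Briggs and Domingue report boils down to cross-tabulating each teacher's rating under two models and counting disagreements. Here is a minimal sketch of that bookkeeping, using invented teacher labels and ratings (none of this is LAUSD data or the authors' actual code):

```python
# Hypothetical sketch: comparing effectiveness ratings assigned to the
# same teachers by two different value-added models. Teachers and
# ratings below are invented for illustration.

def rating_changes(model_a, model_b):
    """Given two dicts mapping teacher -> rating ('effective',
    'average', or 'ineffective'), return (fraction of teachers whose
    rating differs at all, fraction flipping between the extremes)."""
    teachers = model_a.keys() & model_b.keys()  # teachers rated by both
    changed = sum(1 for t in teachers if model_a[t] != model_b[t])
    flipped = sum(
        1 for t in teachers
        if {model_a[t], model_b[t]} == {"effective", "ineffective"}
    )
    n = len(teachers)
    return changed / n, flipped / n

# Toy example with five hypothetical teachers:
times_model = {"T1": "effective", "T2": "average", "T3": "ineffective",
               "T4": "effective", "T5": "average"}
alt_model   = {"T1": "effective", "T2": "effective", "T3": "effective",
               "T4": "effective", "T5": "ineffective"}

changed, flipped = rating_changes(times_model, alt_model)
print(changed, flipped)  # 0.6 changed at all, 0.2 flipped between extremes
```

In this toy run, 60% of the teachers change rating and 20% flip all the way from "ineffective" under one model to "effective" under the other, which is exactly the sort of instability the 53.6% and 8.1%/12.6% figures above describe for reading.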
And then maybe politicians would give a second thought to implementing such an unreliable method for determining layoffs and merit pay. Not because they care about teachers so much as they care about not getting their asses sued off.