
Friday, June 29, 2012

Rhee Gets Drill & Kill Wrong

I wrote earlier this week about the drubbing Michelle Rhee got on the BBC from the British teachers' union leader, Mary Bousted.

One of the difficulties in debating Rhee is that when she opens her mouth and the spin comes flying out, it's hard to know what to debunk first; I mean, there's just so much of it, so where do you start? But here's one claim she made, one that Bousted never got to rebut, that I'd like to put to rest right now:
(3:24) The research is very clear that teachers who teach to the test actually... don't... their kids don't do better academically. The kids who do the best academically, on tests, as measured by tests, are the teachers who teach a broad base of skills... and critical thinking skills and analytics. So teachers who are really paying attention to what works are never going to teach to the test.
Hmm... what research is that? Well, in this video, Rhee tells us:


(3:40) The Gates Foundation put out a study about a year ago that showed that teachers who teach to the test, meaning they do the drill-and-kill with the kids, actually those kids do not do as well on the test, don't do as well academically, as those teachers who teach high order thinking skills, and critical thinking and analytical skills.

Rhee is clearly talking about the Gates MET Project, which came out in 2010. She is claiming that the study shows that teachers who do not drill-and-kill have students who do better on standardized tests.

Is it true? Do teachers who don't train their students to pass bubble tests get better results on those tests? Has common sense been suspended?

Jay Greene dug into this claim back when the MET study's initial findings were released. From his analysis at the time:
The New York Times reported on these findings [in December 2010] and repeated the following strong claim:
But now some 20 states are overhauling their evaluation systems, and many policymakers involved in those efforts have been asking the Gates Foundation for suggestions on what measures of teacher effectiveness to use, said Vicki L. Phillips, a director of education at the foundation.
One notable early finding, Ms. Phillips said, is that teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains than those who simply work their way methodically through the key concepts of literacy and mathematics. (emphasis added)
I looked through the report for evidence that supported this claim and could not find it. Instead, the report actually shows a positive correlation between student reports of “test prep” and value added on standardized tests, not a negative correlation as the statement above suggests. (See, for example, Appendix 1 on p. 34.)
The statement “We spend a lot of time in this class practicing for [the state test]” has a correlation of 0.195 with the value-added math results. That is about the same relationship as “My teacher asks questions to be sure we are following along when s/he is teaching,” which is 0.198. And both are positive.
It’s true that the correlation for “Getting ready for [the state test] takes a lot of time in our class” is weaker (0.103) than for other items, but it is still positive. That just means that test prep may contribute less to value added than other practices; it does not support the claim that “teachers who incessantly drill their students to prepare for standardized tests tend to have lower value-added learning gains…”
In fact, on page 24, the report clearly says that the relationship between test prep and value-added on standardized tests is weaker than that of other observed practices, but it does not claim that the relationship is negative:
The five questions with the strongest pair-wise correlation with teacher value-added were: “Students in this class treat the teacher with respect.” (ρ=0.317), “My classmates behave the way my teacher wants them to.” (ρ=0.286), “Our class stays busy and doesn’t waste time.” (ρ=0.284), “In this class, we learn a lot almost every day.” (ρ=0.273), “In this class, we learn to correct our mistakes.” (ρ=0.264). These questions were part of the “control” and “challenge” indices. We also asked students about the amount of test preparation they did in the class. Ironically, reported test preparation was among the weakest predictors of gains on the state tests: “We spend a lot of time in this class practicing for the state test.” (ρ=0.195), “I have learned a lot this year about the state test.” (ρ=0.143), “Getting ready for the state test takes a lot of time in our class.” (ρ=0.103) [second and third emphasis mine]
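To make the statistical point concrete: the whole dispute turns on the difference between a weak positive correlation and a negative one. Here's a quick toy simulation, with invented numbers (emphatically not the actual MET data), of what a correlation around 0.1, like the 0.103 for that last survey item, looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # imaginary classrooms

# Hypothetical 1-5 survey score for "getting ready for the state test
# takes a lot of time in our class" -- made-up numbers, NOT the MET data.
test_prep = rng.uniform(1, 5, n)

# Simulated value-added: weakly but POSITIVELY tied to test prep, plus
# lots of noise, to mimic a correlation in the report's 0.1-0.2 range.
value_added = 0.1 * test_prep + rng.normal(0.0, 1.0, n)

rho = np.corrcoef(test_prep, value_added)[0, 1]
print(f"simulated correlation: {rho:+.3f}")  # small, noisy... and positive
```

A correlation like that is weak, sure; it just isn't negative. "Weakest predictor of gains" means the relationship is small, not that it points the other way, and the Times (and Rhee) turned the one into the other.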
Apparently, Jay Greene got some crap for pointing out that the Gates study does NOT say what Rhee claims it says: there most certainly is a positive correlation between test prep and test scores. May I also point out that surveying 13-year-olds about their teachers' practices probably isn't the most accurate way to gauge whether those teachers are actually drilling-and-killing; after all, it's not like the kids have had many other teachers to judge them against.

What it all comes down to is this: teachers teach the test when the test tests teachers. Duh.

I've asked this before, and I'll ask it again: why does anyone listen to anything Michelle Rhee has to say?


Oh, yeah, right...
