The PARCC-skeptical side was represented by Wendell Steinhauer, president of the New Jersey Education Association, the state's largest teachers union. The pro-PARCC side was argued by Dr. Sandra Alberti, former director of Academic Standards for the NJ Department of Education.
I thought Alberti's answers were quite representative of the pro-PARCC arguments I hear from a variety of sources. So I'd like to focus on one particular statement she made at 49:50 into the debate. Steinhauer had made the point that the overemphasis on standardized testing was leading to a narrowing of the curriculum, with test prep taking over too much time and focus. Alberti wasn't buying it:
(49:50) DR. SANDRA ALBERTI: When it's an assessment [the PARCC] that actually asks you to think, test prep looks like we're thinking in schools every day, and those are the schools I want for my children, those are the schools I want for this nation.
Test prep, the way it's designed for PARCC, that kind of drill is ineffective; it won't work for PARCC. There are lots of high-performing charter schools who are in a panic right now because their typical systems of give a standard drill, test, drill, test, doesn't work.
The whole design of this assessment is to be an assessment worthy, worthy of instructional imitation and that's what we want. I don't want one minute wasted on test prep if it's a test not worth knowing.
These are tests that are worth it, and I get it that we don't have enough transparency about this. I get it that I'm selling like this fairy tale of what a test could be. But I think it's too early to pass this judgment based on some released items that will never make it on a real test, they weren't field tested like the real items were. They're a communication piece. They're not the real test.
This is a test that I can tell you, as somebody who has read every single NJASK and HSPA item that we have given, generations better than what we've ever seen for our kids.
So if our argument is no standardized testing, that's one thing. But if it is standardized testing with a test that is of quality, we have never had a better opportunity in our state or in our country than we have right now.
As I said, I think Alberti's presentation here is very typical of the pro-PARCC side -- both in content and in tone. So it's well worth breaking down:
- If the PARCC isn't really like the practice test items released on the parcconline.org website, what in the hell is the point of releasing them?!
Over and over, the pro-PARCC side told us that these tests were going to be the most transparent assessments we've ever had. The PARCC consortium has bragged about how many items they are going to release after the tests. They themselves said the sample items they released have been thoroughly vetted:
Washington, D.C. - August 19, 2013 - The Partnership for Assessment of Readiness for College and Careers (PARCC), a 19-state consortium working together to create next generation assessments, today released additional sample items for both English language arts/literacy and mathematics. The sample items show how PARCC is developing tasks to measure the critical content and skills found in the Common Core State Standards (CCSS). The sample items have undergone PARCC's rigorous review process to ensure quality and demonstrate the content that will be on the assessments in 2014-2015.

"We are developing a high quality next-generation assessment system to measure the content and skills students need to succeed at the next level and in college and career," said Massachusetts Elementary and Secondary Education Commissioner Mitchell Chester, chair of the PARCC Governing Board. "The sample items we are releasing today will help to provide a clear signal to educators, parents, and others about the rigor of the test." [emphasis mine]

Again: this is PARCC themselves saying the sample items are representative. Now, all of a sudden, we're supposed to believe they're not?
At this point, we've had lots of parents and teachers go to "Take the PARCC" events at their schools. Many more have gone to the PARCC website to view the items. And many, many stakeholders -- on the basis of samples the PARCC people themselves say are representative -- have come away genuinely wondering how anyone could think these tests were in any way "better" than previous standardized tests.
If Alberti or anyone else would like to make the case that the PARCC is really, truly better, they'd do well to start with explaining how the items we all can see and judge for ourselves provide any evidence for their claims.
- There is no evidence test prep for the PARCC is a superior form of instruction compared to test prep for any other standardized test.
I won't claim special expertise in learning and cognition theory, but I've had enough training and experience to know that what we're really dealing with on standardized tests like the PARCC is what is often referred to in education research as "transfer."
When a student transfers learning, they find ways to take their previous learning and apply it to new contexts. When a student takes the PARCC, for example, we are looking to see that student take their previous knowledge of language arts or math and use it to solve a novel problem. In other words (and this is a gross simplification): if we've taught a kid that 5 bags with 3 apples each is 15 apples, will he know how to divide those 15 apples into only 3 bags? Can we take a concept the child has learned and get him to use it in a unique way?
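To spell out the toy arithmetic (my made-up numbers from the example above; nothing deeper going on), the taught fact and the transfer question are the same relationship run in two directions:

$$5 \text{ bags} \times 3\,\tfrac{\text{apples}}{\text{bag}} = 15 \text{ apples} \quad \text{(taught context)}$$
$$15 \text{ apples} \div 3 \text{ bags} = 5\,\tfrac{\text{apples}}{\text{bag}} \quad \text{(transfer context)}$$

The arithmetic is trivial; the cognitive leap of recognizing that the second question is the first one backwards is not.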
Again, I claim no authority in this area, but I do know this for sure: transfer is really, really tough. The context in which learning takes place matters enormously; further, the proposition that the measure of a student's ability to transfer knowledge is reflective of his teacher's or school's ability to deliver instruction is highly dubious.
One of the claims about the "failure" of public education that always cracks me up is when some corporate stooge gets up in front of a crowd and tells some story about how he can't find workers to do his particular task in his particular way. It never occurs to these guys that the schools can't possibly anticipate every context in which learning may need to be transferred. They want workers who can walk into their businesses immediately and start doing their jobs; sorry, but that's just not reasonable. You have to train people in the context in which you expect them to perform.
Which gets us back to test prep. If you really want your students to pass a test, you're going to instruct them on how to pass it. You're not going to waste time instructing them in different contexts; the only context you care about is the one that matters.
The PARCC is a test; it is its own context and, as such, its results are subject to limitations on transfer as much as any other test's. Pretending it mirrors the "real world" better than any other standardized test flies in the face of everything we know about human learning and common sense. Which brings me to my last point (for now):
- We don't even know if the PARCC is measuring the effectiveness of instruction.
Give the PARCC folks credit: they published a brief that admits they really don't know if the PARCC is "instructionally sensitive" or not:
One area of interest as the PARCC assessments are developed and implemented is instructional sensitivity. In particular, PARCC is interested in considering studies to examine the instructional sensitivity of items and assessment components as the CCSS become more widely and effectively implemented over time.
At one level, instructional sensitivity has a relatively straightforward definition. According to Popham (2007), “A test’s instructional sensitivity represents the degree to which students’ performances on that test accurately reflect the quality of the instruction that was provided specifically to promote students’ mastery of whatever is being assessed” (p. 146). In a recent in-depth article on instructional sensitivity in Educational Measurement: Issues and Practice, Polikoff (2010) traced the meaning and origins of instructional sensitivity, connecting it to instructional alignment, opportunity to learn, and the criterion-referenced testing movement that began in the 1960s. Chen (2012) also summarized the relevant literature, and based on that review, provided a graphical depiction of the relationship between instructional sensitivity, instructional validity and curricular validity shown in Figure 1. (emphasis mine)

You can click through to see the graph (and welcome to my world these days). But I'll summarize: the relevant question is whether the PARCC is reflecting school-based effects or something else. And the answer is: "we don't know."
Morgan Polikoff and I have tangled a bit on social media (he didn't much care for my bluntness in calling institutional racism what it is). But he is a smart guy and he makes a very good point here:
The success of schools under standards-based reform depends on the performance of students on state assessments of student achievement. The reform’s theory of change suggests that, with high-quality content standards and aligned assessments, teachers will modify their instruction to align to the standards and assessments, and achievement will rise (Smith & O’Day, 1990). Dissatisfied with the performance of the No Child Left Behind Act (NCLB) to this point, researchers and policymakers have begun to advocate for and work toward common standards and assessments that would, presumably, be better aligned and more likely to encourage teachers to modify their instruction. However, there has been little discussion of the last part of the theory of change—that changes in instruction will result in changes in performance on assessments of student learning (Popham, 2007). Indeed, though millions of dollars are spent on developing and administering assessments of student learning each year across the 50 states (Goertz, 2005), there is little evidence that these assessments can detect the effects of high-quality instruction. Instead, it is assumed that assessments aligned to standards through one of the various alignment procedures will necessarily be able to detect instructional quality. This key omitted property of an assessment is called its instructional sensitivity, and it is the focus of this paper. [emphasis mine]

This is an interesting read and well worth some quality nerding-out time. Polikoff's point is that if we're going to use the test to judge instruction, we ought to have some idea as to whether the results vary as instruction varies.
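To make that concrete, here's a toy simulation of my own devising -- nothing in it comes from Polikoff's paper or the PARCC brief, and every coefficient is invented for illustration. It contrasts two hypothetical tests: one whose scores actually move with instruction quality, and one whose scores are driven almost entirely by out-of-school factors:

```python
# A toy illustration (mine, not PARCC's or Polikoff's): simulate classroom
# scores on two hypothetical tests. One responds to instruction quality;
# the other is driven almost entirely by out-of-school factors (a stand-in
# "SES" variable). All weights below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_classrooms = 500

instruction = rng.normal(size=n_classrooms)  # quality of instruction
ses = rng.normal(size=n_classrooms)          # out-of-school factors
noise = rng.normal(size=n_classrooms)        # everything else

# "Sensitive" test: instruction carries real weight in the score.
sensitive = 0.6 * instruction + 0.5 * ses + 0.3 * noise
# "Insensitive" test: almost all SES, barely any instruction.
insensitive = 0.05 * instruction + 0.9 * ses + 0.3 * noise

for name, scores in [("sensitive", sensitive), ("insensitive", insensitive)]:
    r = np.corrcoef(instruction, scores)[0, 1]
    print(f"{name:>11} test: corr(instruction, score) = {r:.2f}")
```

Run it and the "sensitive" test's scores correlate strongly with instruction quality while the "insensitive" test's barely budge -- yet both could look perfectly "aligned" to the standards on paper. That's exactly the property you can't establish from alignment studies alone.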
What this PARCC brief is tacitly admitting, however, is that we have no evidence that the PARCC measures the efficacy of instruction. By listing instructional sensitivity as an avenue for future research, it concedes a gap in our knowledge about the PARCC: we don't know whether its results are sensitive to instruction. And yet the pro-PARCC folks keep making claims as to its superiority as an accountability assessment. Wouldn't it make sense to verify this claim before making it?
Look, I'm not trying to single out Alberti; she's only repeating what all the pro-PARCC types have been saying. Over and over they tell us the PARCC is a "better" test, measuring "real" learning. But the truth is there is little to no evidence to back up that claim.
When a parent goes to the PARCC website or a "Take The PARCC" event and comes away with doubts as to whether these tests are appropriate and necessary, she should be met with a better response than "Trust us!" The burden of proof is on the pro-PARCC faction. Either back up your claims, or stop making them.
First, we need to test the test to see if it's ready to test you.