I will protect your pensions. Nothing about your pension is going to change when I am governor. - Chris Christie, "An Open Letter to the Teachers of NJ," October 2009

Thursday, October 3, 2013

Another Reformy Practice Not Grounded in Research: SGOs

Like all teachers around New Jersey, I have been attending workshops and working with my administrators and colleagues on a new, reformy project: Student Growth Objectives. My SGO will count for 15% of my annual evaluation - a percentage mandated by state code - because I teach an "untested" subject: in other words, I don't teach math or language arts between 3rd and 8th grade, just like somewhere around 80% of all teachers.

In my SGOs, I have to demonstrate "student growth" using a rubric approved by my supervisor, following a format informed by the state Department of Education. And like every other scheme dreamed up by Chris Cerf's NJDOE, the department insists there is a research base for all this:

State officials stressed that among the greatest values in the process may be teachers working with their supervisors when developing SGOs.

“The research is showing there is an inherent value in that collaboration,” said assistant commissioner Peter Shulman. “They are setting specific targets for their specific students’ needs.” [emphasis mine]

Once again, NJDOE makes a claim about research backing up its plans without actually pointing us to the research it is citing. Certainly, it isn't anything that eminent education researcher Howard Wainer has ever seen:

Even if educators choose to create their own tests for SGO purposes, there are other things to worry about. Sure, teachers make up tests all the time, but as noted researcher Dr. Howard Wainer explains, those tests usually have two purposes: to push the students into studying and to see if the course of future instruction needs to be adjusted.

“But when you add a further purpose – the formal evaluation of the teacher and the principal – the test score must carry a much heavier load,” says Wainer, author of Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies (Princeton University Press, 2011). “Even professionally developed tests cannot support this load without extensive pre-testing and revision,” something that takes a lot of time and a lot of money.

That leaves portfolios, another idea that Wainer believes “only sounds good if you say it fast.”

“When portfolios were used as part of a statewide testing program in Vermont about 15 years ago, it was a colossal failure,” he recalls. “It was unreliable, unpredictable and fantastically expensive,” and soon, state officials abandoned the program.

What is the lesson to be learned? “Some measurement methods that work acceptably well at a classroom level do not scale,” explains Wainer. “A folder of a student’s work produced for a parent-teacher conference illustrates what is going on and directs the discussion, but when the folder is reified as a ‘Portfolio Assessment,’ we have asked more from it than it can provide. Research shows that portfolios are well suited for one purpose but not the other. What would make New Jersey’s use different?” [emphasis mine]
Oh dear. Well, there has to be some research that NJDOE is relying on; what about their "AchieveNJ for Teachers" webpage?

SGOs (or Student Learning Objectives/SLOs) have been in use in many states and districts for the past 14 years. The following studies and resources offer more information about the implementation of SGOs across the country:

Let's take a look at these links -- but let's go out of order, starting with #2, published just this past spring:

Because there are not yet any research-based models for evaluating teachers’ impact on student learning for non-tested groups, states are experimenting with various approaches (Goe & Holdheide, 2011). [emphasis mine]
Yes, ladies and gentlemen, you heard that right, straight from the NJDOE itself: there is no research base that demonstrates SGOs will aid in effectively evaluating teachers in non-tested areas. This is faith-based teacher evaluation, folks; the document reiterates the point:

Research has shown that setting student learning objectives, in general, has a positive impact on learning (e.g. Beesley & Apthorp, 2010). However, because the use of SLOs in teacher evaluation is a fairly new approach, there is not yet rigorous research available on its effectiveness. Harris (2012) describes SLOs’ “potentially attractive” qualities of allowing for teaching autonomy in setting individualized objectives and customizing instruction accordingly. However, he emphasizes that these same qualities could lend themselves to manipulation and non-comparability and that “there is essentially no evidence about the validity or reliability of SLOs” (Harris, 2012, p. 5). Non-experimental studies conducted on the Austin, Denver, and Charlotte programs described above seem to indicate a positive correlation between the quality of SLOs and student achievement and between the number of objectives met by teachers and student achievement, but more research is needed on the relationship between teachers meeting SLOs and student net achievement (Tyler, 2011).

The first link from NJDOE is a report about the implementation of SGOs in Charlotte-Mecklenburg. Understand: the report says nothing - nothing - about the validity or reliability of using SLOs/SGOs in non-tested subjects. And the report's findings about SLOs in tested subjects look like a candidate for the Mountain-Out-Of-A-Molehill-Inator* (p. 77):
Analysis of SLO Attainment, 2009-10

Cross-sectional HLM analyses are conducted on the attainment of SLOs for elementary and middle school mathematics and reading. The findings are consistent with those on the quality of SLOs.

There is a positive, statistically significant association between attainment of SLOs and student achievement in elementary school mathematics and reading. In terms of z-scores, attaining the SLO growth target increases the z-scores in elementary reading by 0.15 points, and elementary mathematics by 0.11 points. The attainment of SLOs is not a statistically significant predictor of student achievement in either middle school mathematics or reading. [emphasis mine]

Translation from geekspeak: most likely, a student who met the SGOs in elementary math and reading got a few more questions right on the state tests. Shocking, I know...
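If you want to see the back-of-envelope arithmetic behind that translation, here's a quick sketch. (The raw-score standard deviation of about 8 questions is my own illustrative assumption; the Charlotte report doesn't give one. Only the 0.15 and 0.11 z-score gains come from the report.)

```python
# Back-of-envelope: convert a gain measured in z-score units into raw
# test questions. A z-score gain times the raw-score standard deviation
# gives the gain in raw points (here, questions answered correctly).

def z_to_questions(z_gain, raw_sd):
    """Convert a z-score gain to an approximate gain in raw questions."""
    return z_gain * raw_sd

# Assumed (illustrative) standard deviation: ~8 questions on a state test.
reading_gain = z_to_questions(0.15, 8)  # elementary reading, from report
math_gain = z_to_questions(0.11, 8)     # elementary math, from report

print(f"reading: ~{reading_gain:.1f} questions, math: ~{math_gain:.1f} questions")
```

Run it and you get roughly one extra question in each subject - "a few more questions right," as advertised.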

There is, admittedly, a correlation between whether a teacher had a "high-quality" SGO and whether that teacher got a bonus based on a Value-Added Model (VAM) that uses student test scores to judge his or her "effectiveness." (p. 58) But considering how crappy VAM models are, there's not really much for NJDOE to hang its hat on here. And, again: none of this is relevant to teachers in "untested" areas.

I want to be fair here: throughout the report, you get the sense that teachers and principals in C-M were willing to give SLOs/SGOs a fair shake. Everyone agrees that measuring student growth is a fundamental practice of good teaching. Everyone agrees that teachers ought to be judged, at least in part, on their ability to affect student growth. Everyone agrees that all professionals should look at the best practices of their colleagues. The teachers of North Carolina, like teachers in New Jersey and everywhere else, have no problems with being held accountable for their work.

But let's be very clear: there is very little research basis to back up the top-down implementation of SGOs in AchieveNJ. There's no reason to believe these metrics are valid enough, reliable enough, or precise enough to make high-stakes decisions. There's no reason to think using SGOs will substantially increase student achievement in tested or untested domains. And there's absolutely no reason to assign a weight of 15% to a teacher's SGO score.

Teachers and principals and administrators will do what they always do with mandates that are delivered from on high by non-educators: they will work to make them as useful as possible for professional development, and as unobtrusive to student learning as they can.

But that doesn't change the fact that there is very little reason to believe the widespread implementation of SGOs will do much of anything for teachers, administrators, or students.

Accountability begins at home.

* You mean you haven't heard of the Mountain-Out-Of-A-Molehill-Inator?

ADDING: You know, I have to wonder if Charlotte-Mecklenburg's teachers are as enthusiastic about SGOs and the like now that the teaching profession has been destroyed in North Carolina...


Anonymous said...

jj, you didn't mention that in the study in the first link, the teachers were paid a bonus for their SLOs. One would think that for $1,000, some teachers may spend many hours to meet their SLO and may skip things like history and science!

Duke said...

Good point.

Anonymous said...

In Maryland, as part of the Race To The Top grant (enthusiastically supported by the Prince George's County Educators Association), 20% of a teacher's evaluation is to be based upon the Maryland School Assessment test starting this year.
This, despite the fact that Maryland schools are implementing a Common Core based curriculum that will not be aligned with the Spring 2014 MSA.
I asked David Volrath, the person in charge of this at the Maryland State Department of Education, for the research that was used to establish using test scores for any percentage of an evaluation.
He replied that there was none.

Laura h. Chapman said...

A 2013 review of research bearing on SLOs concluded that no studies provide evidence of the reliability or validity of this process for teacher evaluation. No studies provide evidence that the stated objectives for students, if achieved, can be attributed only to the influence of the teacher. Several studies reported problems with the extensive time needed for training evaluators and the costs of monitoring the SLO process. All studies documented unresolved issues in the validity, reliability, and fairness of SLOs for high-stakes evaluations of teachers in a wide range of subjects and job assignments (Gill, Bruch, & Booker, 2013).

Gill, B., Bruch, J., & Booker, K. (2013). Using alternative student growth measures for evaluating teacher performance: What the literature says (REL 2013–002). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Mid-Atlantic. Retrieved from http://ies.ed.gov/ncee/edlabs.

More on this topic by email to chapmanlh@aol.com