Monday, July 28, 2014

SLO Descent Into Madness

Evaluating teachers is hard. Avid Slacker Guide readers have heard this one before (see also: here and here). It's tempting to say that because it is so hard, we shouldn't bother. However, the quality of a teacher has a huge impact on the quality of a student's education, which means that we can't just leave this topic alone. Moreover, weak teachers drag all teachers down by association and by way of increasingly silly policies instituted to keep bad teachers from being so bad. Overall teacher quality is simply something that everyone must be invested in.

Consequently, state boards of education rarely implement any method of education reform without also trying to shoehorn in a little teacher evaluation. For example, standardized tests (see also: here and here) aren't a bad idea on their own: let's see how things are going by putting all students through the same measurement device. The problem arises with the inevitable next step, in which we decide that test scores are a good way to evaluate teachers.

Similarly, differentiated supervision was devised as an alternative way to assess teachers. The concept, as best I can work out, is that a once-a-year visit to a classroom is a poor measurement of what's happening on a daily basis, so let's do something else. The "something else" could be a book discussion group, development of common assessments (i.e. all Algebra I Honors students take the same final exam), or a research project of some sort. At this point things can get a little strange. Apparently, the thinking is that if you can conduct a proper research-based project, you have proven your effectiveness as a teacher. This is a little gem that we've taken from higher ed, even though I'm not really sure it's been proven there.

Out of Alternative Assessment® and Differentiated Supervision® came the Student Learning Objective, or SLO® (not to be confused with SOL, or the SOL). Like so many other innovations in education, the underlying concept is sound: have teachers set a goal for their classroom (or classes--for the sorry souls who push carts around) and figure out a way to measure progress toward that goal. Of course, PDE (not to be confused with PDE--I can do this all day) takes the next logical step, which turns this good idea into a bad one. Since PDE has accepted the fact that there will never be a Keystone (okay, one more--see also: here) for certain subjects like chorus, shop, gym, or lunch, there must be a replacement measure for test scores to evaluate teachers of those subjects. So, PDE decided to link SLOs to teacher assessment.

The true genius of this plan is that teachers design their own SLOs, and the means of measuring whether the goal has been attained.

One of the most unexpected and wonderful moments of my teaching career happened just recently. I encountered my very staid principal quoting my very snarky analysis of the SLO to another teacher. Sadly, she has since retired from her position. Less sadly, though, one of the stated goals that she plans to accomplish in her retirement is to go to Harrisburg and get the Legislature to stop screwing up education. If she quotes me to a state senator, I may just declare victory and retire myself. 

My brilliant analysis (now hopelessly built up to the point that it will never measure up to your expectations) is that the SLO is a three-step process:
  1. Teachers choose a goal.
  2. Teachers devise a measurement device for this goal.
  3. Teachers assess whether the goal has been met.
It does seem a perfect system from the Slacker teacher's point of view: "Wait, you mean not only do I grade my own test, but I also make the test, and figure out the scoring system?"

Yes. Except...

PDE has figured out a way to make sure that teachers can't win, even with this goofy "grade-yourself" system. You see, teachers will be scored on a three-point system, but it has been made very clear to administrators that they are rarely, if ever, to award a "three." This means that the majority of teachers will achieve at best a 66.7%. In my school that's an F, but even in normal schools it comes out at something like a D. I am absolutely sure that you will read a news story late this fall breathlessly decrying the sad fact that the average rating for Pennsylvania teachers is in the neighborhood of a D-, with only a tiny percentage even passing.
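If you want to see the ceiling for yourself, the arithmetic is short enough to fit in a few lines. This is just an illustration of the percentages described above; the function name and the grade cutoffs are mine, not PDE's:

```python
def slo_percentage(points_awarded, points_possible=3):
    """Convert a score on the three-point SLO scale to a percentage."""
    return 100 * points_awarded / points_possible

# If a "three" is effectively off the table, a "two" is the best
# realistic outcome:
best_realistic = slo_percentage(2)
print(f"Best realistic SLO score: {best_realistic:.1f}%")  # 66.7%
```

On a typical 10-point grading scale, 66.7% is a D; under a stricter scale where passing starts at 70%, it's an F. Either way, the system guarantees that nearly everyone's ceiling is a failing-to-barely-passing mark.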

I will admit that the previous scoring system was equally bogus. The rating system was so vague and ineffective that most administrators assigned most teachers a perfect score. Not pretty good, not well-above-passing, but a perfect score. If your assessment system yields nearly 100% of those tested with perfect scores, or nearly 100% of those tested with failing scores, you need to re-learn how to build assessments. You would think that an entity like the Pennsylvania Department of Education, which is in the business of designing high-stakes assessments like the PSSA and the Keystone, would know something about how to do this. It appears that they do not.

To be a normal Slacker Guide post this should really get another 750 words or so, but that's pretty much it. PDE has developed a way to improve effectiveness in teaching--teachers setting goals, and monitoring progress toward those goals--and then screwed it up by tying it to assessment and developing a scoring system that is no-win for just about everyone.

In other news, I did take a choral conducting class recently that could serve as a model for a better method of teacher evaluation and improvement. All it would require is nearly fifty-five hours of staff development time, an acknowledged expert master teacher, and participants willing to take what they have been doing thus far and leave it entirely at the door. In other words, something like the student teaching model applied to people like me with twenty years in the profession.

Student teaching was the last time that most of us were scrutinized on a daily basis, were subjected to a constant stream of feedback, and worked with a seasoned professional who had special expertise in our subject area. As long as we can find effective mentors for all teachers and provide the time and space for the teacher and mentor to spend considerable amounts of time together; as long as the mentor teacher has the kind of gravitas and credentials that allow the mentored teacher to feel comfortable starting pretty much from scratch with this person; and as long as this process can play out away from the classroom, where it could potentially undermine the teacher's authority; as long as we're willing to invest that kind of effort, this could really work.

Or we could track test scores and assign ratings to teachers' research projects.
