
About The Blogger

Steve Haberlin is an assistant professor of education at Wesleyan College in Macon, Georgia, and author of Meditation in the College Classroom: A Pedagogical Tool to Help Students De-Stress, Focus,...

Considering Teacher Evaluation Models

First, I’m not theoretically against evaluating teachers or the idea of holding professionals accountable. Evaluation systems can provide teachers, new and experienced, with fresh perspectives and consistent feedback that helps them grow and perform at higher levels. I have personally received feedback that dramatically changed my teaching practice. As a supervisor of undergraduates who are training to become teachers, I know the value that comes from being evaluated by others in the field. However, I am uncomfortable with evaluating teachers in inequitable, narrow ways that fail to respect their hard work, experience, and dignity.

In recent years, school districts have embraced formal evaluation models based on work created by Marzano, Danielson, and others who have proposed criteria to determine whether teachers are being effective in the classroom. As if the pressure wasn’t enough, some districts have tied ratings on these evaluations to performance pay and bonuses.

If we are going to place so much emphasis on evaluation models, then I think it’s equally important to critically reflect on their consequences. As a former elementary school teacher, I’ve had the pleasure (and pain) of experiencing at least one of these evaluation models firsthand. I have felt the nervousness, the pit in the stomach, as I thought about how best to plan a lesson under these conditions. Like so many other teachers, I have taught under the pressure of being observed using such evaluation criteria, and I experienced how data collected with an evaluation tool was used, for better or worse. Based on these experiences and my own research, I present the following concerns and food for thought.

Who Decides What Criteria Demonstrate Effective Teaching?

I want to acknowledge that evaluation model criteria are not just thrown together but are normally (hopefully) based on years of research and application. For instance, in the case of the Marzano Teacher Evaluation Model, the group’s website boasts that the tool was the result of 5,000 studies over five decades, along with other research, including correlation analyses between teaching strategies and student achievement. One advantage of a school district using established criteria, in the form of a rubric, is that it provides stakeholders with a common language, a shared vocabulary. For example, a principal talking with teachers can refer to that common language when communicating expectations.

The question, then, is whose language is it? If a school district has decided to adopt a particular evaluation model without input from the classroom teachers who are being evaluated with it, then it is not a shared language; it is an imposed language. Without any voice, a teacher is forced to “fit” his or her teaching style, pedagogy, philosophy, resources, and environment to these imposed criteria. I have known, for instance, very competent teachers whose styles did not align well with the adopted evaluation tool. In one such case, the teacher leaned strongly toward a teacher-directed classroom while the evaluation model favored a student-led classroom. Who is to say one is better than the other? If the teacher is able to produce results (e.g., high student performance, engagement, improved test scores), should that not be the deciding factor in how a teacher teaches? Another gap I found in evaluation model criteria is the lack of consideration for a teacher’s individuality, creativity, and strengths. Are these not important factors in teaching?

Narrow Windows of Observation

The second concern with evaluation models involves data being collected in narrow, limited ways. To put this in perspective, a teacher might instruct 180 days a year, logging hundreds of hours in the classroom. Yet an evaluation model may require the teacher to be observed only two or three times a year, for a maximum of three hours, and final summative evaluations are determined from data collected during those two or three hours. Basically, the teacher is asked to pack a school year’s worth of experience, knowledge, and performance into a very small window. While observing a teacher several times during the academic year is better than observing once, I think teachers would be better served if evaluation decisions were based on long-term data that more accurately captures their performance. For instance, teachers might be observed several times but also allowed to complete portfolios featuring video-recorded lessons, student work, reflections, and other information that observers could use to make informed decisions. Due to logistical realities, observers might not be able to physically visit classrooms more than a few times a year, but through technology such as FaceTime, Skype, or video recording, they could gather additional evidence of whether teachers are meeting district expectations. With a more holistic approach to gathering evidence, I believe a teacher’s experience, hard work, and skills would be more fully respected.

Again, I’m not entirely against the notion of evaluating teachers. However, we must be mindful of the impact of such models and reconsider how they are being implemented — particularly if they go against the original intent of the model’s creator. Like an evaluation model, a shovel is a tool. It can be used to bang someone over the head or it can be used to build great things for people. It’s all in the approach.