Evaluation of Teaching: But What Kind? Pathways Spring 2020, 32(3)


WILD PEDAGOGIES STORIES

Evaluation of Teaching: But What Kind?  

By Bob Henderson     

 

Early in my career as an outdoor educator at a Canadian university, I ran into a conflict with my department superiors over philosophies and practices with significant implications for teaching evaluations. It was the mid-1980s. While the conflict was unfortunate, the lesson was a good one. I learned, early on, that the ways of one's field of teaching most often need to come before the conventional ways of the institution. This lesson helped shape my career at the university.

 

I learned another lesson too: focus on the student experience, not one's colleagues' opinions. Such is the topic of this discussion: teacher evaluations.

 

My department had a five-question Likert-scale evaluation of teaching form (a scale of one to five, with answers ranging from strongly agree to strongly disagree). This was common practice in the 80s and 90s. Here are the questions by which all teachers were to be evaluated, from outdoor experiential-based educators to dance and exercise physiology professors:

 

  1. The instructor demonstrated mastery of the subject material in this course.
  2. The teaching methods used by the instructor were clear enough for me to follow them and learn from them.
  3. The instructor succeeded in stimulating my interest in this subject.
  4. The instructor was well organized and followed an explicitly detailed course outline.
  5. The instructor’s responses to essays, tests, performance, etc., demonstrated concern for my learning.

 

Following the numerical responses (1-5), each student could provide specific comments on individual questions, in addition to a final open-ended opportunity to comment more generally.

 

The course in question, the one that inspired the conflict, involved an eight-day field trip (canoe trips in groups of nine), followed by student-selected group projects/presentations and individual work/presentations, with topics determined in agreement with the professor (me), usually in one-on-one meetings. Class size was usually 40 students.

 

I “taught” largely outside of class time, in meetings to advance students’ ideas. The post-trip component of the course involved a few lectures/discussions and, early on, an urban field trip. This all set the stage for the student-directed work and presentations in a three-hour time slot once a week.

 

Post-trip, the students all know each other, which allows for an esprit de corps that is rare in university classrooms. It would feel, dare I say, criminal to follow up a longish group field trip with a standard university transmission-based professor-to-student lecture. If the course has a group field component as its centrepiece, then the in-class component and evaluation need to follow suit in kind and reflect experiential/inquiry-based modes of teaching and evaluation: hence the group work and individual projects.

 

Students even had a hand in evaluating their own projects and the group projects. This was new for me and for the students. Some students resisted the unconventionality; others (most) embraced it. Yes, it was all unconventional and frowned upon by convention-driven colleagues, but this method was then, and of course remains, a more “traditional” way of learning over the centuries. But that’s another topic. Back to the evaluation forms.

 

I quickly got used to scoring modestly, or even poorly, on the numerical aspect of the teacher evaluation questions, particularly #2 and #4. I assumed that by third and fourth year, students were overly familiar with the conventions of university schooling: teacher-centred lecturing, some discussion, and multiple-choice evaluations in large classes (50-300 students). Either the experientially grounded teaching methods and evaluation were daunting for some and totally embraced by others, or I really wasn’t “clear enough” or “well organized.”

 

However, the written responses to the five questions were generally glowing. For example, for question #2, a common response went something like this: “Yeah, the teaching method was so different as to be unclear for us at times, BUT what a pleasure to have so much responsibility for what we learn, how we learn it, and how it is evaluated. Refreshing!” (Score: 2 out of 5).

 

A common response to the call for any other comments at the end was: “Best course I’ve taken.” Thankfully, the written comments and the numerical evaluation were both considered in terms of merit. But what if that were to change? (And it did, with a later evaluation form.) What if only the numerical aspect of the evaluation were taken into consideration for merit and promotion purposes? I’d be scuppered! I hung my career hat more on teaching than research.

 

Taking all this into consideration, the experiential modality worked against the grain of common university practice, but it was very successful in terms of student experience. I learned to focus on students, not the opinions of my colleagues. None of this should seem unusual to an outdoor educator in an institutional school setting.

 

One day, it came to me that I should play a bit with the Likert-scale evaluation and even conduct a little research project. And so I fashioned another set of questions that addressed the same points of practice, but from a standpoint that makes sense for experiential methods. The additional form was made as identical as possible to the standard required form. Here are the questions:

 

  1. The course allowed me to develop some mastery of the subject.
  2. The teaching methods we employed in the course allowed me to pursue a relevant and valuable learning experience.
  3. The course was challenging and engaging for me. 
  4. The course unfolded in a meaningful and relevant way. 
  5. The instructor communicated concern and care for my learning during the course.

 

 

Students were given both “course” evaluation forms with as little fanfare as possible. They could choose which form to fill in first. I am certain that, for some, the exercise itself was a useful pedagogical experience, even an awakening to the principles underlying the course in question. What I hoped the students would see, even as a minor epiphany, was that one form was teacher-centred while the other was, by necessity and the logic of the experience, a “course” evaluation.

 

I planned to administer the two evaluation forms for five years and then compare the numbers. I expected the written comments not to change much; it was all about the numbers. Sadly, after two years, the department changed the form entirely to a computer-generated form system, so I dropped the research project idea. Still, the results of the two years were enough to prove my point, if only to myself.

 

The student-centred (even nature-centred), course-specific questions of my own design, suited to the outdoor education course, were generally answered with a four or five rating on all five questions. It was also easier for students to offer praise in their written comments because the questions more directly reflected the course methods. On the departmental form, I still got a two, three, or four, with comments containing the common “but” clause, which showcased the fact that the department form did not relate to the teacher and course it was meant to evaluate.

 

I pointed this out to the appropriate colleagues who oversaw teacher evaluations and the related merit and promotion decisions, but it was clear this was awkward terrain not to be entered into. One evaluation form was fine; we all teach the same. That was the bottom line. To be fair, this story is set in the late 1980s. It might be different now, though I suspect there is much “sand in the machinery” for the experiential educator who is attempting to match field trips and classroom pedagogy within an institutional education setting.

 

The lessons of this 1980s evaluation forms story remain relevant today for a couple of reasons. First, the teaching methods of outdoor experiential learning might not be in sync with institutional criteria (e.g., evaluation forms), and this will likely demand attention and, I might add, more work from the educator. Second, it is best to focus primarily on the quality of the student experience and to deal with the fallout of collegial angst or institutional negativity as a secondary concern.

 

Bob Henderson works on a variety of outdoor education projects and in a variety of settings. He is a Resource Editor for Pathways.

 

Endnote:

 

For a full treatment of a transmission curriculum and its differences from the transformational curriculum central to this two evaluation forms story, see J. P. Miller, The Holistic Curriculum (Toronto: OISE Press, 1988), 4-7.
