Gerry:

 

My experience has been somewhat different. First, most, if not all, students who do the quantitative part also provide comments, and many are quite *specific*. I find them helpful because in many cases they help to account for low, medium, or high quantitative ratings. They also demonstrate the Symbolic Interactionist principle regarding differences in perception, interpretation, and definitions of the situation. For example, some students loved my humor and the way I related current events and personal experiences to sociological concepts, theories, etc. Others preferred that I just "read the book" to them. I could go on with other examples but won't. They all indicated that students have differing "tastes" regarding teaching style and also expectations.

 

I take the qualitative evals seriously and use some of the comments to change the course.

 

Michael

 


From: Gerry Grzyb [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, January 24, 2006 1:53 PM
To: Michael Klausner; [email protected]
Subject: [BULK] Re: TEACHSOC: Re: ESPECIALLY FOR JAY AND KATHLEEN..
Importance: Low

 

At 09:49 AM 1/24/2006, Michael Klausner wrote:

Gerry,
 
They receive BOTH forms at the same time. Most fill in *both* forms. BTW, I have heard the same from many of my colleagues. After I read the comments on my qualitative form, I was expecting many 4's and 5's on the quantitative ratings, but not so. I scored just above the "school mean" on most of the questions. Since promotion, merit raises, etc. are heavily determined by student evaluations, I'm thinking of sending the qualitative responses to the Dean and Chair. However, they give them much less weight than the numbers.
 
Could it be that students are "numbed" somewhat after doing so many quantitative forms that there is an "automatic" averaging effect?
 


Well, I'll share a few things based on my own experiences as a professor and as a member and chair of the college-level tenure and renewal committee here for most of the '90s. I agree with much of what Kathleen said, and I don't want to repeat it.

Until recently, we used opscan all-university student opinion forms (depts and indivs could add their own questions, but few did), and students had the opportunity to write comments at the same time. The most notable thing about the comments was how relatively few were received, and how non-specific they tended to be ("X is the greatest teacher on campus!!!"). Still, as a rule, the comments tended to be in line with the numbers from the opscan questions, at least in general positive/negative terms. And they sometimes--SOMETIMES--gave us clues as to why a particular professor was rated very high or very low, although they were totally useless in explaining any variation within the vast numbers of professors who were neither very high nor very low. They had some limited use, then, in helping a poorly-rated person figure out what was going wrong, and in helping peers to know with some confidence whether a professor was getting very high scores for the right reasons (e.g., we found one professor with very high ratings had little content in his classes and gave almost all As and Bs, but was actually a superior "life coach" who helped a lot of students get their act together instead of dropping out--but you tell me if that's high scores for the right reasons!).

On the tenure committee, we relied more heavily on the quantitative data, but only to the extent that we wanted to know if a person was in the bottom or top decile or so. We were very aware that all sorts of things account for variation in the vast middle, but that if someone scored REALLY low there really was a problem, and if someone scored really high, they probably were doing something right. Bottom line: both qualitative and quantitative data were treated with a fair amount of suspicion. Unfortunately, that fit with a growing emphasis on publication and a de-emphasis of teaching, and the sad result seemed to be that nobody was motivated to get out of student evaluations what was potentially there to be gotten.

The de-emphasis of student evaluations received an unintended kick when a parent at another UW campus insisted -- using the open records law -- on seeing all of the written comments that professors had received from students via the student evaluation surveys. The university would be subject to libel charges if it released anything libelous, so all of the comments had to be scrutinized prior to release and given the official CIA black-marker treatment. Want to guess how many hundreds of hours of labor were involved in that review? I'm not sure if all campuses did this, but ours decided "screw it--we'll never collect such comments again."

Individual professors can solicit comments if they collect them themselves, but the university itself will not solicit them, collect them, analyze them, touch them, or in any way be involved with them. I've since found that if we received few comments when the university WAS collecting them, we receive even fewer now (the last time I openly begged for them in a class of 40, I got 2). It is a rare student who believes student evaluations are taken seriously by anyone, including students (our student government actually did its own surveys a few years ago, and students generally ignored those results as well).

What we're left with here is a short set of opscan questions, which at least have the value of taking less class time than the former set, which was nearly four times larger. But what is missing is any kind of campus-wide commitment to solicit student opinion in a meaningful way and to use it appropriately and with due regard for its strengths and weaknesses as a measure of performance.


Dr. Gerry Grzyb, Chair
Department of Sociology
University of Wisconsin-Oshkosh
Oshkosh, WI  54901

Office: Swart 317A

920-424-2040 (Personal office)
920-424-2030 (Sociology office)
920-424-1418 (Sociology fax)

e-mail: [EMAIL PROTECTED]




 
