How good are our existing measures for assessing the ‘value-added’ of different university lecturers? According to a new Canadian paper, those much-maligned student ratings do a better job of predicting value-added than faculty rank.

Professor Qualities and Student Achievement

by Florian Hoffmann, Philip Oreopoulos

This paper uses a new administrative dataset of students at a large university matched to courses and instructors to analyze the importance of teacher quality at the postsecondary level. Instructors are matched to both objective and subjective characteristics of teacher quality to estimate the impact of rank, salary, and perceived effectiveness on grade, dropout and subject interest outcomes. Student fixed effects, time of day and week controls, and the fact that first year students have little information about instructors when choosing courses help minimize selection biases. We also estimate each instructor’s value added and the variance of these effects to determine the extent to which any teacher difference matters to short-term academic outcomes. The findings suggest that subjective teacher evaluations perform well in reflecting an instructor’s influence on students while objective characteristics such as rank and salary do not. Whether an instructor teaches full-time or part-time, does research, has tenure, or is highly paid has no influence on a college student’s grade, likelihood of dropping a course or taking more subsequent courses in the same subject. However, replacing one instructor with another ranked one standard deviation higher in perceived effectiveness increases average grades by 0.5 percentage points, decreases the likelihood of dropping a class by 1.3 percentage points and increases the number of same-subject courses taken in second and third year by about 4 percent. The overall importance of instructor differences at the university level is smaller than that implied in earlier research at the elementary and secondary school level, but important outliers exist.
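The abstract's value-added approach (regressing outcomes on instructor indicators while absorbing student fixed effects) can be sketched in miniature. This is not the authors' code; it is an illustrative simulation in which each student's grades depend on their own ability plus their instructor's true value-added, and the instructor effects are recovered by within-student demeaning:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_instructors, courses_each = 500, 20, 6

# True (unobserved) parameters: student ability and instructor value-added
ability = rng.normal(0, 1, n_students)
value_added = rng.normal(0, 0.5, n_instructors)

# Each student takes several courses with randomly assigned instructors
student = np.repeat(np.arange(n_students), courses_each)
instructor = rng.integers(0, n_instructors, len(student))
grade = ability[student] + value_added[instructor] + rng.normal(0, 1, len(student))

def demean_by_student(x):
    """Subtract each student's mean, absorbing the student fixed effect."""
    sums = np.zeros((n_students,) + x.shape[1:])
    np.add.at(sums, student, x)
    counts = np.bincount(student, minlength=n_students)
    means = sums / counts.reshape((-1,) + (1,) * (x.ndim - 1))
    return x - means[student]

D = np.eye(n_instructors)[instructor]   # instructor dummy matrix
y_t = demean_by_student(grade)
D_t = demean_by_student(D)

# Least squares on the demeaned data; instructor effects are identified
# only up to a common constant, so compare via correlation with the truth
est, *_ = np.linalg.lstsq(D_t, y_t, rcond=None)
corr = np.corrcoef(est, value_added)[0, 1]
print(f"corr(estimated, true value-added) = {corr:.2f}")
```

With random assignment and many grades per student, the estimated effects track the true ones closely; the paper's harder problem is that real course choice is not random, which is why the authors lean on first-years' ignorance of instructors and on time-of-day controls.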

Maybe we should be paying more attention to the way ANU academics perform on Rate My Professors?


Many years ago, when I was evaluated by students, I did surveys at the end of the final exam. Students received a mark for completing the survey. There was a strong correlation between the grade students scored on the exam and their evaluation of me. One interpretation is that students mark you according to how they think you are going to mark them. If that is the case, we should take this effect into account and perhaps not use student opinions to evaluate the effectiveness of lecturers, since the students are partly evaluating their own performance. (I did not look at the surveys until after I had marked the exam, to eliminate that bias. :)

The other interesting point was that the one question, “Evaluate the overall performance of the lecturer”, almost always correlated highly with all the other questions. In evaluations, that one question plus some open-ended anonymous comments will give more useful information than lengthy closed questionnaires.

Wouldn’t it be great if we had a reliable measure of value-added?

Sometimes I wonder whether asking whether or not someone’s a good teacher is too simple a question. Maybe the match between teacher and student matters too?

Looks rather hard to figure out how ANU academics perform on ratemyprofessor: there’s very little on any of you lot – only 17 ratings.

The question of value added does matter, given that higher grades/easier courses do tend to be associated with a higher rating. Maybe one could look at whether the students who had good (or well-liked) teachers in first year got higher grades in the subsequent courses? But the results do look a fair bit like those at the secondary level, don’t they? Match quality obviously makes a difference (esp wrt hard vs easy courses/teachers and diligence of the students), but averages still matter.

It would be fun to look at how the teacher ratings from class surveys line up with ratemyprofessor ratings – casual observation suggests that the latter has a much higher variance, with smaller samples, and pretty serious self-selection into the sample. Also fun would be to see how the ratemyprof hotness rating is correlated with teacher effectiveness, and what direction the causation runs (will have to suggest that to Florian and Phil some time).

The best thing in ratemyprofessor is the comments: some of them are just hilarious, to the extent that you’d have to be flattered by even some of the nasty comments, because of the amount of time obviously spent on thinking them up.

Kevin, it’s hard to know what’s going on with the correlation between ratings and scores in your course. One could tell a story in which they were all equally smart, and therefore they graded you on how much you taught them, or one in which you taught them all the same amount, and they were just being churlish.

Don & Christine, I’ve been trying to get ANU to look at value-added, but the matching turns out to be non-trivial.

Christine, Dan Hamermesh has a paper (now in Economics of Education Review) in which he shows that better-looking professors get higher teaching ratings. The magnitudes are large, too.

Andrew: is matching at ANU likely non-trivial because of the massively hopeless admissions/student bureaucracy? Studied there for a bit in the mid-90s, and I honestly think I had to get 3 different numbers or something, from offices that were only open in the daytime. I do know of the Dan Hamermesh paper – I told one of my classes about it last year and suggested they consider their biases when filling in evaluations. (This was in a course where I’d previously discussed the returns to education/experience/good looks, etc, so it was actually relevant.)

I thought it really funny when I looked up the Dean of our College, who had not taught in two years but had two GLOWING posts on Rate My Professors in the past year. Garbage in… garbage out.

I think it is up to the readers of Rate My Professors to make their own decisions. The more information I have about an individual, the better; students are able to assess whether an instructor is worth having or not. If a student has been failed, he should be able to share his point of view; I would rather be warned than not warned.