It is hard to imagine a widely used instrument as ill-suited to the purpose to which it has been put as student opinion surveys are to measuring teaching effectiveness. Leaving aside the statistical questions involved in drawing inferences from a single Likert item with a multivariate noncentral hypergeometric distribution, these surveys cannot provide accurate information about the skill of the instructor or the effectiveness of their teaching. They tell us, instead, about the students themselves.
We have learned that student perceptions of a course bear a tenuous, sometimes negative, correlation with teaching effectiveness. We have learned that these perceptions are significantly affected by gender, racial, and other biases. We have learned that they are significantly influenced by class size, class averages, course content, the accent, dress, or perceived age of the instructor, and even by whether chocolate is provided prior to the survey.
Expert reports on the data include those by Richard Freishtat and Philip B. Stark, submitted in the Ryerson Arbitration on this issue (the Arbitrator’s award can be found on the CanLII website). The Ryerson Arbitrator ruled that, based on such uncontroverted evidence about the bias and non-probative value of student opinion surveys, those surveys were not to be used to measure teaching effectiveness for promotion or tenure.
UBC likewise knows full well that the instruments we use are fatally flawed as a measure of teaching effectiveness; no mere tinkering can save them. While the Association has no problem with the surveys as measures of student experience or perception, we have proposed that our student opinion surveys not be used for summative purposes. In this we join a growing number of universities that have reached the same conclusion. In addition to Ryerson, Algonquin, Rutgers, the University of Alberta, the University of Oregon, and the University of Southern California are already leading the way. As Michael Quick, Provost of the University of Southern California, said, “I’m done. I can’t continue to allow a substantial portion of the faculty to be subject to this kind of bias.” (Inside Higher Ed, May 22, 2018). We know the feeling.
We do know why universities have relied on such surveys in the past: they are simple, cheap, easily assembled into single numerical scores, and they provide the illusion of scientific validity. Whatever alternative tools we develop to judge teaching effectiveness will not be as simple or as cheap. But in a university that cares about both valid data and teaching, the surveys need to be replaced as a summative measure of our teaching effectiveness.
UBC faculty members know from experience that students’ reported perceptions and experiences can be helpful to us as we design and redesign our courses; we are thus willing to see student surveys used for these formative purposes. We think this is what students want most, and it is what we want as well: to make our teaching better. And as we reimagine how we might accurately assess teaching effectiveness, we can also think more deeply together about what effective teaching actually is and how it is fostered. Without the crutch of flawed data, we can become a leader in more accurate and more creative approaches to the measurement of effective teaching. And that is what a university like ours should be doing.