Course evaluation, what is it good for?

By Matthijs Krooi

A few months ago, this blog featured an excellent post about bias in teaching evaluations, especially with regard to age and gender. It is a sobering story about a practice of performance measurement that is very common at universities, and that tends to be important for individual careers. I can still remember how, as a recently graduated tutor in the BA Arts & Culture, my self-confidence was affected by both positive and negative teaching evaluations, without my truly understanding what the results meant or what I could do with them.

Now, almost ten years later, I’m still trying to make sense of student evaluations, but this time in the context of education policy. Not only do these evaluations impact individual teachers, they also shape our collective understanding of education quality. Most people will agree that it is good to have a feedback mechanism, and that student surveys are a low-cost, efficient way of organising feedback. Yet we also encounter problems that call the robustness of course evaluations into question, such as biases, low response rates, and contradictory feedback.

On top of these methodological issues, the standard practice of course evaluation also has a validity problem. We live in an era in which many in academia are used to treating the concepts of ‘the quality of education’ and ‘student evaluations’ as synonyms. Yet this goes against the daily experience of most teachers: learning is a complex process that cannot be reduced to the average of student opinions collected at the end of a course. For instance, some learning experiences take a while to sink in. Who hasn’t come to appreciate something they learned as a student only years after graduating? It is understandable, then, that many academics have developed a degree of scepticism towards student evaluation surveys, or towards quality assurance in general. While surveys can provide valuable insights, we should recognise their limitations.

So, how can we think about course evaluation in a more meaningful manner?

First, in line with a broader wave of research on quality assurance of teaching and learning, we should be aware that education quality is a complicated, multidimensional construct that can mean different things to different people. Student satisfaction can be one relevant dimension, but, in the words of Dolmans and colleagues, it is crucial to remember that “no single measurement should be used when evaluating courses or teachers”. Not only does a single metric say little about the broader notion of education quality, it also invites unwanted ‘gaming’ behaviour that can actively undermine it. For instance, many students will probably like it when you replace tutorials with mini lectures focused heavily on the exact content of the exam; yet that runs directly counter to our active, collaborative, problem-based learning approach. A broader understanding of quality includes other dimensions as well, such as qualitative input from tutors, colleagues, programme committees and educational experts, and can draw on alternative measurements, for instance focus groups or peer observation.

Secondly, these different dimensions or inputs should be documented, because they otherwise remain invisible: only what is seen can be valued. For all their limitations, student evaluation surveys at least produce output that is highly explicit and transparent. To improve the status of qualitative input, the more implicit elements of quality also need to be made sufficiently explicit.

Thirdly, we can shift our collective focus from accountability to improvement. The input, whether it is student feedback or something else, is not an end in itself, but a means to help us think about improving our education. Accountability approaches tend to demotivate teaching staff, while a focus on improvement gives teachers the freedom to interpret the information at their disposal and turn it into meaningful action. Of course, this requires an organisational culture that values learning opportunities as much as successes. Recognising these learning opportunities is not easy, but it is worthwhile.

In short, instead of relying on a single metric, we can choose to see education quality as something that we can fit to our needs and context: quality can be as much outcome as process. While it may seem attractive to focus on things that are easily measured, we must recognise that quality can also be about inspiration, innovation, team learning, and dialogue.

If I could go back in time to give my younger self some advice on what to make of these student evaluations, I would tell him not to worry: a single ‘bad’ evaluation does not necessarily make him a bad tutor, but it does make him someone who has something to learn. While it is not always fun, I’m convinced that there’s no better place to learn something than at a university, even when you are no longer a student.

About the author

Matthijs Krooi has experienced FASoS in different capacities: as a student, a tutor and a policy advisor. He currently works as a policy advisor at Maastricht University Office (Academic Affairs), specialising in quality assurance of teaching and learning. Alongside his regular job, Matthijs conducts research into how the concept of ‘the quality of education’ is translated into institutional quality assurance systems and practices. This blog is written in his personal capacity.