School scorers may not be asking enough qualitative questions
Did you get into more than one post-secondary school this year? Are you having trouble deciding which one to choose?
It’s only natural to be in a frenzy, enlisting the help of anyone with an opinion to help you decide where you should spend the next four years of your life. When it comes to useful guides, here’s one thing you can dismiss right away: rankings. Or at least the ones performed across the world stage.
Global university rankings have been around since the early 2000s, published each fall at the start of the school term. Every year, upon their release, a media circus ensues, when one prestigious institution wins the coveted title of best university in the world.
This year, Oxford University reigns supreme as number-one, granted first place by Times Higher Education, a UK-based magazine covering higher education worldwide.
But MIT (the Massachusetts Institute of Technology) is claiming the title, too, having been declared the winner by QS World University Rankings. Interestingly, QS was formerly in partnership with Times Higher Education in an organization called THE-QS until their abrupt separation in 2009.
Then there’s the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, the first organization to publish such rankings, in 2003. This year it gave top honours to Harvard University.
So which university really holds the top spot? The divergence in results among the major ranking sources makes you wonder about their methods. Which one produces the most accurate results?
According to Michael Zryd, associate dean of graduate studies at York University, “the problem with ranking systems is the ranking system itself. It proposes that number one is better than number 10, and it ignores the qualitative differences between universities.”
A major qualitative measurement largely ignored by world university rankings is an institution’s teaching quality. Both THE and QS attempt to account for this valuable indicator by measuring student-faculty ratios and distributing discipline-specific academic reputation surveys.
Phil Baty, editor of THE World University Rankings, justifies this approach. “We’re asking experts on the ground, based on their direct experience and their subject-specific expertise, to tell us which departments at which institutions are doing great teaching.”
Zryd counters that such generic techniques have obvious shortcomings.
“The problem with reputation is that it accrues over time. It is heavily weighted toward old institutions that have had time to develop a name. It’s something that accumulates.”
The overall faculty-student ratio may give some indication of class size, Zryd says, but it overlooks independent tutorials, where massive classes are often broken into smaller groups and students get individualized attention from tutorial leaders.
“You’d be better off looking at values like teaching resources,” he adds. “How well does the university train its faculty? Does it have provisions for training its graduate students, who are tutorial leaders, as well as its faculty?”
There’s no real consensus on these questions, especially across disciplines, where subject-specific ecosystems govern the quality of education received by students.
This lack of consensus is also reflected in the methods used to assess research. While global university rankings are increasingly attentive to humanities- and social-science-centred research, their preference for science-centred institutions is undeniable.
When factors like research income are included in ranking methodology, the hefty grants awarded to medical science research give an obvious advantage to STEM-based (science, technology, engineering, mathematics) universities. And although the number of citations (references to published work by a researcher’s peers) constitutes robust data on the quality of work being performed at a university, their calculation must be sensitive to the ways social science and humanities research varies in distribution and publication.
Growing concern over the legitimacy of global university rankings doesn’t mean they’ll disappear any time soon. They’re extremely useful to the elite “world-class” education centres competing for top student applicants.
Baty insists, “The data is of value to national governments and policy makers seeking to assess the strength of their position in the global knowledge economy, to university leaders setting strategy, to faculty and other staff trying to make career choices and to many of the several million students who study outside their home countries.”
“The problem is when the tail starts to wag the dog,” says Zryd.