
Do the rankings have too much influence?

AS US News & World Report becomes the latest to offer its version of the world’s top 500 universities, complete with regional variations including Europe, we turn the spotlight on higher education rankings with guest blogger ANDREA COSTA and ask whether they have got out of hand.

THE ENTIRE global higher education system seems to have fallen into a giant trap: rankings.

To be fair, this trap is in no small part of its own making (as we will see), but still…

The mere fact that world university rankings are so numerous, with more coming out every few weeks, should make us all a little skeptical about their meaningfulness, at the very least: if measuring the overall quality of a university were easy, there would be no need for so many different attempts.

Mind you, I’m all in favour of making universities fully accountable and transparent. After all, they provide a service of paramount public interest, and most of them are at least partly funded by the taxpayer.

Pressure for openness

Yet only in relatively recent times has the pressure for more openness become strong enough for universities to, more or less willingly, ditch their image of self-serving ivory towers.

Having to compete for prospective students – and their money, either directly or through the government – has had a major role in this cultural revolution, and this is where rankings have worked their way into universities’ boardrooms.

What university league table compilers (I mean THE, QS, ARWU and their copycats) are effectively telling us whenever they churn out a new ranking, usually to great fanfare, is: a) that the overall quality of a university is both clearly defined and measurable, b) that it does not vary that much regardless of the degree and subject you want to study, and c) that it can change between one year and the next. All of which requires, in my opinion, a good deal of faith on the part of the reader.

Astonishingly, that faith seems to be given all too readily.

Implicitly acknowledge the reliability

Universities big and small are all too happy to tell the world how delighted they are to see their efforts rewarded every time they climb even just one or two rungs up the league table of the day. This is understandable, but only up to a point. Because by so doing they implicitly acknowledge the reliability of rankings as indicators of the actual performance of universities. This is why I think they are buying the rope which will bind them ever tighter.

Let’s see why rankings are not what they – and universities – say they are.

The very concept of a table listing a number of elements implies that these elements are inherently similar: for instance, the football teams in your national championship.

Institutions differ widely

But higher education institutions are wildly different, and with good reason. They can be comprehensive or specialised, internationally or strictly regionally oriented, large or small, public or private, and so on.

Besides, what makes a university “better” than another?

There are many answers, all of them true or at least plausible: great teachers (but this in turn is very tricky to assess), top-class research, good connections with employers, infrastructure and technology, you name it. Which of these is most important, and by how much?

What a striking contrast with our football teams: the best is the one that wins the most matches – it is as simple as that!

One-size-fits-all

The truth is that everyone has a personal opinion on what to look for in a university, but rankings have a one-size-fits-all and utterly arbitrary definition of academic quality, and on top of that they assume the information available is equally accurate from America to Zambia.

The next credibility hurdle we are asked to clear is that universities are deemed to be monoliths: whatever quality they have, it is supposed to be uniform across the board. True, this is to some extent corrected by the “subject” rankings, but this correction amounts to adjusting the weightings of some indicators while keeping the very same underlying numbers.
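To see what that means in practice, here is a purely hypothetical illustration – the institutions, indicators, scores and weightings below are invented, not taken from any real ranking – showing how re-weighting the very same three numbers is enough to swap the order of two institutions.

```python
# Toy example with invented data: the same indicator scores produce a
# different order once the weightings change, with no new information added.

universities = {
    # (research, teaching, industry income), each out of 100 -- made up
    "Alpha University": (90, 60, 40),
    "Beta Institute":   (70, 85, 55),
}

weighting_schemes = {
    "overall": (0.6, 0.3, 0.1),   # hypothetical overall weighting
    "subject": (0.3, 0.6, 0.1),   # hypothetical subject weighting
}

def composite(scores, weights):
    """Weighted sum of indicator scores."""
    return sum(s * w for s, w in zip(scores, weights))

for label, weights in weighting_schemes.items():
    ordered = sorted(universities,
                     key=lambda u: composite(universities[u], weights),
                     reverse=True)
    print(label, "->", ordered)

# Output: Alpha leads the "overall" table, Beta leads the "subject" table,
# even though not a single underlying number has changed.
```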

And what should we make of the fact that some universities jump seven or eight – sometimes more – positions from one year to the next in the same ranking?

Sudden nimble players

Do you really think that higher education providers, after centuries of lethargic evolution, have become all of a sudden nimble players capable of rapid changes? Well, no.

An often-overlooked feature of university league tables is that the further down the ranking you go, the smaller the distance between places becomes. This means that even a small variation in the raw data for a given university can have a big impact on its position, especially if the parameter affected carries a strong weight and/or if the position is low enough.
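As a back-of-the-envelope sketch (the scores below are invented, not drawn from any published ranking), the same one-point improvement barely moves a top institution but shifts a mid-table one by around ten places, simply because scores are so tightly bunched lower down.

```python
# Invented composite scores: wide gaps at the top, tightly bunched lower down.

def rank_of(score, table):
    """Rank = 1 + number of institutions scoring strictly higher."""
    return 1 + sum(1 for s in table if s > score)

scores = [98.0, 95.0, 91.0, 86.0, 80.0] + [60.0 - 0.1 * i for i in range(200)]

for base in (95.0, 52.0):                  # one top-five score, one mid-table score
    before = rank_of(base, scores)
    after = rank_of(base + 1.0, scores)    # identical one-point improvement
    print(f"score {base}: rank {before} -> {after}")

# The top score stays put, while the mid-table score climbs roughly ten
# places on exactly the same change in the raw data.
```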

A corollary of this is that rankings are more accurate at the top; but the top is – and will remain – a pretty closed club.

But I don’t need a professional to tell me that the likes of Harvard, Oxford and Yale are the best in the world, thank you very much.

If anything, I want to know what things are like among mere mortals, but that is precisely where rankings become “useless”, as a recent University World News feature reported in a story about a Norwegian government-commissioned study into Nordic placements.

Worrying power to influence

Probably the most worrying aspect of the story is that rankings, far from being only a snapshot of the higher education market, are rapidly gathering the power to influence the strategies of many universities.

Significant resources are being invested in order to improve or maintain a position, but this is publicly admitted in only a few cases. Consequently, an instrument claiming to promote transparency and openness ends up having exactly the opposite effect.

A few recent developments in university rankings deserve attention and may affect the future of this tool.

U-Multirank

This year saw the launch of U-Multirank, an EU-sponsored ranking that is interesting and unique because it shifts the focus away from a single position in a table – there is no table at all – and lets users decide which indicators are most relevant to them.

But it is still too early to say if this new approach is going to become a significant player, and there are a few open questions about some of its features.

Impact value

We simply do not know how much weight rankings carry in prospective students’ choice of university. This is a category with many different subgroups, and some of them are likely to pay greater attention to rankings than others.

However, I think it is reasonable to assume that nobody takes them entirely at face value.

Many other factors play an important role and universities should look at the whole picture.

Yes, rankings are here to stay; snubbing them is foolish, but so is giving them undue importance.

* Let us know what you think, either here or on our EUPRIO Facebook page

Andrea Costa, pictured, graduated from Bocconi University in Business Administration and then worked in the private sector for international companies (Philips, SC Johnson, Indesit), focusing on marketing research. He is now back at Bocconi, doing market research and strategic marketing as part of the Milan-based university’s communication and institutional affairs team.

Andrea gave a masterclass at EUPRIO 2014 in Innsbruck on how the rankings are made and took a critical look at what they are measuring, their reliability, and how university communications needs to adapt to them.

 Words: Andrea Costa

Main photo: University of Oxford – Europe’s top university according to the new US News and World Report rankings.

Edited: Nic Mitchell