The folly of game scores

· by Steve · Read in about 4 min · (709 Words)

I’ve believed for a while that reviewing games (or indeed, most things) by allocating them a numerical score is akin to trying to pin a rosette to a charging rhino: an exercise in utter futility. Attaching an absolute number to a review rests on a fallacy - the idea that you can legitimately assign a piece of pseudo-empirical data, which will be processed as such downstream, to what is in fact an entirely subjective opinion.

If a game gets a 7 or better it’s generally ‘good’, and if it gets a 9 or 10 it’s generally considered ‘fecking awesome’. But it’s entirely based on what the reviewer tends to like - someone who dislikes JRPGs isn’t going to rate a JRPG as highly as someone who does, regardless of its inherent quality. Now, most reputable review sources will match games to reviewers who are knowledgeable in the genre, which usually means that they also like that genre - otherwise they wouldn’t have played enough games in it to be considered authoritative. If this were achieved perfectly, each review would then be an assessment of pure quality and not of the genre itself, because each reviewer would be equally enthusiastic about the genre in question. Fine in theory, except that humans, even highly trained game reviewers, are imprecise measuring devices at the best of times.

So, in practice it often falls apart entirely. Take Eurogamer’s recent review of MGS4, which has been raising hackles among fans because they ‘only’ gave the game an 8. If you just skipped to the number, you might be forgiven for thinking that this denotes disappointment (everyone seems to assume a headline game has to get a 10), and a number of people have posted in the comments along the lines of “I was going to get this, but now it’s got an 8, I won’t”. And yet, if you read the actual review, if you’re an MGS fan this game is probably going to be exactly what you wanted - by the sounds of it, it’s a superbly realised sequel consistent with the deep tradition of the series. The point the reviewer makes, and presumably one of the reasons for the score, is that this cuts both ways - if you didn’t like MGS to begin with, you’re sure as hell not going to like this one either. It’s a Marmite thing. So how do you assign a single number that represents the fact that roughly equal numbers of people are going to adore it and despise it? Simply put, you can’t. Any number you assign is going to be wrong.

The only ‘right’ review is in the text. A good reviewer will explain why he/she likes or dislikes certain elements of the game, and if those reasons are subjective or genre-relative, the reader is able to decide whether they apply to them or not. I read the review and knew for sure that MGS4 isn’t for me, and I’m sure MGS fans will read the review and get the exact opposite message. That’s perfect! A single score just disseminates false information, boiling away delicious reasoned argument and analysis into a nasty primeval sludge of a number at the bottom. I also think Eurogamer’s Rock Band review is accurate in the text but not the score, for the same reason - the downsides they mention don’t dampen my personal enthusiasm for the game one jot, so the 8 they gave it probably isn’t representative for me either, even though for some people (perhaps those less into music games and/or more price sensitive) a lower score might be appropriate. It’s subjective!

The final nail in the coffin is that scores invite comparison. According to average game scores, Mario Kart DS is a ‘better’ game than Puzzle Quest - something certainly not borne out by my personal play times on these two titles - but in any case these games are so different as to be completely incomparable. It’s like saying lemon is better than chocolate. It’s nonsense. Gibberish.

I realise that simple scores are perfect for the lazy, sound-bite-loving MTV generation. But that doesn’t make them right.