The 10/10 Conundrum

This is similar to the way I talk about this issue. It’s a resolution problem. If you have 5 stars, or n/10, then your resolution isn’t really high enough to avoid giving “perfect scores”, because there are games that are better than 80% or 90%, respectively. But if you’ve got 100 points to give out, then you’ve probably got enough resolution to represent how good the game actually is (in your opinion, of course). I’ve certainly never played a 100% game, but a few have come close, maybe 97 or 98 if you define “perfect” as “what they were trying to achieve” (as opposed to “uber game of evar”, which is probably pretty much useless as a definition of perfect since it’s so personal).
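
(To put the resolution point in numbers, on one possible reading where a score of k out of n is just the fraction k/n:

$$\frac{4}{5} = 80\%, \qquad \frac{9}{10} = 90\%, \qquad \frac{99}{100} = 99\%,$$

so the finest distinction an n-point scale can make is a step of $100\%/n$: 20% per star, 10% per point out of ten, 1% per point out of a hundred. Anything better than the second-best mark gets rounded up into the “perfect” bucket, which is exactly the 80%/90% problem above.)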

Chris

Gratuitous Math Addendum: Obviously score ∈ [0, 1] ⊂ ℝ is the correct rating system.

Non-gratuitous PS. People who give out “better than perfect” scores should be shot on sight. You see this occasionally…first time I saw it was PC Gamer’s Unreal review; I think it got a 105% or something. I think I may have cancelled my subscription, or at least let it lapse after that.

Edit: Hmm, doesn’t look like it was PC Gamer; the website says they gave it 9.3. Anybody else remember the 105% review, or 101%, or something > 100% at the time?

Also, the board game Go would be 100% in my review system, but that’s about it.

Which is part of why this is a conundrum. Scores of all types get rationally re-thought to achieve some kind of equivalence for comparison’s sake. And so an “easy to get” 5 stars = a “not as easy to get but obtainable” 10/10 = an “impossible” 100%. So a great game that gets 5 stars, a 9.5/10 and 90/100 on three different scales is potentially devalued or overvalued overall, but no one’s left really sure which. Maybe it’s all splitting hairs, but I find it interesting.

GameRankings doesn’t show any 105% reviews for Unreal…

…and here’s what happens when publishers quote gamerankings.com scores instead of actual scores.

-Vede

Well, you can hardly blame them. If you have a score that you never give out, then it’s not really part of your scale, is it?

This is why I really wish scores would just go away. They are stupid, and discussing them inevitably creates a vortex of stupid that sucks in anyone dumb enough to join in the conversation. I can feel my IQ leeching away right now. Help!

It was in a print magazine, one of the big ones at the time I think, like PCG, CGW, etc. I could have sworn it was PCG…were they ever % based?

I wonder if the online databases simply don’t allow > 100% in their entries for data integrity reasons. I’ll ask the other folks who were ranting about it at the time to see if they remember which one it was…

Chris

Any rating system is going to have a top ranking, no matter whether you go with Thumbs-Up/Thumbs-Down, 5 Stars, 1 to 10, or 1 to 100. They all have a “Best” rating.

To me, the Best rating doesn’t mean the game is the best ever made and cannot be improved upon; it just means this is a really good game without any obvious errors (in the opinion of the reviewer).

-Dan Verssen
Designer of Hornet Leader PC
www.dvg.com

I’m mainly a no-scores kinda guy, but in my old age, I’ve come to love Percentage marks over /10 as they’re inherently ludicrous. The idea that you can tell the difference between 72 and 73 percent is silly as hell - but because the system is clearly a subjective one, it underlines that the mark is just a subjective way of describing how neat a game is. In fact, the more objective a scoring system, the less useful it is, because it camouflages the lie at the heart of it all.

I’m surprised this wasn’t linked to.

A load of editors talk about what a 10/10 score means and all that. PC Gamer UK and US, Edge, and others.

KG

Since when does 10/10 or 100% mean it’s perfect? Surely it just means you’ve given the game/book/CD/anything else your highest recommendation.

Well the % in 100% would imply perfection, because what is it a percentage of?

As Kieron says, if the 100 scale is treated as a ranking system, it suddenly offers a ridiculous amount of granularity. Perhaps one could argue the merits of a 73 over a 72, but then what of a 43 over a 42? The distinction becomes useless. Plus, the inevitable top-loading and score inflation lead to the difference between 93 and 92 becoming more important, as if the 100 scale were suddenly logarithmic.

In the end, reviews on a 100 scale might as well be on a rainbow color chart. It would actually be more akin to the purpose of a review score. Perhaps magazines could include color swatches for identification, but then we’d have conversion issues between print CMYK and online RGB. Maybe if we assign Pantone numbers… oops.

The percentage in 100% could be anything. It could be how much of your praise you were handing out. It could be how many of the people reading the review you think should buy the game. It could be how many of the hats the publisher sent you were full of money.

What if that “10/10” is on a decimal scale, like Gamespot’s, which effectively makes it 100% by another name? [Which they’ve given to four games.]

I’m waiting for someone to express gaming perfection as a mathematical formula using limits: “as X approaches 100%…”

That’s right, it did!

http://jaguarusf.blogspot.com/

:)

I tend to think numerical rating schemes are kind of silly but probably necessary… all I really expect from a website/magazine/person who uses them is consistency. If you never give more than 9/10, alright, I know that a 9 means “awesome sauce”. If you hand out 11/10 on a daily basis, I know you’re not very good at maths. &c.

Yeah but would 9.999… equal 100%?
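
(Reading the 9.999… as infinitely repeating nines, which may or may not be what was meant, the standard worked argument says yes: it is exactly 10, and therefore 100% of the maximum:

$$x = 9.\overline{9} \;\Rightarrow\; 10x = 99.\overline{9} \;\Rightarrow\; 10x - x = 90 \;\Rightarrow\; x = 10.$$

Any finite run of nines, on the other hand, falls just short of the top mark.)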

It’s a percentage of the maximum score, of course.

And this gets to the heart of why it’s all so silly. Different reviewers cover different games, so expecting scores to offer some sort of consistent method of comparison between games is futile. The fact that Game A gets a 9.4 and Game B gets a 9.3 doesn’t mean that Game A is a little better than Game B, even though that’s the apparent implication. Because those scores probably come from two different sources, neither of which has likely played both games. Maybe the guy who reviewed Game A would have given Game B a 9.5. In fact, it’s possible that both reviewers would have preferred Game B to Game A, had both of them played and compared both games. You just don’t know, because they haven’t. So the idea that numerical scores give you a precise (or even approximate) method for comparing the relative strengths of games is, at best, misleading.

Some publications have the editors take a hand in assigning scores, for the sake of consistency. This is even sillier, because the editors don’t have time to play all the games, either (otherwise they wouldn’t need to hire freelance writers to do reviews), so their score adjustments are often based on only the most superficial knowledge of the game in question.

The long and short of it is this: scores are utterly and completely subjective. At best, a score is merely the summation of how strongly one person recommends the game (or not) in numerical form. This is why 10/10 or 100% or whatever the hell top score your system has does not mean “objectively perfect game.” Because it’s just someone’s opinion. The only rational interpretation for a “perfect” score is “this reviewer’s strongest recommendation.” That’s all it is.

And a score system that never hits 10 (or 100, or whatever) is as absurd as the amp from Spinal Tap that goes to 11.

(edited for speeling)

You should score games on a scale of one to a billion. With six decimal places of accuracy.

P.S. I give Portal 943,721,337.999999 out of 1,000,000,000.

If I was a reviewer I would throw around top scores all the time. Every time I came away from a game thinking “that was just awesome”, I would slap a 10 on it.

Oddly enough, I think you get the 7-9 range so much because reviewers try to be fair. My presumption (correct me if I am wrong here) is that often if they LOVE something, they will think about other people who might not and why. Or if they don’t like something, they might think of different audiences who would, and then score accordingly. In the main I think reviewers should have a higher profile and be more personal; I prefer “Steve the Gamer says 7/10” to “X magazine says 7/10”.

I like Tom’s reviews, for example, because I know his tastes. I know if there is some timer bullshit in the game, for example, he will call it out. I may not like the same games he will, but I know his tastes and that’s helpful.

Sure, the morning after you might start to nitpick, but video game reviews are part purchase guide, part opinion and part critique. They are not (in the main) meant to be judgements that will stand the test of the ages; that can come later and, let’s be fair, when games that deserve that kind of criticism become more common.

And that is a great summation of everything that is wrong with game reviews. People who are too wishy-washy to stand by their own opinions shouldn’t be professional critics.