Thursday, February 19, 2009

Metascores, the autopsy of a circus

Big big question for devs and publishers.

I was reading the comments trailing Edge's article, and noticed that four days after it was published, not a single commenter had pointed out the adverse effect this over-dependence on Metacritic scores has on the scores themselves.

Clue: it only encourages a practice some publishers already indulge in: having magazines plaster their pages with ads for game X or Y. The magazines need those ads for revenue, but at the same time the money skews their reviews.

Kane & Lynch: Dead Men, Jeff Gerstmann vs. Gamespot, 2007, anyone?
The tip of the iceberg.

There was an interesting bit:

Activision has made such studies. Executive VP of publishing Robin Kaminsky said at the 2008 DICE conference that higher-quality games, based on scores from Game Rankings, on average sell more, and that for every five points above 80, on average, sales double. But she noted that many games buck this trend, and that the largest publishers have found that the greatest sales growth tends to occur in games scoring in the region of 70 compared to those scoring 80 or more.

She also presented 18 products achieving scores of 90 or more in 2008 and 2007. Only two were projected to sell over seven million copies, while seven sold less than a million. Overall, 12 out of the 18 sold less than two million, a figure that marks a rough break-even point for a triple-A game. In other words, there is a correlation but quality does not assure success.
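Kaminsky's rule of thumb can be sketched numerically. This is only an illustration of the reported average, not data from the article; the baseline of one million units at a score of 80 is an assumption I picked for readability.

```python
# Sketch of the relation reported by Activision's Robin Kaminsky:
# on average, sales double for every five points above a score of 80.
# The baseline figure (1M units at 80) is illustrative, not from the talk.

def expected_sales(score, baseline_at_80=1_000_000):
    """Illustrative average sales for a given Game Rankings score."""
    if score < 80:
        return None  # the doubling rule only covers scores of 80 and up
    doublings = (score - 80) / 5
    return baseline_at_80 * 2 ** doublings

for s in (80, 85, 90, 95):
    print(s, int(expected_sales(s)))  # 1M, 2M, 4M, 8M
```

Which makes her own counter-figures all the more striking: by this curve a 90-rated game "should" quadruple the 80-rated baseline, yet 12 of her 18 examples scoring 90+ sold under two million.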

So, with these figures in mind, and thinking in terms of purchasing power, I tried to guess what it would look like if we compared a hypothetical set of purchasing-power figures with what we know of sales:

[Chart: rough sketch of purchasing power plotted against sales, crossing around a 70% score]

I cannot stress enough that the drawing above is nothing more than a large guess, but there is still a logic behind that pretty thing I scribbled.
To explain the 70% crossroads, we probably reach a point where a game is:

- Not too expensive. Obviously, the more expensive a game is, the more it may encourage piracy (although this is very debatable).
- Good enough (reviews, pictures, box art, etc.). Marketing and scores fit here.
- Not so cheap that customers lose confidence in the product. This rule becomes more pertinent the closer we get to triple-A titles.
- Sufficiently known. This is where advertising matters most, obviously. It is also where indie games have no chance to compete, aside from the occasional exceptional success (AudioSurf, for example, although in my opinion it gets far more praise than it really deserves).
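One way to read the "70% crossroads" is as a trade-off between appeal (which rises with score) and affordability (which falls as higher-scoring, bigger-budget games command higher prices). The toy model below makes that concrete; every number in it is an assumption of mine, chosen so the curve peaks near 70 to mirror the sketch above, not anything measured.

```python
# Toy model of the "70% crossroads": expected units sold as the product
# of quality appeal and affordability. All parameters are assumptions
# picked so that the peak lands near a score of 70.

def price(score):
    # assume retail price scales with (perceived) production value
    return 10 + 0.5 * score

def affordability(p, budget=80):
    # share of the audience still willing to pay price p
    return max(0.0, 1 - p / budget)

def expected_units(score):
    appeal = score / 100  # higher scores attract proportionally more buyers
    return appeal * affordability(price(score))

best = max(range(0, 101), key=expected_units)
print(best)  # peaks at 70 with these assumed parameters
```

The point of the exercise is not the numbers but the shape: as long as appeal rises with score while affordability falls, the sales-maximizing score sits somewhere in the middle, not at the top of the chart.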

Are big publishers really aiming for the 70%, or do they crave the higher numbers?
It depends on the game. I'd say you have a better chance of selling crap on the Wii or on web indie portals than on the 360, the PS3 or a high-end PC.
Another question: do the customers, who make the market work best at 70%, really care about Metacritic and Game Rankings?

No matter how much we dislike the idea, massive exposure is more likely to guarantee sales than sheer quality. This is nothing new; it has been going on since the '90s. It would appear, then, that the scores are only the concern of a few. Likewise, aren't the comments on Metacritic a little overrated?

So in such a context, are scores still nothing more than a hardcore phenomenon?
If so, then we know that hardcore gamers generally look for the 80% and above, and that is also why triple-A hardcore games, so expensive to produce they make shareholders cry at night, must reach those heavens.

Considering that hardcore gamers know how to use the internet, that maker of great and terrible reputations, it is hard to overstate how important it is to get closer to 90% or more, no matter the price to pay.

Perhaps... perhaps there's something missing, an itchy detail that would reveal that we're at the doorstep of a madhouse without knowing it yet.
I would not go so far as to say that publishers are running round and round in a vicious circle, like rats lost in a maze of insanity, but really, check this:

  1. Publishers pay greater attention to metascores day after day. They scrutinize Metacritic. They spend money and allocate brain "power" to this process. They want to know how this affects their sales and how far they need to go in terms of apparent quality to maximize ROI, because Metacritic is a new tool, a tool of influence of epic proportions (just like Game Rankings, owned by CNET). This is their faith and their new religion. It has to be meaningful.

  2. Craving good scores, publishers will do what is necessary to obtain them in print and online reviews. Besides squeezing work teams to obtain more with less, the basic method, as glimpsed above, is to hold the reviewers by the cojones: the outlets sell ad space to publishers, who use it to promote the games they want, and the outlets can't really refuse that nice amount of money. Bills and all that.
    Considering the key role advertising plays in magazines' revenues, nothing here is surprising; outlets practically beg for these ads, both because information is free on the internet and because the paper press is dying a slow death.
    Of course, this is where it gets nice, because reviewers then regulate themselves to pander to the publishers, so as not to infuriate them and lose that precious money.

  3. Therefore reviews are biased, but magazines and websites keep pretending to be fair, honest and objective.
    They don't provide useful information, and therefore, in the end, these reviews mean exactly shit.
    And then what happens when CNET's hunters and gatherers collect all these reviews?
    They obtain metashit.
    And they believe in it.

  4. Finally, we see that publishers spend money analyzing the very monster they feed and grow. They are deeply concerned about it, without really knowing which one is the puppet of the other, nor being able to say whether it is even necessary at all, since a massive marketing campaign can easily trump any score and, in fact, as a result, boost scores by indirect coercion.
    It almost sounds like a parody of financial speculation.

The most surprising part is that some publishers seem to be considering boosting devs' royalties with bonuses based on such scores.
This seems absurd in light of the 70% balance between quality and price, which ensures the highest sales (people's income still has the final say), and of the fact that even top games at 80-90% don't necessarily hit the expected million-sales marks (generally the games with exceedingly oversized budgets and high prices).
Would publishers really dig the idea of paying devs more for higher scores which don't even guarantee better sales, and which are partly funded by the publishers themselves?
It almost sounds wacky.

No matter how you take this, even if EA, Acty or anyone else were perfectly 100% honest to God about it, they can't be masochists. The very fact that they massively spread ads all over the video game review channels pretty much breaks the balance. Let's not even imagine that publishers would aim for scores below 70% in order to pay devs less, because game sales would drop as well. It would be nonsense. They'd rather stop granting devs those extra rewards.

As for a solution, I don't see one right now. I don't put faith in public gamers' scores, since you never know whether an opinion is genuine, researched, based on a proper test of the game, simply idiotic, or just another "infiltration review" (a method notably used by Nintendo, where infiltrators/shills penetrate player communities and similar places to drop a so-called independent and genuine "gamer's review" or note of appreciation, the whole art being not to blow their cover).

In fact, yes, there is a solution: increase quality with smaller budgets. That's a fantasy, unless you start aiming for limited audiences and Stardockize yourself.
You could say "focus on sales, since that is what matters in the end," but you can't really convince players to stop paying attention to metascores. You can't cut it. Either you increase quality with tight management, and that can mean a lot of bad things for workers in general, or you keep making sure the reviews serve your quarterly goals ($£€¥).

The real problem is not so much the score itself as the idea that a good score can only be achieved with a godlike amount of money poured into a game's production. Inevitably, the industry takes fewer risks, and we get bigger and louder dull experiences; some of them fail, while others, touted as art, are just repetitive patterns in a sandbox system that cost a thousand houses to bring to life.

As quickly glossed over earlier, as long as your game is not a bomb, nothing really beats a massive marketing campaign.
