Do Advertised Wines Get Higher Ratings?

That’s the question that Boston College economist Jonathan Reuter asks in a new article in the Journal of Wine Economics titled “Does Advertising Bias Product Reviews? An Analysis of Wine Ratings.”

The Answer is No

Using data from Wine Spectator (which sells advertising) and comparing them to ratings from the ad-free Wine Advocate, Reuter finds no significant systematic bias in favor of advertised wines. In fact, a comparison of the average scores given to advertised and non-advertised wines shows a slight difference the other way, with non-advertisers receiving higher average scores.

Wine Spectator advertisers = 87.50

Wine Spectator non-advertisers = 87.58

Wine Advocate (the control in this experiment) likewise rated the wines that weren’t advertised in WS slightly higher than those that were. It gave higher average scores to all wines in the data set, and the gap between the two groups was larger.

Wine Advocate scores of WS advertisers = 88.15

Wine Advocate scores of WS non-advertisers = 88.65
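
The logic of the comparison is essentially a difference-in-differences, with the ad-free Wine Advocate serving as the quality control for the ad-supported Wine Spectator. Here is a minimal sketch using just the four group means above (the variable names are mine, and this illustrates the logic rather than reproducing Reuter’s actual analysis):

```python
# Back-of-the-envelope difference-in-differences on the group means above.
# Wine Advocate (which sells no ads) controls for underlying wine quality;
# any extra advantage that advertisers enjoy at Wine Spectator would show
# up as ws_gap exceeding wa_gap.

ws_adv, ws_non = 87.50, 87.58   # Wine Spectator means: advertisers, non-advertisers
wa_adv, wa_non = 88.15, 88.65   # Wine Advocate means for the same two groups

ws_gap = ws_adv - ws_non        # -0.08: WS scores advertisers slightly lower
wa_gap = wa_adv - wa_non        # -0.50: WA scores them lower still

diff_in_diff = ws_gap - wa_gap  # +0.42: advertisers fare relatively better at WS
print(f"WS gap {ws_gap:+.2f} | WA gap {wa_gap:+.2f} | diff-in-diff {diff_in_diff:+.2f}")
```

Relative to the control, advertisers come out a little ahead, which points in the same direction as the “almost one point” difference Reuter mentions below; that is exactly why the full analysis has to go beyond raw means.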

Reuter concludes that

Overall, the tests for biased ratings and biased awards produce little consistent evidence that Wine Spectator favors advertisers. At worst, the tests for biased ratings suggest that Wine Spectator rates wines from advertisers almost one point higher than wines from nonadvertisers. However, selective retastings can explain at most half of this bias and only within the set of U.S. wines rated by both Wine Spectator and Wine Advocate. Given Wine Spectator’s claim that it rates wines blind, the remaining difference in ratings may simply reflect consistent differences in how the two publications rate quality. The fact that tests for biased awards provide no evidence of bias suggests that there is little bias overall. Therefore, despite the fact that Wine Spectator is dependent on advertising revenue, the long-run value of producing credible reviews appears to limit bias.

Reuter’s analysis obviously goes far beyond a simple comparison of means. Click on the link in the first paragraph to read the full article.

Dog Bites Man?

Most people are likely to see this as a classic “dog bites man” non-story: we pay attention to wine ratings because we trust that they are unbiased, so it is not particularly newsworthy to discover that our trust is well placed.

But there is theoretical interest in the question because of the principal-agent problem. We the principals hire wine critics (by subscribing to their publications) to be our agents in evaluating wine quality. They have an interest in honoring this contract and assigning ratings with integrity in order to keep our business. On the other hand, they also have an incentive to act in their own narrow interest and exchange good ratings for advertising revenue.

Theory says agents will tend to cheat on our agreement if they can keep us from finding out. Hence we are suspicious of agents even as we put our trust in them. So you can see why economists might be surprised that wine critics, at least in this study, seem to put integrity first.

Reuter concludes that a reputation for honesty is worth more than the potential gains from a pay-for-points regime. It is understandable that wine critics would be offended that their life’s work is reduced to a simple balance of economic interests, but that’s how economists see the world.
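
For those who want the balance of interests made explicit, here is the textbook repeated-game version of the argument (the notation and the simplification are mine, not Reuter’s model): let B be the one-shot gain from selling a good rating, π the per-period profit from subscribers who trust the critic, and δ the discount factor on future periods. The critic rates honestly as long as

```latex
% Reputation condition in a simple repeated-game sketch
% (illustrative notation; B, \pi, and \delta are mine, not Reuter's).
% Cheating pays B once but forfeits the trust-based profit stream forever:
B \;<\; \delta\pi + \delta^{2}\pi + \cdots \;=\; \frac{\delta}{1-\delta}\,\pi
```

Reuter’s finding amounts to saying that, for Wine Spectator, the right-hand side is the bigger number.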

Splitting Hairs

I appreciate the interest in wine rating scores — economists are attracted to data sets the way television junkies go for reality shows. Since Wine Spectator rates the most wines of any publication, it is unsurprising that we want to use its scores in our analysis.

I have always been uncomfortable with this. The temptation is to treat wine ratings as a consistent and reliable metric of quality (or at least perceived quality), but I’m not sure this is really a valid analytical technique. I don’t rate wines, but I do rate students, and I know that grades are not a perfect measure of the quality of student performance. Grades (even when given blind, like the WS scores) are subject to any number of distorting factors. The scale is affected by context, of course, and changes over time.

I’m not saying that critics shouldn’t rate wines any more than I would advocate getting rid of student grades. I’m just saying that we ought to be very careful when we use wine scores (or student grades) in substantive research.

I know that comparing average grades for different types of classes taught by different professors at different times is problematic, so it is understandable that I think similar problems exist for wine ratings. I always take the conclusions of empirical economic analysis based upon wine ratings, even when the analysis is very good, with a grain of salt. To his credit, Reuter acknowledges the empirical limitations of his study.

How is Wine Spectator Different from Goldman Sachs?

Newspapers these days are full of stories of financial industry executives who are keen to raise cash and pay back TARP funds so that they are free to pay themselves generous bonuses. Although a closer investigation might make me alter my views, it sure seems to me that the collective interests of the principals (taxpayers, corporate shareholders) are being sacrificed in order to further the particular interests of the agents.

If Goldman Sachs does it, why should we think that Wine Spectator does not? No wonder we are surprised by evidence that they don’t. Maybe the more interesting question, from an economic theory standpoint, is why Wine Spectator is different from Goldman Sachs when both face principal-agent problems in businesses where uncertainty and asymmetric information prevail. That would make a really interesting study!
