Wine Judges and Their Discontents

“Do you trust me?” asks the hero in my favorite scene from the Disney cartoon Aladdin. The Princess hesitates (“What?”) as if trusting anyone is a radical idea. “Yes,” she finally says, and holds out her hand.

Who do you trust?

I think of this scene every time I read wine reviews or wine competition results. “Do you trust me?” is the obvious question when it comes to the scores and medals that wine critics and judges award. If we do trust, we are more likely to reach out our hands to make a purchase. But trust does not always come easily with a product as ironically opaque as wine.

So do we trust wine critic Aladdins, and should we trust them? I raised this question a few weeks ago in my report on “The Mother of All Wine Competitions,” the Decanter World Wine Awards, and after surveying the issues I promised an update from the meetings of the American Association of Wine Economists in Bolzano, Italy. This is the promised report.

The session on wine judging (see details below) was very interesting in terms of the research presented but not very encouraging from a trust standpoint. Previous studies showing that wine judges at major competitions are not very consistent in their assessments were confirmed, and attempts to improve judge performance have not been very successful so far, according to expert analyst Robert Hodgson. The same wine can get very different ratings from the same experienced judge. It’s hard to “trust” a gold medal despite all the effort that goes into the judging process.
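
To make “not very consistent” concrete: Hodgson’s studies have judges score the same wine several times, poured blind within the same flight, and then measure how far those scores spread. Below is a minimal Python sketch of that kind of repeat-pour check; the scores are invented for illustration, not Hodgson’s actual data.

    # Hypothetical repeat-pour scores: each judge tasted the same
    # wine three times, blind, in the same flight (invented numbers).
    repeat_scores = {
        "Judge A": [90, 86, 93],
        "Judge B": [88, 88, 89],
        "Judge C": [95, 84, 91],
    }

    for judge, scores in repeat_scores.items():
        spread = max(scores) - min(scores)  # range across identical pours
        print(f"{judge}: scores={scores}, spread={spread} points")

A judge whose identical pours land ten points apart can move a wine from no medal to gold on luck alone, which is exactly the consistency problem the research documents.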

The Trouble with Economists

“If you put two economists in a room, you get two opinions, unless one of them is Lord Keynes, in which case you get three opinions,” according to Winston Churchill. Hard to trust any of them when they disagree so much.

Wine critics suffer the same problem as economists, according to research by Dom and Arnie Cicchetti, who compared ratings of the 2004 Bordeaux vintage by Jancis Robinson and Robert Parker and found a considerable lack of consensus. Two famous critics produced different opinions much of the time. Hard to know what to think or who to trust. Other presentations did little to increase the audience’s confidence in wine evaluators and their judgments.
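
How might one quantify that kind of critic-versus-critic disagreement? A standard first pass is a rank correlation between the two critics’ scores for the same wines, which conveniently ignores their different scales (Parker’s 100 points, Robinson’s 20). The sketch below uses scipy’s Spearman correlation on invented scores; it illustrates the idea rather than reproducing the Cicchettis’ actual method.

    from scipy.stats import spearmanr

    # Invented scores for the same ten wines from two critics:
    # one on a 100-point scale, one on a 20-point scale.
    parker   = [95, 92, 88, 90, 85, 96, 89, 91, 87, 93]
    robinson = [16.5, 18.0, 17.0, 15.5, 17.5, 16.0, 18.5, 15.0, 17.0, 16.5]

    # Spearman compares ranks, so the mismatched scales don't matter.
    rho, p_value = spearmanr(parker, robinson)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")

A rho near +1 would mean the two critics rank the wines almost identically; a rho near zero, the lack-of-consensus result, means knowing one critic’s ranking tells you little about the other’s.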

Because tastes differ, wine enthusiasts are often advised to use good old trial-and-error methodology to find a critic with a similar palate — and then trust that critic’s recommendations. This conventional wisdom inspired Ömer Gökçekus and Dennis Nottebaum to compare ratings by major critics with “the people’s palate” as represented by CellarTracker ratings. CellarTracker lists almost 2 million individual wine reviews submitted by over 150,000 members.

Point / Counter-Point

Stephen Tanzer’s ratings correlate best with the CellarTracker crowd for the sample of 120 Bordeaux 2005 wines in the research database. But, as Ömer suggested in his presentation, it is important to remember that the data can contain a lot of noise. Clearly the CellarTracker reviewers are well informed — they know what Parker, Tanzer, Robinson, and the rest have written about these wines, and their ratings may reflect positive and negative reactions to what the big names have to say.

The researchers detected a certain “in your face, Robert Parker” attitude, for example. In cases where Parker gave a disappointing score, CellarTracker users were likely to rate the wine just a bit higher, while giving high-scoring Parker wines lower relative ratings. CellarTracker users apparently value their independence and, at least in some cases, use their wine scores to assert it. This is an interesting effect if it holds generally, but it also introduces certain perverse biases into the data stream.
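
A simple way to test for that push-back is to regress the crowd-minus-Parker score gap on Parker’s own score: a negative slope means the crowd systematically pulls his high scores down and props his low scores up. Here is a sketch using numpy’s least-squares fit on invented scores; it shows the shape of such a test, not the researchers’ actual specification.

    import numpy as np

    # Invented 100-point scores for the same ten wines.
    parker = np.array([98, 96, 94, 92, 90, 88, 86, 84, 82, 80])
    crowd  = np.array([94, 93, 92, 91, 90, 89, 89, 87, 85, 84])

    gap = crowd - parker  # positive when the crowd rates above Parker

    # Fit gap = slope * parker_score + intercept by least squares.
    slope, intercept = np.polyfit(parker, gap, 1)
    print(f"slope = {slope:.2f}, intercept = {intercept:.1f}")

    # A negative slope is the contrarian signature described above.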

Bottom line: The research presented in Bolzano suggests that there are limits to how much we do trust and how much we should trust wine critics and judges. The power of critics to shape the world of wine may be overstated or, as Andrew Jefford notes in the current issue of Decanter, simply over-generalized. “Opinion-formers are highly significant — for a tiny segment of the wine-drinking population,” he writes. “They remain irrelevant for most drinkers.”

>>><<<

AAWE Conference Session #1B: Wine Judging / Chair: Mike Veseth, University of Puget Sound

  • Robert T. Hodgson (Fieldbrook Winery), How to improve wine judge consistency using the ABS matrix
  • Dom Cicchetti (Yale U), Arnie Cicchetti (San Anselmo), As Wine Experts Disagree, Consumers’ Taste Buds Flourish: The 2004 Bordeaux Vintage
  • Ömer Gökçekus (Seton Hall U), Dennis Nottebaum (U of Münster), The buyer’s dilemma – Whose rating should a wine drinker pay attention to?
  • Jing Cao (Southern Methodist U), Lynne Stokes (Southern Methodist U), What Can We Do to Improve Wine Tasting Results?
  • Giovanni Caggiano (U of Padova), Matteo Galizzi (London School of Economics, U of Brescia), Leone Leonida (Queen Mary U of London), Who is the Expert? On the Determinants of Quality Awards to Italian Wines

4 responses

  1. The more wine you drink, the more you are able to trust your own taste buds. So long as tasters stick to the standard descriptors, we can learn by trial and error which descriptors best match the profile of our own taste. It seems to me that these descriptions can become more reliable than scores, which are, of course, based on the particular taste profile of the taster, which may not be at all similar to your own.

    I think an interesting question to take on might be how much we should trust tasters to pick winners in economic terms – that is, which wines are going to be a good investment. Granted, you’ve got an issue with reverse causality here – the opinion influencing the value as much as the fundamental value influencing the opinion. But can we test whether one taster does better than another at giving high ratings to wines that increase or hold their value? Or would this just tell us which taster has the most influence over the price of wine?

  2. Instead of using CellarTracker scores, which are worth reviewing to be sure, we should “crowd source” a more representative cross section of wine drinkers who know little if anything about RP’s scores. Such a tasting panel of, say, 20 people is the single best source for assessing vino.

  3. I judge a fair number of competitions, and there is no doubt that judging is incredibly inconsistent. But is that caused only by the very human failings of judges? Or is there a flaw in the competition process, which is often overlooked in the research and in reports about the research? The mega-competitions, where judges do 100 to 200 wines a day, make consistency that much more difficult.

    Steve Menke at Colorado State is convinced that the process can be made more consistent, and has worked diligently to do that at several Colorado wine competitions. One method (which, unfortunately, only works at smaller events): have the same wine judged by two panels, not just one. This helps even out the results.

  4. “Two famous critics produced different opinions much of the time. Hard to know what to think or who to trust.”

    No, it’s not, actually. Don’t fall into the trap of believing there is one objective, “correct” opinion of a wine. The individual tastes of those particular critics are well known, and we all know what we will get from, say, a highly rated Parker wine. It’s not at all surprising that they have different opinions – both of them can be trusted, absolutely, to put forward their own subjective, individual opinion.

    Just know your critics!
