“Do you trust me?” asks the hero in my favorite scene from the Disney cartoon Aladdin. The Princess hesitates (“What?”) as if trusting anyone is a radical idea. “Yes,” she finally says, and holds out her hand.
Who do you trust?
I think of this scene every time I read wine reviews or wine competition results. “Do you trust me?” is the obvious question when it comes to the scores and medals that wine critics and judges award. If we do trust, we are more likely to reach out our hands to make a purchase. But trust does not always come easily with a product as ironically opaque as wine.
So do we trust wine critic Aladdins, and should we trust them? I raised this question a few weeks ago in my report on “The Mother of All Wine Competitions,” the Decanter World Wine Awards, and after surveying the issues I promised an update from the meetings of the American Association of Wine Economists in Bolzano, Italy. This is the promised report.
The session on wine judging (see details below) was very interesting in terms of the research presented but not very encouraging from a trust standpoint. Previous studies showing that wine judges at major competitions are not very consistent in their assessments were confirmed, and attempts to improve judge performance have not been very successful so far, according to expert analyst Robert Hodgson. The same wine can receive very different ratings from the same experienced judge. It’s hard to “trust” a gold medal despite all the effort that goes into the judging process.
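To see why repeat tastings matter here, consider a minimal sketch (with invented scores, not Hodgson’s actual data) of the kind of consistency check his studies describe: the same wine is poured blind for a judge several times, and the spread of the scores reveals how repeatable the judgment is.

```python
# Hypothetical illustration of judge consistency (invented scores, not real data):
# each judge rates the same wine, served blind in triplicate, on a 100-point scale.
from statistics import mean, stdev

triplicate_scores = {
    "Judge A": [91, 87, 94],
    "Judge B": [88, 89, 88],
    "Judge C": [95, 82, 90],
}

for judge, scores in triplicate_scores.items():
    spread = max(scores) - min(scores)
    print(f"{judge}: mean {mean(scores):.1f}, "
          f"std dev {stdev(scores):.1f}, range {spread}")

# A judge whose repeat scores straddle a medal cutoff (say, 90 points)
# would award the same wine different medals on different pours.
```

A judge like the hypothetical “Judge B” is highly repeatable; a judge like “Judge C,” whose scores on the identical wine range over 13 points, is the kind of inconsistency the research reports.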
The Trouble with Economists
“If you put two economists in a room, you get two opinions, unless one of them is Lord Keynes, in which case you get three opinions,” according to Winston Churchill. Hard to trust any of them when they disagree so much.
Wine critics suffer the same problem as economists, according to research by Dom and Arnie Cicchetti, who compared ratings of the 2004 Bordeaux vintage by Jancis Robinson and Robert Parker and found a considerable lack of consensus. Two famous critics produced different opinions much of the time. Hard to know what to think or who to trust. Other presentations did little to increase the audience’s confidence in wine evaluators and their judgments.
Because tastes differ, wine enthusiasts are often advised to use good old trial and error methodology to find a critic with a similar palate — and then trust that critic’s recommendations. This conventional wisdom inspired Ömer Gökçekus and Dennis Nottebaum to compare ratings by major critics with “the people’s palate” as represented by CellarTracker ratings. CellarTracker lists almost 2 million individual wine reviews submitted by over 150,000 members.
Point / Counter-Point
Stephen Tanzer’s ratings correlate best with the CellarTracker crowd’s for the sample of 120 Bordeaux 2005 wines in the research database. But, as Ömer suggested in his presentation, it is important to remember that the data can contain a lot of noise. Clearly the CellarTracker critics are well informed — they know what Parker, Tanzer, Robinson and the rest have written about these wines, and their ratings may reflect positive and negative reactions to what the big names have to say.
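The comparison behind a claim like this can be sketched in a few lines. Here is a minimal illustration, using invented scores rather than the study’s data: compute the Pearson correlation between a critic’s 100-point ratings and the crowd’s average ratings for the same wines, where an r closer to 1 means closer agreement.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

critic = [95, 88, 92, 85, 90, 97, 83, 89]  # hypothetical critic scores
crowd  = [93, 87, 90, 86, 91, 94, 84, 88]  # hypothetical crowd averages

print(f"Pearson r = {pearson(critic, crowd):.2f}")  # near 1.0 = close agreement
```

Ranking several critics by this statistic against the same crowd averages is, in spirit, the exercise that put Tanzer at the top for this sample — though the noise and feedback effects noted above complicate any simple reading of the number.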
The researchers detected a certain “in your face, Robert Parker” attitude, for example. In cases where Parker gave a disappointing score, CellarTracker users were likely to rate it just a bit higher while giving high-scoring Parker wines lower relative ratings. CellarTracker users apparently value their independence and, at least in some cases, use their wine scores to assert it. This is an interesting effect if it holds generally, but it also introduces certain perverse biases into the data stream.
Bottom line: The research presented in Bolzano suggests that there are limits to how much we do trust and how much we should trust wine critics and judges. The power of critics to shape the world of wine may be overstated or, as Andrew Jefford notes in the current issue of Decanter, simply over-generalized. “Opinion-formers are highly significant — for a tiny segment of the wine-drinking population,” he writes. “They remain irrelevant for most drinkers.”
AAWE Conference Session #1B: Wine Judging / Chair: Mike Veseth, University of Puget Sound
- Robert T. Hodgson (Fieldbrook Winery), How to improve wine judge consistency using the ABS matrix
- Dom Cicchetti (Yale U), Arnie Cicchetti (San Anselmo), As Wine Experts Disagree, Consumers’ Taste Buds Flourish: The 2004 Bordeaux Vintage
- Ömer Gökçekus (Seton Hall U), Dennis Nottebaum (U of Münster), The buyer’s dilemma – Whose rating should a wine drinker pay attention to?
- Jing Cao (Southern Methodist U), Lynne Stokes (Southern Methodist U), What We Can Do to Improve Wine Tasting Results?
- Giovanni Caggiano (U of Padova), Matteo Galizzi (London School of Economics, U of Brescia), Leone Leonida (Queen Mary U of London), Who is the Expert? On the Determinants Of Quality Awards to Italian Wines