How to interpret my scores at Wine Enthusiast

My friend and colleague Paul Zitarelli at Full Pull Wines once suggested applying a ‘Sullivan curve’ to my scores. In doing so, he noted how few wines I have rated in the upper ranges at Wine Enthusiast. Others have also noted that my scores seem lower than those at some other publications.

While I believe my scores and tasting notes speak for themselves, I do believe there is value in putting them in the context of reviews from others. In that regard, I actually believe a reverse curve should be applied to scores from many other publications, for reasons I will detail below.

Personally, I don’t believe I am a tough rater of wines. I believe I am, or certainly try to be, a fair rater of wines. I try to rate wines on a qualitative basis relative to their peers of the same variety and from the same area, while also keeping world wine quality in mind (read more about how I evaluate wines here).

In terms of ratings, for me, wines in the ‘Excellent’ category for Wine Enthusiast (90 to 93) are very high scores, with the majority of wines scoring below this range. Wines in what Wine Enthusiast refers to as the ‘Superb’ (94 to 97) and ‘Classic’ (98 to 100) categories are quite rare in my experience, in Washington or elsewhere. They do exist, but they are typically very few and very far between. When scores get inflated, as I personally believe they have, each individual score becomes increasingly meaningless.

How do my reviews and scores compare with those of other reviewers? That is for others to see and to say. The scores are obviously out there for comparison. I will note that few publications review wines blind and in a standardized setting, as Wine Enthusiast does. The other publication of note that does is Wine Spectator.

Most other publications taste and review wines in a variety of settings (at wineries, at home, and at mass tastings), sometimes in a combination of all three. The latter group includes prominent publications, such as The Wine Advocate and Vinous.

To me, reviewing wines non-blind introduces the possibility of bias. I have confirmed this by comparing the scores I have given wines when visiting a winery (these notes are strictly informational) to the scores the same wines received when sampled blind and in a standardized setting (these are the scores that are ultimately published). When I have done so, I have noticed that wines sampled blind and in a standardized setting typically score one to two points lower than they did at the winery, and I have seen differences of as much as four points. It is extremely rare in my experience to rate a wine higher blind and in a standardized setting than I did when visiting the winery and taking notes.

Intellectually, this makes sense. Producers are excited about their wines and, when you meet with them to taste, they are trying to get you, the critic, excited about them as well. And guess what? It works. This means that wines tasted with the producer (especially a producer who is a better pitchman for their wines) have an inherent advantage over wines that are not tasted with the producer, or whose producer does a less effective job of 'selling' the wine to the reviewer.

There is an additional issue I take with the way most publications review wines. Tasting a wine at home, at a winery with the winemaker (and perhaps others) present, or at a mass tasting are radically different approaches, and they will consequently produce different scores, even for the exact same wines.

Many consumers have experienced a version of this themselves: the wine they loved while visiting a winery on vacation doesn't taste quite as good when they get back home. This is a straightforward example of the bias that can occur when tasting at a winery in a non-standardized setting. A similar thing can happen to reviewers, though for different reasons, with the reviewer potentially enchanted by the winemaker's pitch rather than by being on vacation in wine country.

To me, reviewing wines in varied settings means that those scores can't even be reliably compared to each other, let alone to scores from other publications.

It is my strong belief that publications that do not taste in a standardized setting should be more transparent about the setting in which the wine was tasted. Consumers should know if a wine was tasted at a winery with the winemaker, alongside 100 other wines at a mass tasting, or at home in a (potentially) more controlled setting. They should also know whether the wine was tasted blind or not. At many publications, this information is currently opaque to consumers.

When I reviewed wines for Washington Wine Report, prior to becoming a contributing editor at Wine Enthusiast, I listed in my tasting note database whether the wine was sampled at home or at the winery and, if it was tasted at home, whether it was provided as a sample or purchased. I did so because I believed those things could potentially make a difference in the subsequent score and so should be there for all to see. I find it strange that major publications fall well short of the bar I set while reviewing wines on a blog.

Given all this, the most direct comparison to my scores at Wine Enthusiast is to scores from other publications that review wines blind, in a standardized setting, and without any screening panel. I mention a screening panel because, if the wines are screened, as some publications do, one already knows when tasting a wine that it has achieved a certain level of quality in someone else's mind, which, again, can potentially lead to bias (read a further discussion of screening wines here). As I noted in detailing how I review wines, I taste every wine that is sent to me.

When comparing my scores to those from other publications tasting blind and in a standardized setting, the differences you see are differences in critic palate. When comparing my scores to those from reviewers working in a non-blind, non-standardized setting, you are potentially seeing bias from producer, vintage, and appellation knowledge, bias from the setting in which the wine was tasted (at home, at a winery, at a mass tasting), and differences in reviewer palate. Comparing wines scored in a blind, controlled environment to wines scored in a non-blind, uncontrolled environment is therefore comparing apples to oranges.

Why do reviewers at some publications score wines in multiple different types of settings? For one simple reason: it cuts down on the amount of work they have to do.

At present, all of the wines I taste when visiting wineries must subsequently be shipped to me, unpacked, and retasted at home. This results in a considerable duplication of effort that many prefer to avoid.

Additionally, it requires considerably less effort on the reviewer's part to have someone else collect and process hundreds of wines that the reviewer then tastes and reviews over a few days in mass tastings than for the reviewer to be responsible for that process and to taste the wines at a slower pace. To me, however, it is more than worth the additional effort to help remove bias and ensure consistency across scores. The wineries and wines deserve it; consumers deserve it.

Is blind tasting perfect? Not by any means. There are producers in Washington where I have tasted every vintage they have ever produced. I have tasted the wines on release, and I have tasted them again with time in bottle and know how they evolve. All of that information is out the window when tasting wines blind.

Additionally, tasting wines non-blind means that you can let a bottle open up over days and see how it evolves, which has the potential to impact the score, though in my experience most (but not all) scores remain static when tasting a wine over multiple days. You can also decant a bottle if experience or tasting indicates the wine merits it. This is not practical when tasting blind, as the intention is to taste and treat all wines equally and in a standardized environment. To me, however, the potential benefits of tasting non-blind are greatly outweighed by the potential biases that can (and do) occur.

Which approach to tasting and reviewing wine is superior: reviewing in a blind, standardized environment or reviewing in non-blind, varied settings? I obviously have my own biases, but that is ultimately for you to decide and respond to accordingly. At the very least, though, consumers - and also retailers who promote scores - should demand that reviewers be more transparent about the conditions under which they taste and review wines. The validity of the wine ratings themselves can depend upon it.
