You have seen the little tags hanging from shelves in wine shops. They’re often hand-drawn, in permanent marker, and sometimes they’re bright, with a large, two-digit number and the name of a publication prominently displayed inside a jagged kind of exploding star. There might be multiple exclamation points, or several underlines, and that number will usually land somewhere in the 90s or high 80s. If the tag is not handmade, it might be an official shelf-hanger from a publication.

All of this is a product of the infamous wine-rating system that relies on the 100-point scale, and not everyone is a fan.

For some folks, those at-a-glance scores are a welcome bit of information, a tool that helps people get in and out of a wine shop quickly as they hustle to a dinner party. Later, at the party, the guest’s presentation usually goes something like this, beginning with a disclaiming shrug: “I don’t know, it said 91 points at the wine store. It should be pretty good.”

You don’t have to search far for a wine article or book that denounces wine scores for their ability to oversimplify things (among other drawbacks). I am not one of those people. I do think that relying completely on scores is lazy and unwise. I also think that the system overall can be unreliable.

But unbridled score-hating? Not for me. I actually like scores, especially when they are attached to a wine I am not familiar with. They tell me that someone has vetted the wine and offered an opinion, and if the score is high enough (depending on the source), it will spur me on to find out more about it. The wild card here is: Consider the source. Whose 92 points are those? Do they belong to a well-established, respectable wine publication or critic? Or do they belong to an employee of the wine shop who has just rated his very first wine? Gotta start somewhere.

It’s all largely subjective. Unless you are going to use a score simply as a jumping-off point and then do your own research, relying on that number alone is a bit of a crapshoot. Even if a score is bolstered by a description, you still don’t know whether the score-giver and you smell and taste in the same way.

I do believe in some degree of objectivity in criticism — that people who really understand something intimately can identify and generally agree on quality — but I also believe in the concept of one man’s floor being another man’s ceiling. Wine scores are based on several different point systems, with such top scores as 100, 20, five and three. The methodology for those scores can be just as varied.

Years ago, I worked at a magazine, and some of us there had differing opinions about our restaurant rating system. The magazine’s long-standing tradition was to follow a sort of “for the price” philosophy. The ratings were concerned less with how a restaurant stood up against all others and more concerned with how well a restaurant did what it did.

Like: “This is a five-star diner (or a four-star hot dog stand) because it is the best in its class.” My suggestion was to judge all restaurants the same way. In that system, if the finest restaurant in town had earned five stars and a BBQ joint had earned two stars, readers could have assumed that the BBQ joint was pretty darn good — or, at the very least, a great value. Presumably, it would have offered very good food at a fraction of the cost that fine-dining restaurant would charge.

Similarly, when I was a teenager, I often consulted a two-point system for movies. Film critics Gene Siskel and Roger Ebert gave a thumbs-up or thumbs-down, and even a single thumbs-up with a compelling endorsement was enough for me to consider a movie further. Over time, I was able to calibrate my tastes to Siskel’s and Ebert’s, and simple words or phrases they used, coupled with the directions of their thumbs, told me everything I needed to know.

Say a wine gets a score of 88 and costs $14. Is that better than a wine that gets a 92 and costs $60? Could be, depending on what that 88 and 92 mean (rating-system point values can differ), and whether or not the nose and mouth behind the scores jibe with yours. The bottom line is, wine scores alone are helpful only if you can read into them — if you know the source well and if you have decided, through trial and error, how that source matches up with your own preferences.

For wines, consider how much you and the source of the points (publication or person) smell, taste and think in the same way. Also find out what kind of methodology the source is using to arrive at the numbers — and hopefully the numbers will never be all you consider.

Another complaint against scores is that they dumb down the world’s wine supply. High scores for bigger, bolder, more powerful wines have led some winemakers to copy that style, so that they might also receive high scores and, thus, sell more wine. The argument is that the world does not need more big and bold wines — just great wines that reflect where they come from, regardless of their style, regardless of the scores they receive from a few influential rating systems.

How scores affect wine production is out of our hands (and a completely different discussion), but how scores affect us is within our control. Do a little homework on the wines you are considering purchasing, of course, but even before that, try to figure out how the point systems you consult most often align with or diverge from your own thoughts and tastes.