Online ratings and other user feedback aren't that useful
A view from Helen Edwards

Online user ratings aren't as closely related to objective quality as the consumers who trust them – and the marketers who chase them – may think.

Feedback. Who needs it? Or, more precisely, who needs it in its modern form, delivered digitally, shared globally, ossified for all time and crystallised by coarse star ratings?

The more informal, diffuse and indirect kind has always been there. It’s called the marketplace. If they’re queuing in the rain outside the establishment next door while your place is empty, that’s feedback. If volume share starts slipping, you know you have to improve at least one aspect of the product/price mix.

The market itself was a language and behind it were the literal voices of people talking to one another, occasionally sharing good or bad experiences of products and services they’d just tried. Word of mouth was the invisible, unmonitored network of millions of tiny interactions – the nervous system that modulated the market’s expressiveness. It could be perceived but it couldn’t be chased.

Today’s marketers actively chase something they call eWOM, seeking to keep it ever-more positive. But the resonance in nomenclature is misleading, since the differences between this and classical word of mouth – let’s call it cWOM – are legion and lead to opposite outcomes.

With cWOM, the dialogue was face-to-face. The recipient, therefore, had a means of assessing its validity: they knew if the person had a vested interest or whether their pronouncements were generally best taken with a pinch of salt.

The anonymity afforded by eWOM – with artificial usernames rife – is its notorious Achilles heel. There is no way to know if reviews and ratings are motivated by commercial interest. And no way of assessing whether a post would ever be taken seriously if you could just get a glimpse of the person behind it.

Moreover, cWOM was time-sensitive. The fragments would be shared informally and disappear into the ether. A comment in the pub, a don’t-touch-that-brand-with-a-bargepole in the canteen, an intimate, shared recommendation between two friends. There and gone. If opinions subsequently changed, so did the timbre of the conversation.

Conversely, eWOM preserves stale hates or loves forever, digitally and ludicrously ossified long after their purpose has been served – and, if hostile, a permanent blemish on the target brand, whatever improvements it might since have made.

Most significant of all, marketers couldn’t eavesdrop on cWOM. That meant they couldn’t seek to influence it directly but, instead, had to do the much more helpful thing of keeping standards high in order to keep it positive.

The modern practice of hounding, cajoling or bribing customers to give positive digital feedback achieves the opposite effect by taking management eyes off the substantive product or service.

It could be argued, of course, that none of this really matters if digital user feedback is providing consumers with more balanced information and helping them reach better consumption decisions. The trouble is, the opposite is true.

In a 2016 study from the University of Colorado (see below), the team isolated average user ratings for 1,272 products in 120 categories and compared them with evidence-based quality scores from Consumer Reports (the US equivalent of Which?).

The Colorado Study

The paper, "Navigating by the Stars: Investigating the Actual and Perceived Validity of Online User Ratings", was published in the Journal of Consumer Research in 2016.

The team analysed a data set of 344,157 online user ratings for 1,272 products and compared them with objectively derived quality scores.

They discovered that, for any two randomly chosen products, the chance of the one with the higher user rating being the one with the higher objective quality score was just 57% – not far above the 50% that pure chance would deliver.

The authors concluded that there is "a substantial disconnect between the objective quality information that online user ratings actually convey and the extent to which consumers trust them as indicators of objective quality".

What they discovered was astounding: the two are virtually uncorrelated. More crucially, average user ratings are not predictive of resale value (where relevant) – whereas Consumer Reports scores are. The tendency of consumers to "put enormous weight on the average user rating" is costing them money.
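The study's headline figure is a pairwise concordance: across every possible pair of products, how often does the higher-rated one also score higher on objective quality? A minimal sketch of that calculation, with made-up ratings and quality scores chosen purely for illustration (they happen to land near the study's 57%, but are not the study's data):

```python
import itertools

# Hypothetical data: (average user rating, objective quality score) per
# product. These numbers are illustrative, not from the study's data set.
products = [
    (3.9, 65), (4.1, 55), (4.2, 50), (4.4, 70),
    (4.5, 45), (4.6, 80), (4.7, 75), (4.8, 60),
]

def concordance(pairs):
    """Fraction of product pairs in which the higher-rated product
    also has the higher objective quality score (ties are skipped)."""
    agree = total = 0
    for (r1, q1), (r2, q2) in itertools.combinations(pairs, 2):
        if r1 == r2 or q1 == q2:
            continue  # tie in one measure: no winner to compare
        total += 1
        if (r1 > r2) == (q1 > q2):
            agree += 1
    return agree / total

print(f"Concordance: {concordance(products):.0%}")  # → Concordance: 57%
```

A concordance of 50% is what coin-flipping would produce, and 100% would mean the star rating is a perfect proxy for measured quality; the study's 57% sits uncomfortably close to the former.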

So who needs feedback? The feedback industry does – TripAdvisor, Yelp, Angie’s List and all the others that turn consumer opinion into corporate profit. Taken as a whole, I give this bunch a two-star score.

Here’s what they need to do to improve on that:

  • Warn on sample size. The US study noted that people are blind to the unreliability of small samples. The industry could use lighter-weight typography, or weaker icons, for average user ratings derived from samples too small to be statistically meaningful.
  • Banish anonymity. Or, at the very least, get people to declare that they have no commercial interest before letting them hold forth.
  • Cull ancient ratings and reviews.
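The sample-size point above can be seen with a quick simulation (all numbers hypothetical): give every reviewer the same underlying chance of awarding each star level, and watch how much the displayed average swings depending purely on how many reviews a product has collected.

```python
import random
import statistics

random.seed(42)

# Hypothetical: every reviewer draws a 1-5 star rating from the same
# underlying distribution (true mean about 4.2 stars).
def rating():
    return random.choices([1, 2, 3, 4, 5], weights=[4, 4, 8, 40, 44])[0]

def spread_of_averages(n_reviews, trials=1000):
    """Standard deviation of the displayed average rating across many
    hypothetical products that each collect n_reviews ratings."""
    averages = [statistics.mean(rating() for _ in range(n_reviews))
                for _ in range(trials)]
    return statistics.stdev(averages)

for n in (5, 50, 500):
    print(f"{n:>3} reviews: displayed average varies by "
          f"about ±{spread_of_averages(n):.2f} stars")
```

With only a handful of reviews, two identical products can easily show averages half a star or more apart – exactly the unreliability readers are blind to when every rating is rendered in the same confident typography.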

That might get you guys up to a "three" rating. Still well short of the five-star splendour of the language of the marketplace and classical word of mouth.

Helen Edwards is a former PPA business columnist of the year. She has a PhD in marketing and an MBA from London Business School, and is a partner at Passionbrand.
