18 September 2009

The reliability of user reviews

Technology Review reports Vassilis Kostakos' analysis of user reviews on Amazon, the Internet Movie Database and BookCrossing. On all three sites he found that only a small core of users submits multiple reviews (for example, just 5% of Amazon users have submitted more than 10 reviews). So the reviews you read don't come from anything like a representative population (nor, of course, do the professional reviews in papers and magazines, but there the expectation is different). The outsized impact of a small core of active contributors seems to be a common issue for user-generated content.

The TR report mentions tools that can be used to 'frame' reviews for readers, for example, dating each review or letting users rate how helpful it is. Kostakos suggests eliminating extreme reviews in either direction, to prevent a group effect in which followers take their lead from an extreme view, although Jahna Otterbacher (who studies on-line rating systems) suggests this might put people off contributing.
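
The 'eliminate extreme reviews' idea sounds roughly like taking a trimmed mean of the ratings. Here's a minimal sketch of what that might look like; this is my own illustration rather than Kostakos' actual method, and the function name and trim fraction are made up:

    # Illustrative sketch: drop the most extreme ratings at both ends
    # before averaging (a trimmed mean), rather than using all ratings.
    def trimmed_average(ratings, trim_fraction=0.1):
        """Average the ratings after discarding the top and bottom trim_fraction."""
        ranked = sorted(ratings)
        k = int(len(ranked) * trim_fraction)      # ratings to drop at each end
        kept = ranked[k:len(ranked) - k] or ranked  # fall back if too few ratings
        return sum(kept) / len(kept)

    print(trimmed_average([1, 5, 5, 4, 4, 3, 5, 1, 5, 2]))  # 3.625

Whether trimming like this actually dampens the follow-the-leader effect, or just discourages people from bothering to review at all, is exactly the disagreement between Kostakos and Otterbacher.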

There are some interesting comments beneath the main report, one of which points out that what's really needed is research into the strategies people use when interpreting reviews. I know I have many, and they vary according to the site I'm using. On Trip Advisor, for example, I'll look at where the reviewer comes from to see whether they might carry the same cultural baggage as me. On Amazon I tend to look for any hint of a link between the reviewer and the author. A link isn't necessarily a negative in my mind; indeed, it can add to my appreciation of the book: see this review of Keeping Mum by a cousin of the author, Brian Thompson:
Although it may seem cheeky reviewing my cousin's book, I feel I must for several reasons...

Later: ReadWriteWeb has also picked up this research, with more interesting comments.
[Via Putting People First]
