Ratings by Communities Are Skewed—Now What?
18 Sep 2009

Many online and mobile applications rely on ratings and reviews from their communities to guide the rest of their users. Services such as Yelp, Amazon, Digg, and even the Apple App Store use input from their users to evaluate the intrinsic value of a set of items, whether books or iPhone applications. However, new research recently reported in MIT's Technology Review suggests that the wisdom of crowds can be inaccurate and misleading. Does this cast doubt on the utility of community-driven rating systems?

Vassilis Kostakos, an adjunct assistant professor at Carnegie Mellon University, and his team confirmed that commonly used rating systems can "easily be swayed by a small group of highly active users." The Technology Review article goes on to note that "rating systems can tap into the 'wisdom of the crowd' to offer useful insights, but they can also paint a distorted picture of a product if a small number of users do most of the voting."

Although Professor Kostakos’ research validates a suspicion that many have had, it does not necessarily mean that community-based review systems are useless. The article states:

Jahna Otterbacher, an assistant professor at Illinois Institute of Technology who studies online rating systems, says that previous research has hinted that rating systems can be skewed by factors such as the age of a review. But she notes that some sites, including Amazon, already incorporate mechanisms designed to control the quality of ratings–for example, allowing users to vote on the helpfulness of other users’ reviews.

Kostakos proposes further ways to make recommendations more reliable. He suggests making it easier to vote, in order to encourage more users to join in.

What this means for the design of interactive products with rating features is that steps should be taken to make the outcome of user-driven reviews more representative. To that end, consider the following factors:

  • Count only one vote per user.
  • Provide a mechanism for users to vote on the usefulness of written reviews, and factor that into the total score.
  • Make it easier for all users to vote, so that a broader cohort is captured.
  • Factor in the network patterns of user voting. For example, if a group of users consistently votes together on items, consider compensating for that behavior in the algorithm, since it tends to skew results.
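The first two factors above can be sketched in a few lines of code. The following Python sketch is illustrative only; the data structures and function name are hypothetical, not drawn from any particular service. It counts only one rating per user and weights each rating by the helpfulness votes its author's review received:

```python
def adjusted_score(votes, helpfulness):
    """Compute an item's score from raw community votes.

    votes: list of (user_id, rating) pairs, e.g. ratings on a 1-5 scale.
    helpfulness: dict mapping user_id to the number of "this review was
                 helpful" votes that user's review received (absent = 0).

    Returns a weighted average, or None if there are no votes.
    """
    # One vote per user: keep only the first rating seen from each user.
    seen = {}
    for user, rating in votes:
        if user not in seen:
            seen[user] = rating

    # Weight each rating by 1 + helpfulness, so reviews the community
    # found useful count for more, but unreviewed ratings still count.
    total = 0.0
    weight_sum = 0.0
    for user, rating in seen.items():
        weight = 1.0 + helpfulness.get(user, 0)
        total += weight * rating
        weight_sum += weight

    return total / weight_sum if weight_sum else None
```

For example, a user who votes twice is counted once, and a rating backed by a well-regarded review outweighs a bare rating. Using a baseline weight of 1 (rather than 0) for reviews with no helpfulness votes is a deliberate choice: it keeps new or unremarked reviews in the tally instead of silently discarding them.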
