- Online reviews help consumers make purchasing decisions.
- However, because some judgements can be costly to make, online ratings can cluster around a high mean, reducing differentiation.
- Professor Richard T Watson of the University of Georgia, USA, and fellow researchers have identified and studied this phenomenon.
- They have developed a theory of information compression, which challenges conventional thinking that more data means better information.
- Information compression can have serious organisational and societal consequences.
It’s tempting to think that consumer ratings of the next item you buy online are a reliable guide to finding the best product, but the reality is different. The clustering of items rated 4.5 stars out of five may suggest a constellation of safer choices, but it is a false beacon, with ramifications in an era when people increasingly make decisions based on online information. A team of leading researchers in information systems and marketing has produced a theory about this phenomenon and discovered that, while the fallout for shoppers may just be a purchase that doesn’t work, the costs can be much more severe when ratings are applied elsewhere. What they’re proposing challenges some of the fundamental thinking among their peers.
For an organisation to operate properly, it requires effective individual and group decision-making based on accurate information. The social and technical networks that collect, store, distribute, and analyse data to provide such information and knowledge are called information systems.
Professor Richard T Watson of the Terry College of Business at the University of Georgia, USA, is one of a number of researchers examining information systems who have noticed an intriguing anomaly: an increase in data, such as a stream of ratings, can actually reduce the quality of information. Working with colleague Professor Amrit Tiwana, and collaborators Professor Kirk Plangger of King’s Business School in London and Professor Leyland Pitt of Simon Fraser University in Vancouver, Watson has focused on how publicly available online ratings can compact around a high-scoring mean. This reduced variation hinders the ability to make accurate decisions. The researchers have given this phenomenon a name – information compression – and they provide some highly relevant examples of its negative social consequences. They theorise on its causes and effects.
The fallout from information compression
Abrupt changes to an information system’s state are excellent research opportunities. Generally, system changes are incremental, ensuring a state of equilibrium, but a significant, sometimes unexpected, change occurs every now and then, producing what’s called a punctuated equilibrium (Figure 2). A case in point for Watson and his team is the introduction of the online reviewing of physicians in the USA. Before online reviews, patients rated physicians through a ‘judgement network’ of neighbours and friends via word of mouth. The researchers have shown how concern about the backlash of negative online reviews has encouraged physicians to prescribe interventions their patients believed they needed, such as antibiotics – even for non-bacterial infections – or powerful painkillers. The unintended consequences have been an increased risk of antibiotic resistance and the proliferation of opioid abuse in the USA.
Concern about the backlash of negative reviews encourages physicians to prescribe interventions their patients believe they need.
Using this example, Watson and his team’s theory of information compression looks like this: a change in the state of a judgement network, such as the introduction of online reviews of physicians, influences the expected judgement costs of an agent (the physician) concerned with preserving their reputation or income. This changes the agent’s behaviour (over-prescription), ultimately resulting in a convergence of ratings, information compression, and a negative societal outcome (Figure 3).
The nature of online rating systems exacerbates this. They often lack context – evidenced in the typical star-rating system – and are subject to multiple biases, measurement errors, and the restrictions of summarisation, such as reporting only the mean of many ratings. This outcome, known as a ‘positivity preference’, amplifies the compression around high scores. To measure the effect, the researchers have developed a compression index (CI), a value between 0 and 1: the higher the value, the greater the compression.
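The idea behind a compression index can be illustrated with a short sketch. Note that the formula below is a hypothetical stand-in for illustration only – the researchers’ published CI definition may differ – but it captures the intuition: ratings bunched tightly together score near 1, while ratings spread across the whole scale score near 0.

```python
import statistics

def compression_index(ratings, scale_min=1.0, scale_max=5.0):
    """Illustrative compression index: 1 means all ratings are identical
    (fully compressed); 0 means ratings are maximally spread across the
    scale. NOTE: a hypothetical formula for illustration, not the
    authors' published CI definition."""
    # Spread of the observed ratings.
    sd = statistics.pstdev(ratings)
    # Largest possible spread: half the ratings at each scale endpoint.
    max_sd = (scale_max - scale_min) / 2
    return 1 - sd / max_sd

# Ratings clustered around 4.5–5 stars -> CI near 1 (highly compressed).
clustered = [4.5, 4.5, 5.0, 4.5, 5.0, 4.5]
# Ratings spread across the whole scale -> CI near 0.
spread = [1.0, 5.0, 1.0, 5.0, 1.0, 5.0]
```

Under this sketch, the `clustered` ratings yield a CI above 0.8, while the evenly polarised `spread` ratings yield a CI of 0 – exactly the contrast the researchers warn about when a rating system compacts around a high mean.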
Watson and his team have shown that more isn’t necessarily better.
Another example of information compression for Watson and his team lies in the fallout from auditing firms also offering consulting services. The fear of losing a client’s more lucrative non-audit services encouraged some financial services firms to pay less attention to auditing their clients’ accounts. Instead of information emerging through responsible, ethical decisions, the result was information compression and distorted accuracy: audits became less likely to differentiate firms based on their financial viability, so the markets no longer had a clear idea of which companies were performing better than others. A substantial £10 million fine imposed in 2018 by the UK’s audit regulator, the Financial Reporting Council, on PwC (an audit and professional services firm) exposed this practice, highlighting an audit that had failed to detect a major firm’s subsequent failure.
The other costs of information compression
Watson and his team’s theory of information compression should encourage urgent and serious consideration by other researchers in information systems. The field has implicitly assumed that the proliferation of data from online ratings provides valuable research information and guides better consumer decision-making. Watson and his team have shown that the opposite could be true: more is not necessarily better. Key professions concerned with preserving their reputations are increasingly at the mercy of anonymous online ratings. Furthermore, social media can amplify judgement costs, and these costs can be more than reputational: they can also be financial, psychological, opportunity, and privacy costs.
The theory of information compression implies that justificatory reasoning – exemplified by the scientific method, where facts and data support decision-making – may no longer have a place within information systems that pander to anonymous popular sentiment, and that when such sentiment determines the cost of judgements, moral codes can be compromised.
What impact could your theory of information compression have on information systems and marketing research?
Information systems and marketing researchers using rating systems to predict behaviour by applying statistical methods or machine learning should use our CI measure before proceeding. Rating systems with a high CI have little explanatory power.
How would you like to see other researchers develop or apply your theory?
We would like to see other researchers in a variety of fields apply our theory, because information compression is widespread (eg, Putin is likely getting highly compressed information), and develop interventions that reduce it. For example, a devil’s advocate is an intervention that can reduce information compression, provided the focus is on the message and not the advocate.
Where else could we see the negative effects of information compression within information systems?
Once we alerted ourselves to the phenomenon, we started to see information compression everywhere. In universities, we see grade inflation. In electricity pricing, we see flat residential tariffs when wholesale prices can be more than ten times higher. We read of political leaders who recruit acolytes who feed their ego rather than add to their decision quality.
Where could information compression be present in other organisational settings outside of information systems?
Any situation where there is a high level of agreement about a decision that involves more than objective facts is a candidate for information compression. Senior executives who always defer to the CEO’s opinion are compressing information. This is especially a problem when a CEO develops a reputation for firing or sidelining dissenters.
What is needed to minimise information compression?
Information compression occurs when a decision-maker expects their judgement to have a high personal cost. Minimising anticipated judgement costs is critical to reducing information compression. In an organisational setting, providing an anonymous channel for communication can lower judgement costs. Rating systems that focus on objective measures (eg, the hotel check-in took x minutes), rather than broad opinions (eg, I enjoyed my stay at hotel y), are likely to reduce information compression and provide superior diagnostic data for product or service improvement.