The digital world has overtaken its offline counterpart in most respects, a major contributor to its growth being its economic efficiency. Take online commerce: it promotes transparency, ensures information symmetry, and eases doing business. It lowers entry barriers, allows unhindered expansion, and reduces the concentration of power in the hands of any one player. It draws the virtual market towards perfect competition.
How genuine this competition is, however, is questionable. The rapid growth of online platforms is driven by Big Data, Big Data Analytics and self-learning algorithms. Big Data refers to large datasets analysed computationally to reveal patterns, trends and associations, especially those relating to human behaviour. Data Analytics is the ability to design algorithms that can access and analyse vast amounts of information.
These advancements could give some competitors a 'data advantage'. The use of pricing algorithms and AI, and their effect on virtual competition, is thus a policy concern. This is demonstrated by the Google case, in which the Competition Commission of India held that entities with access to data and the ability to use it to their advantage have a 'special responsibility' to ensure genuine competition in the market. While the Google case was one of 'search bias' and distortion of search rankings through manipulation of a self-learning algorithm, there are also cases of collusion in which the element of human intervention is elusive.
One such model of collusion is the hub-and-spoke. In the traditional model there is a hub, the main player(s), which coordinate(s) the activities of the other players (the 'spokes') collectively or individually. In the digital world, most market participants outsource pricing to a third party. This often leads to the use of the same pricing algorithm by businesses across an industry. Thus, a de facto algorithmic hub-and-spoke model arises, the hub being the provider of algorithmic pricing. Consequently, prices tend to align and rise.
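The mechanism can be seen in a toy simulation. This is purely illustrative: the pricing rule, the markup, the ceiling and all the numbers below are assumptions, not any real provider's algorithm. The point is only that when every 'spoke' licenses the identical rule from one 'hub', prices align and drift upward without any seller agreeing with another.

```python
# Illustrative sketch only: a hypothetical third-party ('hub') pricing rule
# licensed to every seller ('spoke'). It nudges price slightly above the
# observed market maximum, subject to an assumed ceiling.
def shared_pricing_rule(observed_prices, ceiling=120.0):
    return min(ceiling, max(observed_prices) * 1.01)

prices = [60.0, 75.0, 90.0]  # sellers begin with independent prices
for _ in range(20):
    # every seller runs the identical rule on the same observed inputs
    prices = [shared_pricing_rule(prices) for _ in prices]

# After a few rounds all prices are equal, and higher than any starting
# price: alignment and a rise, with no agreement among the sellers.
print(prices)
```

Nothing here requires the sellers to communicate; the coordination is supplied entirely by the shared code, which is why the provider of the algorithm plays the role of the traditional hub.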
Another deterrent to virtual competition is 'tacit collusion'. Often, pricing algorithms unilaterally operated by firms reach a common understanding based on common responses to market dynamics. While this is not the result of an express agreement, each player is aware that the others use such pricing algorithms. They thus facilitate tacit collusion, or conscious parallelism. Such cases can be prosecuted only on the premise of the firms' anticompetitive intent.
This issue is exacerbated by the use of AI. The enhanced ability of computers to process huge amounts of data in real time gives players a God-like view of the market. This enables them to act in consonance to deter competition without any express agreement.
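A minimal sketch of conscious parallelism, again with invented numbers and an invented rule: two firms run separately written algorithms that each follow the same common-sense logic, match a cheaper rival instantly and otherwise probe a small price increase, capped at an assumed ceiling. Because undercutting is matched at once, it never pays, and prices ratchet upward in lockstep with no agreement at all.

```python
# Illustrative only: each firm independently coded this same common-sense
# rule. Match a cheaper rival; otherwise test a 2% increase, up to an
# assumed ceiling. No communication or agreement between the firms.
def price_rule(own, rival, ceiling=150.0):
    return rival if rival < own else min(own * 1.02, ceiling)

a, b = 100.0, 95.0  # firms start at different prices
for _ in range(30):
    a = price_rule(a, b)  # firm A reacts to B's current price
    b = price_rule(b, a)  # firm B reacts to A's new price

# Both converge on the ceiling price: a supra-competitive outcome reached
# purely through each algorithm's unilateral responses to the other.
print(a, b)
```

The rule itself is unobjectionable in isolation; the anticompetitive outcome emerges only from the interaction of the two algorithms, which is precisely what makes intent so hard to locate.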
These developments beg a question: Can the use of similar algorithms to distort competition be penalised under competition law without any express illegal agreement?
The main challenge before competition authorities is to bring under scrutiny products of algorithm developers that unilaterally support tacit collusion. Competition agencies lack enforcement tools to do so. Such cases might be prosecuted under the banner of ‘unfair trade practice’ or on grounds of ‘anticompetitive intent’.
In the case of AI, the 'human' element is isolated from the decision-making algorithms. With no express anticompetitive agreement and no human interference, nobody can be held liable. An adverse impact on 'consumer welfare' seems an inevitable fallout of AI.
In cases of tacit collusion in digital markets, if market participants’ algorithms can attain a God’s view, then enforcers must consider the possibility of tacit collusion beyond price and highly concentrated industries. Competition authorities must find a way out lest they create the perception that large online platforms are above the law.
Are existing antitrust laws applicable to the current challenges in virtual competition?
In some cases, despite a theory of harm being in place, it is difficult for antitrust agencies to establish a violation. For instance, the hub-and-spoke scenario could be brought under Section 3 of the Competition Act, 2002. However, where AI plays a role, the instances and participants of collusion are difficult to identify. It is also tough to establish clear market power in algorithmic pricing.
A framework for healthy virtual competition, which promotes competition, advances consumer welfare and safeguards the privacy of the individual, is needed. A specific legislation on privacy to give individuals more power over their personal data can go a long way in this regard.
By Nidhi Singh & Shivani Swami. Singh is co-founder of BlackPearl Chambers (Advocates & Solicitors). Swami is a law student at National Law Institute University, Bhopal.